Accurate and reliable mobile platforms for evaluating cognitive impairment in clinical and non-clinical settings are lacking. A tool that solves this problem could be used in many environments, but this project focused on a lab setting.
Team Members:
The team members for this project included students, a professor, and an external partner. I joined the project during its preliminary stages and was involved in all phases of the design process.
Project Goals:
– Determine testable properties indicative of cognitive impairment
– Explore established tests that measure features of cognitive impairment
– Implement a mobile testing platform to administer cognitive impairment tests
1. Needs Analysis
The first step in our process was a needs analysis to understand the specific functions the primary users desired. During this stage we interviewed the primary stakeholders and recorded and ranked the essential requirements. The analysis made clear that the future solution had to be mobile, handle a variety of cognitive tests, allow data exports, and support psychology research testing methodology. Once these essential needs were established, we wanted to know more about the exact tasks the system should support.
2. General Task Analysis
Although the basic functions of the system were determined, we needed more detail before creating any designs. I worked with the primary stakeholders to break down each need into specific tasks. For example, the system needed to fit into a psychology testing procedure, which meant it would have to gather demographic information and other data. The task analysis revealed that the system had to allow custom test orderings, demographic collection, data exporting, and the administration of multiple cognitive tests. We compiled the specific tasks from this stage of the design process, but also wanted a way to communicate the overall use case of the system. To communicate that vision visually, we created a storyboard.
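As a rough illustration of the data-exporting task identified above, the sketch below serializes completed testing sessions to CSV using Python's standard library. The record shape and field names (`participant_id`, `age`, `test`, `score`) are assumptions for illustration, not the fields the actual system used:

```python
import csv
import io

def export_sessions(sessions):
    """Serialize completed sessions (demographics plus per-test results)
    to CSV, one row per administered test. Field names are illustrative."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["participant_id", "age", "test", "score"])
    writer.writeheader()
    for session in sessions:
        for test, score in session["results"].items():
            writer.writerow({
                "participant_id": session["participant_id"],
                "age": session["age"],
                "test": test,
                "score": score,
            })
    return buf.getvalue()
```

A researcher could then open the resulting file directly in a statistics package, which was the motivation behind the export requirement.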
3. Storyboard
Creating a storyboard forced us to delineate the vision for the system and facilitated more discussion about the best user flow. Based on feedback from the primary stakeholders, we determined an ideal user flow and were better able to visualize how all the task requirements could be met. This process revealed additional requirements, such as robust randomization options, that were not previously mentioned. Although the storyboard was a high-level overview of the system, it furthered the design and led to the wireframing stage of our process.
4. Wireframing
After establishing the general flow of the system, we created a wireframe to ensure that the system aligned with the stakeholders’ visual and functional requirements. During this process we quickly realized that designing for randomized test ordering would be difficult.
We decided to organize the tests into trials and blocks. A trial encompassed all the tests, in the specific order they would be presented to one testing participant. The system had to let a researcher place the tests in a fixed order, randomize them entirely, or randomize certain groups of tests while administering others in a consistent order. This need led to the concept of a test block: a named group of tests whose internal order could be either customized or randomized. To help visualize this concept, see Figure 1 below. Supporting blocks would meet the system requirements, but implementing the concept in an interface was the next challenge we faced.
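The trial/block/test relationship described above can be sketched as a small data model. This is a minimal illustration in Python, not the project's implementation; the `Block` and `Trial` names and the `shuffle` flags are assumptions standing in for the ordering options the system had to support:

```python
import random
from dataclasses import dataclass

@dataclass
class Block:
    """A named group of tests; `shuffle` randomizes order within the block."""
    name: str
    tests: list
    shuffle: bool = False

@dataclass
class Trial:
    """An ordered sequence of blocks; `shuffle_blocks` randomizes block order."""
    blocks: list
    shuffle_blocks: bool = False

    def administered_order(self, rng=random):
        """Flatten the trial into the sequence of tests one participant sees."""
        blocks = list(self.blocks)
        if self.shuffle_blocks:
            rng.shuffle(blocks)
        order = []
        for block in blocks:
            tests = list(block.tests)
            if block.shuffle:
                rng.shuffle(tests)
            order.extend(tests)
        return order

# Example: a fixed-order block of memory tests followed by a shuffled
# block of attention tests (test names are illustrative)
trial = Trial(blocks=[
    Block("memory", ["digit span", "word recall"]),
    Block("attention", ["stroop", "trail making"], shuffle=True),
])
```

This separation of block-level and test-level shuffling is what allows some test groups to be randomized while others run in a consistent order.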
With that concept in mind, we designed initial test-creation screens that supported the trial, block, and test relationship. An early mockup listed the different test options on the left of the screen. To the right of each test were two text input fields where users could enter a block number and the position at which the test should appear within the block. If a user left the ‘order’ field blank, that test would be administered at a random position in its designated block (See Figure 2).
Although this concept provided the required functionality, it was unappealing and unintuitive to use. Through several more cycles of design, feedback, and revision, we arrived at a solution worth implementing as an interactive prototype.
5. Interactive Prototype
The interactive prototype allowed a user to create and run a trial, and view past testing data.
A user could design a trial by creating blocks of tests. The user would first select tests from a side menu to add to a test block. Multiple tests could be included in one block and the block could be given a name (See images in Figure 3). Each test and the block could be presented in a random or predefined order. This feature was controlled with the lock and shuffle icons displayed on the right of each block and test.
Figure 3. Test Creation Screens
Once the user completed designing the trial, he or she could review it and would have the option to immediately run or save it. In an ideal use case, the researcher would create a trial using the interface on an iPad and then hand the iPad to a research subject who would complete the tests. See Figure 4 for screenshots of reviewing a trial, demographic inputs, and a test with instructions.
Figure 4. Testing Screens
The interactive prototype also included instructional help, saved-trial, and results-review pages (See Figure 5).
Figure 5. Additional Features
6. Next Steps
The interactive prototype included all the required functions of the system, but was not tested with users. Future directions for this project include running usability tests, refining the prototype, and fully implementing the design with code.
I was able to work on every part of creating a digital interface during this project, from gathering user requirements to creating and refining a design to preliminary coding. I look forward to using the skills I gained from this project in future work.