Laptop purchasing is an important but overwhelming process. The plethora of options and varying sources of information can be paralyzing. This project focused on improving the experience of buying a laptop in store at Best Buy retail locations.
Other Team Members
I was on a team with three other Master’s students. We had a mix of educational backgrounds in computer science, psychology, and industrial design. Our diverse expertise was beneficial for creating solutions and conducting research.
– Determine specific problems people have while purchasing a laptop
– Explore external influences on the laptop buying process
– Delineate possible ways to improve the in-store laptop shopping experience
– Evaluate a prototype designed to make Best Buy laptop purchasing simpler
– User researcher
Researching the Context
We explored the different ways that people buy laptops and the current typical purchasing channels. To narrow the scope of the project, we chose to focus on the in-store Best Buy environment. After making that decision, we used a mix of interviews, observations, and task analyses to understand the existing purchasing flow. It was clear from this initial research that customers often struggle to make informed laptop purchasing decisions.
Researching User Needs
To understand the pain points people experience while purchasing a laptop in store at Best Buy locations, we conducted a survey, ran multiple semi-structured interviews, and used topic modeling to analyze online review sentiment. We focused on the steps consumers take while purchasing laptops and the specific issues encountered at each stage. By analyzing the results of that research, we identified several critical user needs, including:
- Physical interaction with laptops
- A way to understand how specifications alter laptop performance
- Methods for limiting choices based on personal constraints (such as budget)
- A way to compare diverse options quickly
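As a toy illustration of the review analysis mentioned above, the sketch below tallies keyword hits per theme to surface recurring pain points. Our actual analysis used a proper topic model on real review text; the themes, keyword lists, and reviews here are invented for illustration only.

```python
from collections import Counter
import re

# Toy stand-in for the topic-modeling step: count how many reviews
# touch each pain-point theme. (Themes and keywords are illustrative,
# not the study's actual model output.)
THEME_KEYWORDS = {
    "performance": {"slow", "fast", "lag", "speed", "ram"},
    "budget": {"price", "expensive", "cheap", "afford"},
    "comparison": {"compare", "options", "choose", "decide"},
}

def theme_counts(reviews):
    """Return a Counter of how many reviews mention each theme."""
    counts = Counter()
    for review in reviews:
        words = set(re.findall(r"[a-z']+", review.lower()))
        for theme, keywords in THEME_KEYWORDS.items():
            if words & keywords:
                counts[theme] += 1
    return counts

reviews = [
    "Too many options, hard to decide which laptop to choose.",
    "The price was more than I could afford.",
    "Wish I could compare speed between models.",
]
print(theme_counts(reviews).most_common())
```

A real topic model (e.g., LDA) discovers themes from the data rather than starting from a hand-written keyword list, but the output serves the same purpose: ranking the issues customers mention most often.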
With the previously determined user needs in mind, we created diverse solution designs. First we outlined three different concepts as storyboards. Then we recruited participants who matched our primary user demographics and presented the storyboards by orally explaining each part of the designs. Participants gave feedback about each design. Below are example screens from each of the concepts:
The participants’ feedback was compiled and organized with an affinity diagram. That activity allowed us to see which parts of each design were valuable for the participants. We used that input in conjunction with the previously determined user needs to delineate which features had to be included in the final design.
The research indicated that the final design needed to include the following features:
– Informational dialogues
– Backtracking abilities
– Direct specification comparison screens
– A way to compare task performance
– Limited jargon and computer terms
– An installation-wizard-style, step-based interface
With those essential features and the user needs in mind, we worked together to create a series of wireframes that integrated the best parts of each concept. We then conducted a series of feedback sessions with the wireframes. At this stage the wireframe screens were not interactive so the sessions were focused on gathering participants’ impressions of the system’s information architecture, features, and overall layout.
During each session we recorded participants’ comments, behaviors, and answers to scripted questions. Those responses were compiled and used to determine ways to refine the wireframes.
For example, numerous participants were confused by the nomenclature used on the form factor page. They also wanted the ability to read descriptive information about each form factor option. The red labels on the two images below denote places in the wireframes that required revisions based on the user feedback.
After the wireframes were revised to address the participants’ concerns, we created an interactive prototype using the improved wireframe images.
Interactive Prototype Design
The final interactive prototype was made using a combination of Sketch and MarvelApp. We used the feedback from the last stage of the project to update the wireframe designs in Sketch and used MarvelApp to add interaction capabilities. The completed interactive prototype had limitations, but it supported the main functions of the system. The limitations were taken into consideration during the evaluation and analysis stages.
There was also a part of our system that was not included in the final prototype. In an idealized full use case of our solution, a consumer would interact with software on a tablet and then, at the appropriate time, bring the tablet to a dedicated area in a Best Buy store that has digital display monitors. The user would wirelessly connect the tablet to the monitors and be able to continue the process of determining which laptop to buy (See Figure 2). Since this part of the system was not feasible to build, we explained it to research participants using pictures and descriptions.
The software part of the solution was designed to guide users through a linear flow of questions with additional tools that would assist them while making a laptop purchase decision. The prototype contained all the parts of the tablet-based app’s flow.
The application supported three main functions:
1) Collecting user preferences and usage type
2) Comparing laptops
3) Completing a virtual task comparison
Users accessed the first two functions of the interface by making selections on separate screens. Most of the interactions involved answering questions that caused the system to narrow laptop suggestions. There were also interface elements for navigating between pages, calling a salesperson, skipping a section, and restarting the whole process (See Figure 3 for images of the prototype).
Figure 3. Prototype Screens
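The wizard-style flow with navigation, skip, and restart controls can be sketched as a small state machine. This is a minimal sketch of the interaction model only; the step names are illustrative, not the prototype's actual screens.

```python
# Minimal sketch of the wizard-style, step-based flow described above,
# with back, skip, and restart controls. Step names are illustrative.
class WizardFlow:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current(self):
        return self.steps[self.index]

    def next(self):
        # Advance one step; stay on the last step at the end of the flow.
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

    def back(self):
        # Backtracking: return to the previous step.
        if self.index > 0:
            self.index -= 1
        return self.current

    def skip(self):
        # Skipping a section moves the user forward without an answer.
        return self.next()

    def restart(self):
        # Restart the whole process from the first step.
        self.index = 0
        return self.current

flow = WizardFlow(["usage type", "preferences", "comparison", "virtual task"])
flow.next()
flow.next()
flow.back()
print(flow.current)  # → "preferences"
```

Modeling the flow this way keeps each screen independent while centralizing the navigation rules, which is what makes features like backtracking and restarting cheap to support.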
The most novel part of the prototype allowed users to select and control a virtual comparison feature. I suggested this concept during the ideation phase and helped implement it in the prototype. The virtual comparison would allow a user to select a task from a drop-down menu and then see how two selected laptops would run that task in real time. This would provide users with the ability to compare systems based on their individual needs without requiring technical knowledge.
Although the physical display portion of the virtual comparison feature was not implemented, we included an image of the envisioned display at the end of the prototype for explanation purposes (See Figure 4).
Interactive Prototype Design Evaluation
We used moderated, in-person, task-based usability tests and expert heuristic reviews to evaluate the prototype.
During the moderated user testing sessions we had participants complete specific tasks, answer follow-up questions, and respond to a few concluding questions that addressed overall thoughts about the prototype. The participants were told to complete each task, answer two associated scaled questions (measuring difficulty and confidence of completion), and then move on to the next task. We also instructed the participants to explain what they were thinking and expecting as they worked. This strategy allowed us to gather deeper qualitative feedback in addition to Likert scale data.
We recruited experts to conduct heuristic evaluations to identify basic usability issues with the prototype. This allowed us to get feedback from experts on all aspects of the design, even the elements that were not fully functional. We chose to use a selection of Jakob Nielsen’s 10 general principles of interaction design because they are an established set of heuristics that our experts were knowledgeable about.
For both testing methodologies we created printed handouts to assist the participants. The moderated usability test worksheet included a table listing tasks along with difficulty and confidence level Likert scales. The heuristic review worksheet included an instruction page and tables listing heuristic names, associated descriptions, and an area for feedback.
We gathered actionable design insights from the user tests and heuristic reviews. For example, Figure 5 displays the average confidence rating for each task. Confidence of task completion was rated on a scale from 1, meaning not at all confident, to 5, meaning very confident. Based on this information we determined which tasks caused participants to doubt their actions.
A similar chart was made for the task difficulty ratings with 1 representing very difficult and 5 meaning very easy (See Figure 6).
Figure 6. Average Difficulty Rating by Task
Qualitative feedback was correlated with the difficulty and confidence rating data to uncover the tasks participants struggled with. We also calculated the average rating for each heuristic, ranging from 1, very poor, to 5, excellent, as seen in Figure 7. Qualitative feedback from the expert reviews was also compiled.
Figure 7. Average Rating by Heuristic
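The aggregation behind Figures 5–7 is a per-item average of 1–5 Likert responses. The sketch below shows that calculation; the task names and ratings are illustrative placeholders, not the study's actual data.

```python
from statistics import mean

# Sketch of the ratings aggregation behind Figures 5-7: average each
# task's (or heuristic's) 1-5 Likert responses. Data is illustrative.
def average_ratings(responses):
    """responses: {item name: list of 1-5 ratings} -> {item name: mean}"""
    return {item: round(mean(scores), 2) for item, scores in responses.items()}

confidence = {
    "Select usage type": [5, 4, 5, 4],
    "Run virtual comparison": [3, 2, 4, 3],
}
print(average_ratings(confidence))
```

Plotting these per-task averages side by side is what let us spot which steps of the flow made participants doubt their actions.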
Finally, we analyzed all the qualitative and quantitative data to determine how the prototype could be improved. Some findings were specific to particular elements on the screens, while others were system-wide. Below is a selected list of important and actionable feedback:
– The application was not very supportive of non-linear navigation
– The share function was not prominent enough
– Participants were unsure about the process of connecting and running the virtual comparison, but liked the concept
– Time estimates and step progress indicators would have been useful
– Certain terms, such as ‘Form’, were unfamiliar
– Being able to see previous responses would be beneficial
– Accessibility could be improved with less reliance on images, touch input, and fine motor skills
– Clean and easy to navigate, but clicking on the prototype had limitations
– Overall, very clear with linear progress and few choices
Although the project is complete, the actionable feedback from the usability tests and heuristic reviews could still be implemented. Additionally, prototyping a physical display unit that represents the in-store experience would be useful for further testing the virtual comparison feature.
Overall, this research and design project was a positive experience, and I learned many tools and strategies that I will apply to my future work.