overview.

ABOUT THE PROJECT

Identifying and ameliorating tasking challenges for in-store associates to improve both the customer experience and the associate work experience

CLIENT

The Home Depot Enterprise UX Team

TIMELINE

4 months

THE SOLUTION

Ask Homer, an app that connects the right store associates, at the right time, for rapid assistance with any query

 

MY ROLE

UX Researcher

  1. Planned, selected, and conducted appropriate research methods
     

  2. Survey & Interview design
     

  3. Interpreted results into design parameters and recommendations

Product Manager

  1. Created meeting agendas & timelines
     

  2. Team management & organization
     

  3. Identified the team's strong suits and delegated tasks

The Historian

  1. Documentation: Defined file naming conventions, folder management
     

  2. Progress updates
     

  3. Note-taker for meetings and assessments

 

THE CHALLENGE

Create an innovative design built on a foundation of user-centered, 

evidence-based design to address tasking of store associates

PROBLEM SPACE

With about 80-300 associates, over 35K products, and about 20 departments in each Home Depot store, store associates juggle an overwhelming number of tasks, which vary:

  • day-to-day

  • department-to-department

  • store-to-store

In addition to tasking, associates are required to always put customers first.

Therefore, associates must juggle a trade-off between:

  1. Helping customers to create an ideal customer experience

  2. Managing and completing various tasks for efficient store/aisle upkeep

All while knowing what each of these two components requires in its respective context.

THE IMPACT

Simplifying ways to delegate, explain, and track tasks may make Home Depot store associates more efficient and therefore save and optimize $$$ for The Home Depot in the long run.

Impact map: Reduce Cognitive Load → Reduce Stressful Work Environment → Improve Associate Work Experience → Improve Employee Retention → Reduce Company Costs

Happy, Efficient Store Associates → Increased Task Completion → Cleaner Bays, Aisles, Stores → Better Customer Experience → Increase Company Revenue

PROJECT GOALS & MOTIVATIONS

The impact of this project should address the motivations behind tackling the issues of tasking for store associates:

  1. Simplify, improve, and innovate processes of associate tasking
     

  2. Create better efficiency in task management 

  3. Ensure higher quality task completion
     

  4. Create better practices & standardization to improve performance
     

  5. Create more accountability and task tracking
     

  6. Improve associate satisfaction

SUCCESS METRICS CHECKLIST

 

With the problem space, impact, potential contributions, and project motivations identified, I defined what success entails within our project scope, to keep in mind for our end design:

Provide an outside perspective and explore potential opportunities in project motivations with user research

Addresses user wants & needs by targeting pain points and challenges that store associates face day-to-day: task delegation, management, and execution

A system that is designed for use at a large scale: used across several different departments of The Home Depot

THE PROCESS

Stakeholder Interviews & Competitive Analysis → Identifying User Pain Points → Brainstorming & Design Thinking → Wireframing & Prototyping → User Evaluations

 

research.

OVERVIEW

As the primary UX Researcher on the project, I devised the research action plan: selecting appropriate methods to address our research questions and investigate pain points, defining who should be included in our sample, and writing an interview script. In addition, I liaised with our corporate point of contact for scheduling, created a timeline of who, what, when, and where, and recorded/took notes during interviews for documentation.

I believed the first step was to clearly identify the problem space, our goals, and the motivations behind the project. From stakeholder interviews, we identified the general foundation of the problem space, the impact the project would make, success metrics, and project motivations (see the challenge section). With this information, we defined a research question, which became the foundation for how I planned our research process to investigate user pain points. Once this information was obtained and we better understood the environment, process, and context, I created design implications and parameters for the design.

THE RESEARCH QUESTION

How can we delegate, track, and ensure tasks are efficiently and effectively completed on a daily basis to create ideal customer experiences?

 

To provide any solution or start ideating on opportunities for design, our next step was to conduct user research to understand the problem space of tasking, defined by our research question.

TARGET AUDIENCE & PARTICIPANTS

Our sample included associates with variable ages, experiences, skill levels, and different departments to more accurately represent all types of associates working in The Home Depot Stores.

 

These are the main users of our end design who are in charge of day-to-day activities.

THE NUMBERS

2  Home Depot Stores (Kennesaw & Midtown)

6  Store Visits 

3  Stakeholder Interviews

3  Interviews 

8  Contextual Inquiries

2  Competitive Analyses at Lowe's & Target

1. RESEARCHING THE ENVIRONMENT: First look into the world of Home Depot

2. RESEARCHING THE PROCESS: Inside scoop with the Chain of Command

3. RESEARCHING THE CONTEXT: Be the apprentice to the frontline

Naturalistic Observations

While waiting for approval to get into the field with store associates, the research team wanted to get familiar with The Home Depot environment before interviewing store associates to:

  • develop a more intuitive understanding of the work culture

  • construct more insightful questions to ask store associates during interviews

  • compare what store associates actually do vs. what corporate thinks they do

Semi-Structured Interviews

We interviewed Store Managers and Product Managers to build rapport with users and capture attitudes and opinions that questionnaires or observations cannot. Our goal was to understand, at a higher level:

 

  • the process of how tasks are managed, delegated, tracked, and selected

  • feedback loops of tasking

  • communication

  • general attitudes about systems and tools that are used

Contextual Inquiries

This method also let us build rapport while asking questions as associates completed tasks in their full social context, yielding rich data that interviews lack and helping us understand, more granularly:

  • accuracy and complexity of tasks and behaviors

  • variability and differences from associate to associate in different departments

  • the behaviors and motivations surrounding these tasking processes

  • associate interactions and potential ‘distractions’ in real-time

ARTIFACTS &
DOCUMENT MINING

Here are a few artifacts and photos that I snagged while conducting contextual inquiries with store associates.

 

Unfortunately, due to liability/confidentiality and lack of document logging to track how tasks are completed, we were unsuccessful in gathering further information.

INTERVIEW SCRIPT

Based on our problem space and overall research question, the team created more granular research questions. I identified the best research methods to use and devised an interview script. 

High Level Questions:

  • How are tasks decided, delegated, and selected?

  • What methods/tools are used to track associate activities?

  • What tool(s) do associates use when completing tasks?

  • What are major tasks that associates do on a daily basis?

  • How do associates complete tasks?

  • What circumstances require associates, supervisors, and managers to communicate?

  • How does communication take place?

  • What are the general attitudes/opinions for pain & pleasure points​?

During interviews, I used these questions as a guide but had the freedom to ask more in-depth questions. Though this method is less structured and yields less consistent data, it provided rich information from associates about the problem space. After each interview and contextual inquiry, I revised the script, adding questions to clarify information I had not been able to fully explore because of time constraints. After each session, I debriefed with the team on how to move forward in the next sessions.

COMPETITIVE ANALYSIS

 

Our team conducted a competitive analysis to compare The Home Depot with similar competitor stores and understand what other companies do in terms of tasking. With this information, we analyzed strengths, weaknesses, similarities, and differences between companies.

 

AFFINITY MAPPING

USER PAIN POINTS

  1. DESIGN IMPLICATIONS

  2. PERSONAS

  3. EMPATHY MAPS

OVERVIEW

 

 

 

 

Handwritten notes and audio recordings from interviews, contextual inquiries, and competitive analyses were transcribed. The team created a digital affinity map to categorize, organize, and group information from each of the user research methods, both to aid the brainstorming phase of ideation and to get the team on the same page.

From this information, I wrote a report, Details on Context of Store Associates, that included an overview of tasks and responsibilities with flow diagrams, requirements to perform tasks, work environment, training, communication, and equipment and tools. The team used it as a reference as we created design implications, empathy maps, and personas.

 

 

 

 

RESEARCH INSIGHTS

A total of 19 pain points were uncovered in our user research. Each pain point describes an aspect of the system or of user behavior that fails to support a balanced need of associate, manager, corporate, or customer stakeholders.

analysis.


SETTING THE STAGE

Based on what we learned and synthesized through affinity mapping, I created a design implication table to ensure that our design solution accounts for the perceptual, cognitive, motivational, physical-environment, and social attributes of our variety of associates, considering their skills, training/domain knowledge, and environment.

 

As a team, we created personas and empathy maps to represent different groups of users, each of us advocating for the users we individually talked to. This method is a fairly quick, cost-efficient way to synthesize data and better understand the motivations, behaviors, and needs of our user types to aid design decisions.

 

Using these three methods, we put ourselves in our users' shoes to begin the next phase of brainstorming, design thinking, and ideating.

DESIGN IMPLICATION TABLE

PERSONAS AND EMPATHY MAPS 
 

 

design.

OVERVIEW

Our goal was to translate user needs and design criteria into sketched concepts, wireframes, and a prototype description for evaluation.

 

To create and improve our system, we obtained feedback from:

  1. CORPORATE STAKEHOLDERS to compare, critique and narrow down system-level concepts​​

  2. USERS to understand clarity, benefits, feasibility, and preference in order to implement improvements to the prototype

Our team used this feedback to uncover weaknesses, blind spots, understand critiques, and identify recommendations for improvements before the final prototype.

 

BRAINSTORMING & DESIGN THINKING

From the 19 pain points, we chose 5 to tackle by systematically sorting them against:

  1. feasibility vs. opportunities for innovation

  2. client preference, project motivation, and success metrics

  3. strong evidence from user research

My goal was to be the liaison (bread & butter) between company goals and what the people (users) want by designing solutions that target multiple pain-points to reduce current efforts, increase pleasure, and optimize cost. It's a win-win for everyone!

RAPID IDEATION 

We rapidly generated ~50 high-level design ideas within parameters created by the pain-points.

Three concept amalgamations were then selected by the team for storyboarding and stakeholder feedback.

OVERVIEW OF THREE CONCEPTS FOR FEEDBACK

Concept 1 · Concept 2 · Concept 3 → Feedback Session #1 → Wireframe → Feedback Session #2 → Prototype (spoiler alert!)
CONCEPT #1: TASK CAPTURE

 

Task Capture is a high-level interface concept meant to give associates and managers a platform to capture, describe, and post tasks in an efficient and systematic fashion. Tasks are contained in information cards that hold the task’s urgency level, verbal description, tagged departments, tagged experts needed, and other supporting references such as photos. Cards are organized and displayed in scrollable feeds that can be filtered by recency, location, and assignment.

STORYBOARD

Task Delegation is a Non-Uniform Process:

Managers delegate tasks using a variety of methods, including pen and paper, text messages, and others. Managers use a mix of unguided and guided tasking systems; both are non-uniform in execution.

USER PAIN POINTS TO SUPPORT DESIGN

Task Identification is a Non-Uniform Process:

Looking for bay issues is a manual process carried out by managers and supervisors "walking the floor" or by associates noticing things off-hand. This can leave certain issues unnoticed and unaddressed because associates are busy multitasking between customers and other bay-related tasks.

Task Recording and Planning Leads to Cognitive Overload:

 

Communication of tasks is mostly verbal, and associates have to figure out their own ways of remembering and planning what to do. This can lead to cognitive overload and missed tasks.

Task Follow-up is rare:

 

Managers want to issue tasks to their associates and know the status of the task completion. Current systems and personnel habits do not allow for this.

CONCEPT #2: EXPERT DIRECTORY

Expert Directory is a concept created to bridge the experience gap between junior associates and senior associates/department experts. Junior associates send inquiries to an expert directory through voice or text, and the system responds by connecting them to nearby associates with the expertise needed to answer their question. The Expert Directory is built upon an associate profile system, where associates are tagged with expertise in a department, piece of equipment, or certification. The Directory monitors the expertise currently available through an associate attendance system, cross-references inquiries for expertise, and connects the right people to where help is needed.

Expert Associates are Hard to Find:

Managers and associates sometimes need experts to man departments or help with urgent tasks, but don't know who has the expertise or who is available. This especially applies to new associates, and it risks lost sales as customers get frustrated.

Lack of Cross Training:

Departments swarmed with customers needing assistance are left understaffed because only a small number of associates are trained to help in that department - other associates couldn't help even if they wanted to.

New Associates Need More Ways to Learn:

 

Associates need to learn about their department on the job, where products are, and about the products themselves.

Learning Associates are Very Dependent on Supervisors or Specialists:

 

New associates or associates new to a department are reliant on their supervisor to get tasks and instructions. When they are not available though, it can be challenging.

STORYBOARD

USER PAIN POINTS TO SUPPORT DESIGN

STORYBOARD

CONCEPT #3:

DEPARTMENT WAYPOINT

Department Waypoint is a concept that attempts to eliminate the need for customers and associates to search for each other within the “maze” of Home Depot’s warehouse-like stores, with their endless aisles and bays. A Waypoint is an informational kiosk that both customers and associates can use. Waypoints are intended to be installed within the major departments of Home Depot.

USER PAIN POINTS

THAT SUPPORT DESIGN

Unmanned Departments

Frustrate Customers:

 

Unmanned departments are a critical point of failure associated with customer experience. Associates and managers are sometimes unaware of department staffing gaps.

Expert Associates are Hard to Find:

Managers and associates sometimes need experts to man departments or help with urgent tasks, but don't know who has the expertise or who is available. This especially applies to new associates, and it risks lost sales as customers get frustrated.

FEEDBACK SESSIONS

A report with more detail of the feedback sessions can be found here.

FEEDBACK SESSION #1

Compare, critique, and narrow down the proposed solutions with the most potential impact in terms of utility and feasibility for store associates by drawing on experts

OVERVIEW

PARTICIPANTS

Corporate Stakeholders: 

  • UX Designers

  • Product Manager

  • Store Operations Manager with in-store experience

The selected concept was then fleshed out in more detail through narrative walkthroughs and wireframing based on the feedback

NEXT STEPS

METHODS

Questionnaire

To gather and gauge, in a confidential manner that reduces response bias, quantified ratings of qualitative opinions against evaluation criteria such as effectiveness, utility, feasibility, strengths, weaknesses, and opportunities for analysis

Question Types

  • Likert Scale

  • Open-ended questions

Note: Two forms were provided - electronic or paper

Interview to Focus Group

To easily explain system concepts by answering participants' questions in real time, to ask participants in-depth questions and have them elaborate their answers in detail, and to help the researchers gauge non-verbal responses. This method compensates for the qualitative information that the questionnaire method lacked.
 

CHANGE OF PLANS: When we arrived at the Corporate Office, we were double-booked with another team and half the participants left. Therefore, we compromised and adjusted our plan to conduct two small focus groups.

PROCEDURE

1. The Session Facilitator explained the concepts one by one

2. As the Session Moderator, I asked participants to complete the questionnaire on their own after each concept

3. Then, I verbally asked participants open-ended questions so they could provide more contextual feedback and information about their opinions of the system

  • Note: Participants could record answers on their forms for more confidentiality if they felt uncomfortable discussing with the team, but were also able to discuss their opinions with the group

4. Notetakers took notes and recorded the session (with permission) for later interpretation


ANALYSIS & RESULTS 

 

With the quantitative data from the questionnaire, I coded the Likert scale from 1-5 and took the average answer for each question, for each concept, across the seven participants.
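That averaging step can be sketched as follows. The response data here is hypothetical and for illustration only; the criteria names and scores are invented, not our actual questionnaire results:

```python
# Average 1-5 Likert ratings per concept, per criterion, across participants.
from statistics import mean

# Hypothetical ratings: {concept: {criterion: [one 1-5 score per participant]}}
responses = {
    "Task Capture":        {"utility": [3, 4, 3, 2, 4, 3, 3], "feasibility": [4, 3, 3, 4, 3, 4, 3]},
    "Expert Directory":    {"utility": [5, 4, 5, 4, 5, 4, 5], "feasibility": [4, 4, 5, 4, 4, 5, 4]},
    "Department Waypoint": {"utility": [3, 3, 4, 2, 3, 3, 4], "feasibility": [2, 3, 3, 2, 3, 2, 3]},
}

# Mean score per concept and criterion, rounded for reporting
averages = {
    concept: {criterion: round(mean(scores), 2) for criterion, scores in criteria.items()}
    for concept, criteria in responses.items()
}

# The concept with the highest average for each criterion
for criterion in ["utility", "feasibility"]:
    best = max(averages, key=lambda c: averages[c][criterion])
    print(criterion, "->", best)
```

With data shaped like this, the concept leading on the most criteria surfaces directly from the per-criterion maxima.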

 

The results showed that Concept Two: Expert Directory had the highest average for the majority of the evaluation criteria, which corroborated the qualitative data: 7/7 participants preferred Concept Two.

Concept Two was the clear winner!

Amazing! Expert Directory can really solve major problems and avoid loss of sales. 

 

Saves unnecessary waste of time for 
associates in finding other associates.

RECOMMENDATIONS

From the qualitative focus-group data, we created an improvements-and-recommendations table addressing concerns and limitations of Concept Two. With this information, we created recommendations that could turn into actionable solutions for our wireframe before prototyping.

 

THE GRAND PIVOT

Narrowing our scope: it is a hurdle for associates to help customers while also managing various kinds of tasks. No single associate can know anything and everything about all the products and processes in the complex environment of The Home Depot.

REFRAMING THE RESEARCH QUESTION:

How can we get associates and customers expert help when and where they need it in store?

WIREFRAME ITERATIONS

Based on Feedback Session #1, I reframed our research question since we decided to pivot.

Our team developed the wireframe through three iterations, covering two different scenarios, while applying the recommendations and keeping this pivot in mind.

In each iteration, our UX Designer would walk us through his wireframes, and the rest of the team would provide verbal feedback or add sticky notes on what we believed needed to change.

TASK SCENARIOS

 

From our research, two common scenarios occur in the day-to-day Home Depot environment.

 

  1. Associates helping customers
     

  2. Associates helping other associates

 

Therefore, our team developed two scenarios to prototype that are familiar to associates.

USER INTERACTION
PATH ONE: VOICE

1. Start the app

2. Tap the mic

3. Say "Customer needs help"

4. Say "Aisle 32"

5. Tap on "Reach out"

6. Wait and press "Mark as resolved"

USER INTERACTION
PATH TWO: DIRECTORY

7. From the same screen, click on "Directory"

8. Tap "Gardening"

9. Tap "Select all"

10. Tap "Reach out"

11. Tap on "Send" to send the message

12. Tap on "Camera" to add an image

13. Wait for a response, then tap "Mark as resolved"

14. Tap the "back" icon to see all threads.

WIREFRAME & NARRATIVE WALKTHROUGH

 

OVERVIEW

The prototype is based on our second concept, “Expert Directory.” The initial concept relied on a database of associates with skills to answer questions and assist inexperienced associates when they needed help. The prototype is built using Adobe XD.

 

Wireframes will be updated based on feedback received during the second feedback session. It will be designed to be a T-shaped prototype with two user flows completely detailed out: one using the chatbot assistant and the other using the directory. The chosen paths were detailed in a click-through prototype for ease of user-testing.

FEEDBACK SESSION #2

Improve the system-level concept “Expert Directory” to uncover potential problems, issues, and recommendations with Store Managers

OVERVIEW

PARTICIPANTS

Corporate Stakeholder

NOTE: Due to time constraints and lack of response, we were unable to arrange further feedback

Implement improvements for the prototype based on recommendations from feedback sessions

NEXT STEPS

METHODS

Remote Semi-Structured Interviews

Given time constraints, remote interviews were a more convenient way to gather feedback from participants. This method still gave participants context for the information presented and gave the researchers context for participants' opinions. Participants were free to ask questions about the content.


QUESTIONS
 

What do you think of the clarity and flow of the design?

What are the key benefits offered by this system design? Why?

What do you think are the potential weaknesses or limitations of the system? Why?

Do you have any recommendations and improvements?

Any last thoughts you would like to include about the design?

PROCEDURE

I scheduled a remote feedback session with a corporate stakeholder via Google Hangouts. I sent the wireframe and narrative walkthrough to the participant via email in advance. The Session Facilitator described and elaborated the system features in detail and answered the participant's questions.

 

Then, as the Session Moderator, I verbally asked the participant questions and wrote down notes. Since it was a semi-structured interview, I was able to ask follow-up questions beyond the prepared guide.

It gives the user freedom of approach and strategy to contact others.

ANALYSIS & RESULTS

The team divided the feedback into sections: general feedback, issues, and recommendations.

 

The team created an Issues and Recommendations table by parsing the negative feedback and recommendations. Using this table, we created actionable solutions that address limitations and identified system-level and interface-level problems.

ACTIONABLE MODIFICATIONS

 

  1. ​Replace Resolve with “Mark as resolved” and “Send reminder” options

  2. Add priority levels in thread

  3. Make location a follow-up suggestion from the chatbot

  4. Create screen for “My threads”

  5. Add a call button inside the thread/profile page


RECOMMENDATIONS

Based on the Issues and Recommendations Table created from the second feedback session, the team created a list of actionable modifications to the wireframe. These will directly be implemented in the next iteration of the prototype.

 

prototype.

OVERVIEW

The team designed a solution for the major pain point of associates and customers not being able to get help on time. The idea is to create an application for the Home Depot First Phone, “Ask Homer”.

 

PROTOTYPE TECHNICAL DETAILS

The prototype was built using Adobe XD. XD was chosen for three primary reasons:

  1. Native ability for voice-based interaction (input and output)

  2. Ability to design and create a click-through prototype in the same software

  3. Ability to remotely share to test on phones as well as browsers without installation

 

Building on the two scenarios created during the wireframe session, this prototype creates a click-through experience for both paths. Since creating an open-ended voice prototype is technically very challenging, the prototype restricts voice input to a single path with constructed commands such as “Aisle 32” and “Customer needs help with cabinet plywood.” These commands need to be specified to any users of the prototype in order for them to voice-interact with the system.
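Since the restricted prototype only advances on the expected input, the voice path can be modeled as a simple scripted screen flow. This is a hypothetical sketch for illustration; the screen and action names are ours, not taken from the actual XD prototype:

```python
# The voice path as an ordered (action, next_screen) script; any unscripted
# action fails, mirroring how the click-through prototype only advances on
# the constructed commands.
VOICE_PATH = [
    ("start_app",     "home"),
    ("tap_mic",       "listening"),
    ("say_need",      "request_drafted"),   # "Customer needs help ..."
    ("say_aisle",     "location_added"),    # "Aisle 32"
    ("tap_reach_out", "request_sent"),
    ("mark_resolved", "resolved"),
]

def run_path(path, actions):
    """Replay actions against the scripted path; return the final screen."""
    screen = "launcher"
    for (expected, next_screen), action in zip(path, actions):
        if action != expected:
            raise ValueError(f"Unscripted action {action!r} on screen {screen!r}")
        screen = next_screen
    return screen

actions = [a for a, _ in VOICE_PATH]
print(run_path(VOICE_PATH, actions))  # -> resolved
```

A table like this doubles as a checklist when wiring up the click-through screens: every scripted transition must have a corresponding artboard link.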

 

There is an accessibility bypass created to simplify voice interaction if using the prototype on web browsers, or if someone doesn’t want to use voice. Tapping on the “listening” waveform graphic lets prototype users get to the next screen. Please note that this is a prototype implementation and not a design implementation.

 

The prototype is publicly viewable here.

PROTOTYPE FEATURES DRIVING DESIGN

usability testing.

 

Designers are NOT users.

OVERVIEW

 

The team conducted usability evaluations to understand the issues and problems users have with the system and to identify components that are unclear or confusing. Once we identified these issues, we recommended improvements.

PHASE ONE


With users, our goal was to uncover potential problems, identify strengths, weaknesses, pain points, and pleasure points with moderated think alouds and questionnaires.

PHASE TWO
 

With experts, our goal was to examine the UI and identify pain points, strengths, and weaknesses of the system, using Nielsen's Ten Usability Heuristics to evaluate it.

NEXT STEPS

 

With both of these phases in our Usability Evaluation, the team analyzed and synthesized the results. From the results and feedback, we created design recommendations for implementing changes and improvements to the design.

PHASE ONE METHODS

PARTICIPANTS

6 Home Depot In-Store Associates and UX Designers

LIMITATIONS

Having a few UX Designers in our sample may bias the data since they have more domain knowledge in app development. Designers are not users.

Moderated Think Aloud

Moderated usability evaluations are best for ensuring users complete tasks and continually verbalize their thoughts during the think-aloud. Think-alouds demonstrate how users interact with the system while providing their thoughts in real time. Researchers can ask more in-depth questions to understand the reasons behind users' behavior, their opinions, cognitive processes, and point of view - opinions that are more difficult to capture with only a questionnaire.

LIMITATIONS:

  • Time consuming

  • Not quantifiable - cannot combine with quantitative time measurements because people thinking aloud can hinder time performance

  • Monologue for participant sometimes unnatural to continue and maintain

  • Social desirability bias

  • Verbalizing thoughts can distract users from their actual task

  • Moderator bias - untrained or poor facilitators can very easily change user behavior


System Usability Scale Questionnaire

 

I used SurveyMonkey for our questionnaire (found here). The post-task questionnaire with the System Usability Scale is a valid and reliable measure that can “effectively differentiate between usable and unusable systems” even with smaller sample sizes. Not only is it easy to administer, it also provides quantitative results that the think-aloud method lacks. The goal was to understand how usable our system was; the next step for future testing would be to evaluate specific components and features.

 

LIMITATIONS:

  • Acquiescence bias

  • Inability to elaborate in depth their answers (though there was an option to add comments at the end)

  • Difficult to convey emotion and thoughts

  • Differences in interpretation of questions

  • Potential accessibility issues

  • Scores are slightly more difficult to interpret 
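Since SUS scores are tricky to interpret, it helps to see how they are computed. The SUS uses Brooke's standard formula: odd-numbered (positively worded) items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with hypothetical responses:

```python
# Standard SUS scoring (Brooke's formula); the example responses below
# are hypothetical, not our actual questionnaire data.
def sus_score(answers):
    """answers: list of ten 1-5 responses, in questionnaire order."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd-numbered (positively worded) items: response - 1
        # Even-numbered (negatively worded) items: 5 - response
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Note that the result is not a percentage: a score of 68 is commonly treated as average, which is part of why raw scores need careful interpretation.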


PROCEDURE

Roles

Facilitator: Akhil

Moderator: Sam and Alexandra

Note-Takers: Sam and Alexandra
 

  1. ​The facilitator obtained verbal consent

  2. The facilitator described the procedure for both the think aloud and questionnaire

  3. The facilitator guided the participant through both guided task scenarios of the app, screen-by-screen, asking and prompting them to verbalize their thoughts and concerns

  4. The participant went through each screen, thinking aloud

  5. The facilitator would answer questions that participants had during the process while two Notetakers (for reliability) audio recorded the session and wrote notes

  6. Moderators would prompt specific questions about particular features of the app that previous participants had found confusing

  7. Participants were given a printed questionnaire, post-task to fill out

​​

PHASE TWO METHODS

PARTICIPANTS

3 Usability Experts and UX Professionals

LIMITATIONS

Heuristic evaluation can only be conducted by evaluators who have knowledge of and experience in applying heuristics. Experts are expensive and difficult to recruit. However, it saves time to make modifications early with a few experts rather than later with many users.

LIMITATIONS:

 

This method requires evaluators with domain knowledge and experience in heuristics to be used effectively, experts can be expensive, multiple experts are needed to aggregate results, and the evaluation may identify only minor issues

I conducted Heuristic Evaluations with usability experts to identify system-level pain points, evaluate the usability of the prototype, and identify strengths, weaknesses, and design recommendations for future prototypes. This provides quick feedback early in the iterative design process and can be combined with other usability methodologies.

Essentially, this phase would be to ensure that the prototype is ready for testing with users and help predict issues users may run into in the next phase of testing based on heuristics. Then, we would make modifications or adjustments to the system before taking them to users.

Heuristic Evaluations


PROCEDURE OVERVIEW

After conducting our first Heuristic Evaluation (HE), I noticed that there were a few limitations in our procedure. Therefore, I implemented changes and improvements from our first Heuristic Evaluation before conducting our second and third evaluations with classmates.

LESSONS LEARNED AND LIMITATIONS OF PROCEDURE:

After the first HE, I realized that I wanted quantitative data tied to Nielsen's heuristics, plus a location for each usability issue for easy reference and easier analysis toward design recommendations. Therefore, I adjusted the form to include a 1-5 rating scale (lowest to highest) and a column for participants to indicate the screen number and location.

 

Furthermore, since the app was designed with only two user flows for the two task scenarios, not all functionality was fully developed. During our usability testing, many users tried clicking buttons that were not programmed in the prototype, which confused them. In addition, since a facilitator guided them through the process, there could have been moderator bias, and questions asked by participants were not documented (aside from audio recordings). Therefore, I decided it was best to print out the prototype screen-by-screen so participants could write down the initial questions they had on each screen, documenting the feedback. I also decided they would go through the system by themselves first to reduce bias.

FIRST Heuristic Evaluation form here:
 

Facilitator: Darsh

A participant (UX expert) was accompanied by a facilitator who explained parts of the system and procedures when necessary, since it was not a fully developed prototype.

 

We provided the prototype app on a cellphone.
 

  1. The facilitator obtained verbal consent

  2. The facilitator described the procedure

  3. Participants were given a form with all ten Nielsen Heuristics and descriptions to remind them of the heuristics. They had time to read them before beginning

  4. The facilitator briefed the participant on both guided task scenarios of the app

  5. The facilitator then went through the app screen-by-screen with the participant, who was able to ask questions during this process

  6. After the participant was guided through the app by the facilitator, the participant was able to explore the app freely

  7. The expert individually evaluated the heuristics for both of the two guided task scenarios of the system and added comments for difficulties/strengths on the evaluation form provided

IMPROVED Heuristic Evaluation form here:

Facilitator: Alexandra
 

A participant (UX expert) was accompanied by a facilitator who explained parts of the system and procedures when necessary, since it was not a fully developed prototype.

 

We provided screen-by-screen printout for the participant.​​

  1. ​The facilitator obtained verbal consent

  2. The facilitator described the procedure

  3. Participants were given an updated form with all ten Nielsen Heuristics and descriptions to remind them of the heuristics, a rating scale from 1-5, a column to indicate where they noticed the usability issue, and a column to add comments

  4. The facilitator briefed the participant on both guided task scenarios of the app

  5. On their own, the participant went through each screen freely, adding comments and questions to ask later

  6. After the participant was done, the facilitator answered questions of the participant

  7. The expert individually evaluated the heuristics for both of the two guided task scenarios of the system, added comments, indicated locations of issues, and scored the heuristic on the evaluation form provided
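With the improved form's 1-5 ratings, the evaluators' scores can be aggregated per heuristic so the most severe issues surface first. A hypothetical sketch; the evaluator IDs, heuristic entries, scores, and screen labels are illustrative, not our actual data:

```python
# Aggregate per-heuristic 1-5 ratings from multiple evaluators and rank
# heuristics by mean severity.
from collections import defaultdict
from statistics import mean

# (evaluator, heuristic, rating 1-5, screen) tuples transcribed from the forms
ratings = [
    ("E1", "Visibility of system status", 2, "Screen 3"),
    ("E2", "Visibility of system status", 3, "Screen 3"),
    ("E3", "Visibility of system status", 2, "Screen 5"),
    ("E1", "Error prevention",            4, "Screen 7"),
    ("E2", "Error prevention",            5, "Screen 7"),
    ("E3", "Error prevention",            4, "Screen 7"),
]

by_heuristic = defaultdict(list)
for _, heuristic, score, _ in ratings:
    by_heuristic[heuristic].append(score)

# Worst-rated heuristics first, so fixes can be prioritized
for heuristic, scores in sorted(by_heuristic.items(),
                                key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{heuristic}: mean rating {mean(scores):.2f} (n={len(scores)})")
```

Keeping the screen column in each tuple also makes it easy to group issues by screen instead of by heuristic when planning wireframe changes.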

ANALYSIS & RESULTS