On contract - introducing the product team to working with credible UX research methodologies in an empathy-driven culture, and conducting UX research within a politically charged UX group.
ROLE
Contracted to provide research leadership and guidance for the product team's new product suite initiative.
PROJECT OBJECTIVE
A newly assigned Product Manager and a newly hired Designer began their engagement with their product - a customer portal - by applying best practices and contemporary design patterns intended to resolve existing customer complaints, addressing usability issues through heuristics.
The team wanted to know whether they had addressed those issues and to identify any critical issues to consider in the near future.
CHALLENGES
The Product Manager and Designer were new to the product, and the Designer was newly hired into the company. Both had sorted through customer feedback and complaints as well as performance data.
The product was originally designed by the technical team, who had focused on keeping all interactions within the screen, without scrolling.
The Product Manager and Designer were not experienced with Usability Testing or familiar with Usability Heuristics, but were familiar with contemporary Design Patterns and Design Principles.
This was my last project on the contract and time was limited.
CONSIDERATIONS
The Designer and Product Manager were based in the UK while I was in Southern California. Per the Product Manager, they did not wish to work collaboratively beyond planning and expected a robust report.
Many of the same challenges exist with fleets in the UK; however, Fleet Management has different roles, and the UK and EU define vehicle classes and company sizes differently than the US. This meant I needed to engage the team to develop an effective screener.
APPROACH
I met with the team to discuss their objectives and review the changes they had made to address specific complaints versus general best practices. I crafted a Plan and Guide for their review, invited them to comment and propose changes via Google Drive, and scheduled a review discussion a few days later.

I document studies not only to ensure the current team is on the same page, but also so other researchers can understand the study, avoid repeating its mistakes, and recognize its strengths.

I used the UX Research team's templates, which work great.

The Study - Task-Based Usability Testing
The team had a set of use cases for which they wanted to ensure the product's usability in terms of the efficiency and effectiveness of user tasks. I explained how we would test the usability of those use cases by testing user tasks to evaluate navigation (one of their primary questions), noting that a user task is a series of use cases that need to work cohesively together. I drafted testing scenarios mapped to the use cases to review with the Product Manager and Designer, and then reviewed them again with the Designer, walking through the prototype to discuss what updates the prototype needed for successful testing.
The protocols included repeating a user task (a collection of use cases) with a different scenario to measure the learnability of the primary navigation. The second time the task is completed (with a different scenario), the user ideally completes it more quickly and with fewer errors, including knowing where to navigate.
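To illustrate how that learnability measure can be quantified, here is a minimal sketch, assuming hypothetical participant names, timings, and error counts (not data from this study): first and second attempts at the same task are paired per participant, and the average improvement in time and errors is computed.

```python
# Minimal sketch of the learnability comparison described above.
# Not the study's actual tooling; participants, timings, and error
# counts are hypothetical illustrations.

attempts = [
    # (participant, trial, seconds_to_complete, navigation_errors)
    ("P1", 1, 142, 3),
    ("P1", 2, 88, 1),
    ("P2", 1, 173, 4),
    ("P2", 2, 95, 0),
]

def learnability_delta(attempts):
    """Average improvement in time and errors from trial 1 to trial 2."""
    first = {p: (t, e) for p, trial, t, e in attempts if trial == 1}
    second = {p: (t, e) for p, trial, t, e in attempts if trial == 2}
    pairs = [(first[p], second[p]) for p in first if p in second]
    time_gain = sum(f[0] - s[0] for f, s in pairs) / len(pairs)
    error_gain = sum(f[1] - s[1] for f, s in pairs) / len(pairs)
    return time_gain, error_gain

time_gain, error_gain = learnability_delta(attempts)
print(f"Avg. time saved on 2nd attempt: {time_gain:.0f}s")
print(f"Avg. fewer errors on 2nd attempt: {error_gain:.1f}")
```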

The slide from the report outlining the final methodology. I harnessed UCD protocols as learned from Ginny Redish's A Practical Guide to Usability Testing and Morten Hertzum's Usability Testing: A Practitioner's Guide to Evaluating the User Experience.

The Prototype
I made space for the Designer to make the decisions about the prototype and worked with him to address the critical gaps in supporting the testing scenarios. As we normally work, the prototype was conceptual rather than the complete product: the primary navigation and the pages for the task scenarios (sometimes with more than one path) were available, while the content and functionality of the pages were limited but sufficient to get a reading. This is never a problem; I explained that the protocols include prepping participants for it. Participants are instructed that the prototype is conceptual and that not all functionality or content is available, and we ask them to explain, out loud, what they expect when functionality or content they need to complete a task is unavailable. This provided sufficient feedback for our questions and objectives, and allowed us to discover mental models.
The Screener
As noted before, this is a different market, so the screeners I have developed for other studies couldn't be reused wholesale; they did, however, guide the questions I asked the team to develop new ones... and informed my Googling to understand the market. I drafted a screener as part of the Plan and reviewed the draft with the team to refine and finalize it. This included explaining the purpose of screening participants - and of each screening question. The questions help us screen out fakers, find the defined representative participants, and provide data for analysis.
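As an illustration of how a screener's logic can be expressed, here is a minimal sketch assuming hypothetical criteria, including a fictitious "trap" option to catch fakers; the study's actual questions and quotas were defined with the team for the UK market.

```python
# Minimal sketch of screener logic, assuming hypothetical criteria.
# The study's actual questions and quotas differed and were defined
# with the team for the UK fleet market.

ACCEPTED_ROLES = {"fleet manager", "transport manager", "fleet administrator"}

def screen(answers: dict) -> bool:
    """Return True if a respondent qualifies for the study."""
    # Representativeness: must hold a defined fleet-management role
    # and manage a fleet within the target size band.
    if answers["role"].lower() not in ACCEPTED_ROLES:
        return False
    if not (10 <= answers["fleet_size"] <= 500):
        return False
    # Faker check: a fictitious software product no genuine
    # participant should claim to use.
    if "FleetMaster Pro X" in answers["tools_used"]:
        return False
    return True

candidate = {
    "role": "Transport Manager",
    "fleet_size": 120,
    "tools_used": ["spreadsheets", "telematics portal"],
}
print(screen(candidate))  # True
```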
The Unmoderated Sessions + Protocols
Using UserTesting.com, sessions were unmoderated and followed a cadence of first asking an interview question about the scenario the participant was about to complete. It probed whether the task was part of their work and what their scenario of use for it was, focusing on the frequency, priority, motivation, and expectations of the company and the participant, and what they need. Participants were then prompted to complete the task using the prototype while talking out loud. After each task, participants rated facets of Effectiveness (Complete and Accurate) and Efficiency (Number of Steps and Time to Complete) on a 5-point Likert scale (a UserTesting.com feature) to represent their perceptions of usability.
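To show how performance and perception scores can be derived from such sessions, here is a minimal sketch with illustrative data (not the study's results): performance is counted from task completion, and perception is averaged across the four Likert-rated facets.

```python
# Sketch of performance-vs-perception scoring with illustrative data.
# Performance = count of completed tasks; Perception = mean of the
# 5-point Likert ratings across the four facets.

from statistics import mean

# Illustrative session records: (task, completed, Likert ratings for
# the facets Complete, Accurate, Number of Steps, Time to Complete).
sessions = [
    ("Request new card", True,  [4, 4, 3, 3]),
    ("Request new card", True,  [5, 4, 4, 4]),
    ("Request new card", False, [2, 3, 2, 2]),
]

def task_scores(sessions, task):
    """Return (completed count, total sessions, mean facet rating)."""
    rows = [s for s in sessions if s[0] == task]
    performance = sum(1 for _, completed, _ in rows if completed)
    perception = mean(r for _, _, ratings in rows for r in ratings)
    return performance, len(rows), perception

done, total, perception = task_scores(sessions, "Request new card")
print(f"Performance: {done}/{total} completed")
print(f"Perception:  {perception:.1f}/5")
```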
A few participants were mildly to moderately distracted by the conceptual prototype. However, observations of how they approached their tasks showed that their navigation and their understanding of the interface and features were clear and sufficient.
Analysis
I watched the sessions, creating video clips for reels of issues, successes, and recurring themes, as well as for each interview question and each task, to create views for the team's participation.
I create a reel pairing each interview question with its corresponding task to save the team time and increase the likelihood that they watch all participants complete a task, rather than watching a single participant's session and missing the variations in performance.

I design my reports to orient my stakeholders by beginning with their mental model. In this case, I started with their primary intentions for the tool and the design refresh, followed by the successful changes.

The protocols included "Task 0". I typically invite participants to get comfortable with the prototype noting it is conceptual and not all screens and functions are available. This includes the instruction to practice talking out loud as they explore the new product and explain what they are expecting if they encounter a desired function or screen is not available. This also allows us to observe what they naturally gravitate to.

At the end of testing, the protocols included an invitation for feedback about the dashboard and what participants would expect to see on the main screen; in this case, the two were closely intertwined.

An overview of the testing scenarios to give the team a sense of the overall usability scoring. Performance is the count of completed tasks; Perception is the participants' ratings. The repeated task (learnability) on line 3 did not include probes of perception.

A Usability Snapshot for each user task, complete with reels, applicable heuristics, and the mapping to use cases. Note the contrast between performance and perceptions in the moderately successful "Address" lost card/request new card scenario.

A discussion of applicable usability heuristics relating to multiple issues.

An example of empowering the team to understand usability issues: explaining poor navigation within a screen using the applicable usability heuristics.

OUTCOMES
This was a whirlwind project completed in two weeks, knowing the report would be "thrown over the fence."
Although the team was primarily interested in confirming that they had addressed priority customer complaints, I knew the critical issues needed to be documented well enough to empower the team to understand users' challenges, so they could develop potential workarounds to reduce critical issues to moderate ones - since the navigation issues called for a new navigation schema, a significant redesign and technical effort.
Although I did my best to document the findings and empower the team to understand them and make decisions, I was left feeling the work was incomplete because I could not work through the issues with them - but I trust their abilities.