Product Benchmarking for Retailer
About the project
A product team at a large electronics retailer wanted to answer the following questions:
Does the customer order fulfillment app meet the needs of our retail employees?
What pain points exist and how might we track improvements to the app?
My research colleague and I conducted this benchmarking study to:
set a baseline of the employee experience with the customer order fulfillment app
measure the impact of app changes going forward
make research-informed decisions when prioritizing work
build a participant list of future research volunteers
The Challenges
Complicated analysis due to large project scope
The customer order fulfillment app has 12 core functions that allow employees to complete tasks in the warehouse. In order to set a baseline, we wanted to measure all the core functions side-by-side in one survey.
Because we created a survey with multiple open-ended questions, there was a lot of feedback to sort through. However, my colleague and I built a workflow that helped us manage the large influx of responses.
Navigating the app ecosystem
Retail employees use additional applications on both mobile and desktop devices to help them complete their tasks. Sorting through survey feedback about a complex app ecosystem required my colleague and me to connect with other teams to understand how employees use those tools and how we could make that experience more seamless.
Methodology
UserZoom survey that fielded nearly 1,000 employee responses from U.S. retail locations
Research Process
Phase 1: Research proposal
Our proposal, shared with design and research partners, included a description of UX metrics and benchmarking, a case study of a similar benchmarking project, goals for the study, and an estimated project timeline.
Phase 2: Drafting and launching the survey
We used a modified SUPR-Q, which allowed us to measure the employee experience on key topics such as usability, trust and appearance while also giving us the flexibility to include unique questions specific to the customer order fulfillment app.
We created two sets of metrics:
Set A asked employees to rate the overall experience of the app (modified SUPR-Q)
Set B asked employees to rate each function of the app
We also included two open-ended questions in the survey about overall experience and feedback on each function of the app.
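To illustrate how Set B ratings can be rolled into a per-function baseline, here is a minimal sketch. It assumes each response is a flat mapping of function name to a 1-5 rating; the function names, ratings, and rounding are illustrative, not data from the study, and the actual SUPR-Q scoring used for Set A differs from this simple average.

```python
from statistics import mean

# Hypothetical Set B responses: each employee rates every app function
# on a 1-5 scale. Function names and values are made up for this sketch.
responses = [
    {"pick_order": 4, "stage_order": 3, "handoff": 5},
    {"pick_order": 5, "stage_order": 2, "handoff": 4},
    {"pick_order": 3, "stage_order": 3, "handoff": 4},
]

def baseline_scores(responses):
    """Mean rating per app function across all survey responses."""
    functions = responses[0].keys()
    return {f: round(mean(r[f] for r in responses), 2) for f in functions}

print(baseline_scores(responses))
# e.g. {'pick_order': 4.0, 'stage_order': 2.67, 'handoff': 4.33}
```

Measuring every function side-by-side this way is what makes later re-runs of the survey comparable: the same per-function means become the baseline that app changes are measured against.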
Phase 3: Collaborative synthesis
My research colleague and I tackled the extensive analysis together in a shared Excel spreadsheet. It was essential to track how often similar comments were made to help reach our project goal of research-driven prioritization. However, open-ended comments often contained multiple themes, making quantification very complex.
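The multi-theme problem above can be handled by tagging each comment with every theme it touches and counting themes across comments. This is a simplified sketch of that tallying step, not the actual spreadsheet workflow; the theme codes and comments are invented for illustration.

```python
from collections import Counter

# Hypothetical tagged comments: one open-ended response can carry
# several theme codes, so themes are stored as a list per comment.
tagged_comments = [
    {"comment": "Scanner freezes and labels print slowly",
     "themes": ["app_crash", "printing"]},
    {"comment": "Labels misprint on long orders",
     "themes": ["printing"]},
    {"comment": "App crashes mid-pick",
     "themes": ["app_crash"]},
]

def theme_frequencies(tagged):
    """Count each theme once per comment it appears in."""
    counts = Counter()
    for entry in tagged:
        counts.update(set(entry["themes"]))  # de-dupe within one comment
    return counts

counts = theme_frequencies(tagged_comments)
```

Counting at the theme level rather than the comment level is what supports research-driven prioritization: a theme mentioned in many comments surfaces even when no two comments are worded alike.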
Phase 4: Moving forward
The extensive results were shared with stakeholders in two reports to make them easier to digest.
The first report featured general app feedback (SUPR-Q results) as well as the top three prioritized app functions identified in collaboration with the product team.
The second report contained findings for the remaining app functions.
After the final share outs, my colleague and I:
sent a thank you memo to employees to show them we value their engagement and encourage participation in the future
created a spreadsheet of our new volunteer participant pool to track employee study participation
discovered quick wins by using findings to inform current design efforts before development
held an internal retrospective to discuss improvements and share approach with broader team
facilitated working sessions in Miro with stakeholders to apply insights and prioritize future research
This extensive report will be a go-to resource for future fulfillment work and follow-up benchmarking research.