
UX research for a B2B wearable technology product, improving task success through behavioral insights.
Client
Fortune 500 Tech Company
My Role
UX Researcher
Timeline
9 Months
TL;DR
A Fortune 500 tech company had a powerful wearable product — but users kept failing the same tasks over and over. My job was to figure out why and turn that into something the product team could actually act on.

Hardware can be impressive and still feel impossible to use. That was the tension at the center of this project.
Users were hitting walls during core interactions: hand controls felt unintuitive, gesture-based inputs weren't landing, and navigating within an immersive environment tripped people up in ways that weren't obvious from the outside.
The Fortune 500 client needed to understand exactly where and why users were struggling, so they could make confident product decisions backed by real behavioral evidence rather than assumptions.

Over 9 months, I facilitated 4 research studies comprising 120+ usability sessions, producing a structured, actionable research roadmap used directly by the product and design teams.
The findings led to measurable improvements in task success rates, clearer hand control mapping, and a stronger foundation for future iterations of the product. The client walked away with behavioral evidence, not gut feelings, driving their next moves.

My day-to-day included facilitating in-person VR usability sessions, keeping participants comfortable, and capturing high-fidelity behavioral notes in real time while staying completely neutral.
I also supported quantitative data collection in Qualtrics and helped synthesize the findings into insights the product team could use.
One thing I got good at fast: flagging recurring patterns as they emerged during sessions, so by the time we got to synthesis, we weren't starting from zero.
We used a mixed-methods approach because the numbers told us where users failed, and the qualitative data told us why.
Methods included moderated usability testing, behavioral observation, task success and failure tracking, in-session surveys using Qualtrics (an online survey tool), and qualitative synthesis (analyzing open-ended feedback for patterns).
Participants ranged from teens to adults, all interacting with the headset in simulated real-world environments. That age range turned out to matter more than we expected; more on that below.
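Task success and failure tracking of this kind comes down to simple aggregation: count attempts and successes per task, then segment by group. A minimal sketch of that idea in Python; all task names, age groups, and records here are invented for illustration, not the actual study data:

```python
# Hypothetical sketch: aggregating task success/failure logs from
# usability sessions. Every task name, group, and record below is
# invented for illustration only.
from collections import defaultdict

# Each record: (task, age_group, succeeded)
sessions = [
    ("grab_object", "teen", True),
    ("grab_object", "adult", False),
    ("grab_object", "adult", True),
    ("menu_navigation", "teen", True),
    ("menu_navigation", "adult", False),
]

def success_rates(records):
    """Return {(task, age_group): success_rate} from raw session records."""
    totals = defaultdict(int)
    passes = defaultdict(int)
    for task, group, ok in records:
        totals[(task, group)] += 1
        if ok:
            passes[(task, group)] += 1
    return {key: passes[key] / totals[key] for key in totals}

rates = success_rates(sessions)
# e.g. rates[("grab_object", "adult")] == 0.5
```

Segmenting the same counts by age group is what surfaced the adult-versus-teen gap described below; the qualitative notes then explained why the gap existed.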
A few patterns kept showing up across sessions, pointing to the same underlying issue:
The product was asking users to move in ways their bodies didn't expect.
Users failed tasks most often because of unintuitive hand-control interactions—not because they didn't understand the concept, but because the system wasn't giving them clear enough feedback to know whether what they were doing was working.
Adults struggled significantly more than younger users to adapt to gesture-based controls, which suggested the learning curve wasn't just steep; it was steeper for certain groups in ways the product hadn't accounted for.
Small changes to control mapping and visual cues produced noticeable improvements in task success, which was a hopeful signal that the fixes didn't have to be massive to matter.
The through line: physical interaction design in immersive products is load-bearing. Get it wrong and nothing else about the experience can compensate.

What I Took Away
This was my first time working at this scale — 120+ sessions, enterprise stakeholders, a product category that most users had never touched before. It stretched my research skills in ways that a smaller study wouldn't have.
I came out of it with a much sharper instinct for how to observe without interfering, how to spot a pattern early enough to make it useful, and how to translate messy behavioral data into something a product team can actually build from.
Want to talk through the research?
Copyright 2026 — All rights reserved
