Case Studies

STRATEGIC PLANNING

Take a look at how I help teams align UX design and research as part of larger product development processes, roadmaps, and goals.

After settling into the role and conducting foundational and evaluative research on immediate design needs, I collected my observations of the current research process and of the Product team's expectations of research at large.

    I determined that the research maturity of the design team was ad hoc (unplanned), based on validating pre-created solutions, and centralized to the ownership of a single person: The Researcher.

    Considering the long-term goals of the product roadmap, staffing budgets, and the backlog of needed foundational research, I presented the VP of Design and Director of Product Design with research goals that were better aligned to the established product development lifecycle.

    The Lessonly Research mission was to be:
    (1) Strategic - Research occurs throughout the development cycle, needs are prioritized against the product roadmap, and research is proactive
    (2) Informative - Research is based on user workflows to enhance designer autonomy in solution creation; insights are easily found and applied
    (3) Democratized - Non-researchers are able to participate in designing and executing research studies with various levels of assistance; the research process is conversational in tone

    These goals were built to be flexible and scalable. Short term goals could be layered under each with tangible measurements. The mission would also be achievable whether the company had one Product Researcher or 30.

  • This Research Practice did not limit research methodology in any way.

    Depending on context (described in the next section), the methodologies used included:
    - Usability Testing
    - Contextual Inquiry
    - Surveys
    - Concept Testing
    - Card Sorting
    - Benchmark Testing

    *Testing methods included both moderated and unmoderated forms

  • (1) Strategic - To plan strategic research, I worked closely with the VP of Product and the Product Management team to identify upcoming design priorities. Based on size, impact, and broader business goals, we identified 4 core studies to focus on over the coming year. These focused on discovery research to establish foundational knowledge prior to scheduled design and development cycles. This research would be handled solely by the Product Researcher.

    (2) Informative - I developed templates for research study plans, moderator’s guides, analysis frameworks, and final findings. I used Dovetail to host these templates and provided the Product Team with logins. The studies were organized by User Workflow and were accessible to all throughout the course of each study.

    (3) Democratized - The research study templates were available to the Product Team to encourage Product Designers and Product Managers to initiate their own research. As they had all other active and prior research available to them in Dovetail, Product Designers were encouraged to review relevant findings across studies.

  • I worked with the Customer Success and Sales Teams to build a database of contacts for each company account. We stratified accounts by company size, industry, use case, and user count. Certain companies were tagged as Do Not Contact; all other accounts were available for research contact unless they opted out.

    As research studies were created, I would access the database and randomly select accounts that fit the research need. I would also receive recommendations from CS about accounts likely to be interested in a specific research study.

    The owner of the research study would then contact accounts via email to recruit and schedule user participants from each (not necessarily the point of contact). Testing was always completed remotely via Zoom and the relevant prototype or Optimal Workshop. We staffed each study with a Moderator and a Note-Taker. Any observers were also required to take notes.
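    The recruiting flow above — stratify accounts, exclude Do Not Contact and opted-out companies, then sample at random — can be sketched as follows. This is a minimal illustration only; the Account fields and the select_accounts function are hypothetical names, not the actual CRM or database schema we used.

    ```python
    import random
    from dataclasses import dataclass

    # Hypothetical account record; field names are illustrative.
    @dataclass
    class Account:
        name: str
        size: str          # e.g. "smb", "mid-market", "enterprise"
        industry: str
        use_case: str
        user_count: int
        do_not_contact: bool = False
        opted_out: bool = False

    def select_accounts(accounts, *, size=None, industry=None, n=5, seed=None):
        """Randomly select contactable accounts matching the research need."""
        # Filter to the stratum that fits the study, excluding any account
        # tagged Do Not Contact or opted out of research.
        pool = [
            a for a in accounts
            if not a.do_not_contact and not a.opted_out
            and (size is None or a.size == size)
            and (industry is None or a.industry == industry)
        ]
        rng = random.Random(seed)
        return rng.sample(pool, min(n, len(pool)))
    ```

    The stratification criteria here are the ones named above (size, industry, use case, user count); any of them could be used as a filter in the same way.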

  • Synthesis was performed remotely via Invision App with all team members involved in the research. If I was not otherwise part of the research, I would often participate as the session host and moderator. Group synthesis was completed using the Atomic UX Research framework. This framework identifies the observed facts from the research data, combines them to build insights, and finally identifies opportunities for change or enhancement.


    Non-researchers were encouraged to use the framework provided to them. Product Designers with prior research experience were free to use analysis methods they felt most comfortable with.
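    The facts → insights → opportunities structure described above can be sketched as a small data model. This is an illustrative sketch of the Atomic UX Research hierarchy only; the class and field names are my own, not Dovetail's data model or an official implementation of the framework.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        text: str          # a single observation from the raw research data
        participant: str

    @dataclass
    class Insight:
        text: str
        facts: list = field(default_factory=list)     # supporting Facts

    @dataclass
    class Opportunity:
        text: str
        insights: list = field(default_factory=list)  # supporting Insights

        def evidence(self):
            """Trace an opportunity back to the raw observations behind it."""
            return [f for i in self.insights for f in i.facts]
    ```

    The useful property for group synthesis is traceability: every proposed opportunity can be walked back through its insights to the participant observations that support it.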

  • Results of synthesis sessions were recorded in a What, So What, Now What format in the relevant Dovetail project. Results were formally presented by the study owner if relevant to the larger Product Team. Otherwise, the studies were given final review by Research and remained searchable in Dovetail.

  • As it was an emerging practice in the company, we anticipated a lot of bumps in the road.

    Some Product Designers felt pressured to expand their job roles into a field they were less comfortable with; others felt far more independent. To curb this, I worked alongside those teams to conduct the portions of the research they were uncomfortable with. Often this meant moderating usability tests myself, which was also the most time-consuming commitment and led to a larger time deficit in my own role. This was balanced by prioritizing unmoderated usability testing methods and the eventual addition of a Jr. Product Researcher.

    Consistency in the Dovetail tagging system was fraught. While it was easy to find relevant project findings in the early period, it became increasingly clear that more standards needed to be put in place. However, with resource organization (yay) comes bureaucracy (boo, hiss). The small number of designers meant that there were fewer study authors than there were people involved with a given study, which provided some consistency across studies.

    I leaned into these observations by:
    (1) Regularly meeting with more involved Dovetail users to discuss tone, clarity, and brevity in research reports
    (2) Allowing the tags to balloon in number. Eventually we would find trends in the nomenclature that we could use to clean up the projects on a rainy day. Maybe. It really wasn’t a major obstacle to completing research, just slowly deteriorating search functionality in Dovetail.

    The biggest thing I took away from this experience was that Product Research is not a role for one person. It requires more than buy-in from the teams directly involved in and impacted by the research: it requires hands-on work. It is daunting to expect so much from individual roles for even a moderately sized SaaS product.

    Despite the tech industry’s mantra of work fast, fail fast, I’d suggest: work slow, fail fast. Product Research Strategy is defined by its shaping of insights like clay on a pottery wheel. Spinning the wheel too fast will tear a hole in the clay, but quick hands direct the final form.


CASE STUDY

How to Research & Design a Mobile App in 10 Weeks

United Airlines
2019

RUNNING WITH REQUIREMENTS

I help designers and PMs translate baseline requests into design requirements that facilitate communication between all stakeholders.

United Airlines needed a secure way for their employees to ensure the safety of passengers under 15 traveling without an adult. And they needed it fast. A mock-up had been created, but the development and product teams were unaware of the requirements placed on employees and how they might react to using a mobile app as part of their work for the first time.

I worked alongside a Product Designer, Product Manager, and various stakeholders across the organization representing development, technology, Flight Attendants, Gate Agents and Customer Service Reps, among other departments.

This project was an urgent request direct from the CEO.

  • Relevant stakeholders represented the employee roles affected by this process.

    Day 1. With stakeholders, I led a group cognitive walkthrough from the perspective of the passenger, from the time they arrive at the departure airport with one parent or guardian to the time they are greeted at the destination gate by another guardian. At each point the passenger met an airline employee, stakeholders voiced what the employee was responsible for and how they would execute on those tasks.

    We then took this information and applied it to the current design mock-up. We confirmed available technology, the data available to us, and the data available to the employee users.

    These assumptions highlighted gaps in knowledge about the requirements of each employee user group and between groups. Our research goals were then to:
    (1) validate and map the end-to-end passenger experience with relevant employee interactions
    (2) test the concepts present in the mock-up
    (3) validate employee responsibilities for passengers traveling under an unaccompanied minor ticket
    (4) test and deliver a final mobile design that is easy for first time users to follow in the moment, without prior training

  • Week 1. The Product Manager and several stakeholders agreed to visit various US airports to confirm the cognitive walkthrough from the day prior and speak directly to employees. I created a list of questions to ask each employee group.

    In the meantime, the Product Designer, PM, and I brainstormed design workflows (first independently, then together) based on the current mock-up and assumptions. We agreed on a general workflow for a mobile app.

    We traveled to the airports to perform our guerrilla research. The PD, PM, and I each took printouts of the mock-up to run ad hoc concept testing with employees at the front desk, gates, and flight attendants’ lounge.

    At the end of the two travel days, I led the group discussion to share and compare our findings. We refined our assumptions based on employee feedback. The Product Designer and I mapped out the physical and digital workflows and touchpoints we identified as a skeleton for the first design iteration.

  • Given the 10-week timeline and the size of the workflow we identified, I felt confident that we could complete 2 rounds of usability testing with each employee user group. Two rounds would allow us the opportunity to iterate from the concept testing, refine our new design after the first usability test, and validate our final design after the second.

    Being aware that employees were not yet using mobile devices as part of their jobs, I felt it was important to make the testing feel as realistic as possible. A visually refined and polished prototype would provide us with more information on how employees would navigate this kind of infrequent workflow for the first time.

    Before testing, I prepared a timeline that accounted for:
    (1) first iteration, pre-testing design and research plan review for stakeholders
    (2) presentation of research findings and planned design changes after each round of testing to stakeholders
    (3) final hand off presentation to stakeholders
    (4) work days dedicated to design, research prep, on-site testing, synthesis, design recommendations, or review

    This resulted in a 3-week timeline for each round of usability testing. The remaining three weeks (given that we were roughly over a week in already) were left for final design delivery and timeline padding. Not only did we expect some stakeholder discussions may change our course, but we also had other projects to work on at the time.
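    The schedule above can be checked with quick arithmetic: roughly one week already spent on the walkthrough and guerrilla research, two 3-week testing rounds, and three weeks left for delivery and padding. A trivial sketch (week counts are approximate, as noted in the text):

    ```python
    # Rough sanity check of the 10-week schedule described above.
    intake_weeks = 1        # initial walkthrough and guerrilla research
    rounds = 2              # usability testing rounds
    weeks_per_round = 3     # design, research prep, testing, synthesis, review
    remaining_weeks = 3     # final delivery plus timeline padding

    total = intake_weeks + rounds * weeks_per_round + remaining_weeks
    print(total)  # 10
    ```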

  • For Gate Agents and front desk CSR roles, we were able to freely travel to any US airport serviced by United Airlines and test ad hoc with employees on the clock.

    In these cases, testing was on the job site and voluntary. A mod guide was followed and a mobile device loaded with the prototype was used alongside artifacts that would be carried by the passenger. Sessions were not recorded. The Product Designer or Product Manager took notes as I led the study.

    Because GA and CSR roles are expected to handle the job responsibilities of the other, they were tested on the tasks required by each. Tasks included:
    (1) Check-in with Guardian verification
    (2) Departure Gate
    (3) Arrival Gate with Guardian verification

    Task order was randomized per participant to prevent priming.

    For Flight Attendants, we were only granted access to test in a booked conference room located in the Flight Attendant Lounge at ORD. Testing was off the job site (as this would be used at the time of boarding the aircraft, which we were unable to do) and voluntary. Sessions were recorded and followed a FA specific mod guide due to the different role requirements. The Product Designer or Product Manager took notes as I led the study.

    For each round of testing, we traveled to 2 airports (one hub, one smaller station) and aimed to test at least 3 GA/CSR participants at each for a total of 6 GA/CSR participants per round. We found we were able to get 8-10 total per round.

    For Flight Attendants at ORD, we recorded 4 participants per round. I felt this number was sufficient to surface patterns unique to flight attendants, with the consideration that, from experience, it is more difficult to get enough time with FA participants. The workflow was similar enough to the GA/CSR roles that we felt confident the usability would be supported by the study as a whole.
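    As noted above, task order was randomized per participant to prevent priming. A minimal sketch of that counterbalancing, assuming a simple per-participant shuffle (the task_order function is illustrative, not a tool we actually used):

    ```python
    import random

    # Task labels mirror the list in the text.
    TASKS = [
        "Check-in with Guardian verification",
        "Departure Gate",
        "Arrival Gate with Guardian verification",
    ]

    def task_order(participant_id, seed=None):
        """Return an independent, reproducible task order for one participant."""
        # Seeding on the participant ID makes the order deterministic per
        # participant while still varying across participants.
        rng = random.Random(seed if seed is not None else participant_id)
        order = TASKS[:]  # copy so the master list stays intact
        rng.shuffle(order)
        return order
    ```

    Seeding on the participant ID is one design choice; drawing fresh random orders per session works just as well as long as the order is logged with the notes.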

  • Observer and Note-Taker notes were compared between relevant roles for each round of testing before being looked at as a whole. I first analyzed the usability-focused tasks based on how easily each participant could complete them. I then noted their behaviors and the expectations they described.

    Synthesis was completed through group synthesis alongside the Product Designer and Product Manager. I followed the Atomic UX Research framework to structure the sessions. We started by reviewing the raw notes and my initial analysis to determine facts per participant, draw insights across the participants, and develop opportunities for our next design. Finally, we recorded these session outputs across participants per task and question, highlighting the most impactful (whether positive or negative).

  • After the synthesis sessions, I pulled our findings into the What, So What, Now What presentation structure and indicated each finding’s impact priority for the workflow. For each presentation to the stakeholder group, I started with a review of the tested design, then described the test objectives and other test information. I presented the positive findings, the most impactful findings, and other notable findings, and described the changes we intended to make for the next design.

    Stakeholders were given the presentation shortly before the meeting. Time was allocated for questions and feedback at the end of each meeting. Feedback relevant to the design was incorporated into the next design session. In some cases, changes were later communicated and incorporated as required.

    The final presentation included recommended considerations for future iterations of the app and the not-yet-planned customer-facing app.

  • We were able to complete the final app designs by week 9. Small changes were made during week 10 due to additional technical requirements from engineering.

    The app was released globally to United-issued mobile devices approximately 10 weeks later. In the first 2 weeks, 64% of unaccompanied minor tickets were activated in the app, and 53% of those were completed through the destination airport. Considering that mobile devices had not yet been issued to all airports, we found these numbers encouraging. We were also able to get live feedback from employees after release, with overall glowing results.

    In spite of the strict timelines, we were able to deliver a thoroughly researched and tested final product. By injecting observational techniques and contextual inquiry throughout the cyclical research and design process, we were able to validate needs and concerns prior to stakeholder review. This enabled us to anticipate stakeholder feedback while learning about a job process we had not previously researched or known about.

    The biggest challenge was corporate stakeholders’ lack of familiarity with research processes. The initial concern was that research would slow down the design of a critically needed process. I used these concerns to strengthen the case for research: releasing the right design first would set the tone for using a mobile device on the job for the first time.

    Additionally, I provided stakeholders with transparency over the entirety of the engagement. Stakeholders were able to review mod guides, prototypes, and test plans at any time as they were hosted on the UX team Confluence.

    On the plus side, we received a shout out from the CEO for our work.

PEOPLE WHO CAN DO RESEARCH

I help teams build confidence in incorporating research into design and product processes. This especially benefits designers who have felt iced out of the process and product managers who are new to the industry.

ATOMIC SYNTHESIS FOR GROUPS

Keeping team members on the same page can be difficult when it comes to fact-based decision-making, but in design consulting it’s a crucial component of showing a client how those insights came to be.