Improving UX of launching / queuing a test

    Self-serve B2B SaaS | feature redesign

    My Role: UX Designer

    About: Reimagining the launch test experience in the Feedback Loop self-serve platform

    Team: Myself as a UX Designer + Product Manager, Engineers, UX Researcher

    Time: 2 weeks, July 2021

    About the project

    Background

    Feedback Loop is an agile research platform for rapid consumer feedback that helps businesses collect data from real consumers to ensure a product is a hit. During this project, I collaborated as a UX Designer with a Product Manager, Developers and our UX Researcher to reimagine the launch/queue test experience while staying aligned with the entire test lifecycle: draft → submitted OR in-progress → completed. This portfolio piece focuses on the new ‘launch test’ flow and how solving previously existing user pain points improved the overall user experience and user satisfaction.

    The Challenge

    • To tackle my first complex project with confidence on week 2 at Feedback Loop
    • To reimagine the whole flow of launching a test
    • To find the missing pieces of the puzzle
    • To set the right expectations for users and help them feel informed to increase user satisfaction
    • To look at the big picture while working through details

    01. Discovery, project kick-off

    OLD DESIGN & FLOW

    In the old version, when users had created a new test and were ready to submit it to start collecting responses, clicking the ‘Add to queue’ main button on the test creation page caused friction and frustration. It was unclear what clicking ‘Add to queue’ would result in, and users were given no guidance on what to do next. The only system feedback was an update to the status label under the test title field: the status pill changed to either ‘submitted’ or ‘in progress’, but users didn't understand the difference between the statuses, why a test changed to one or the other, or what to expect next.

    MAIN USER PAIN POINTS

    • before clicking on ‘Add to queue’, users had no idea whether the test would launch (draft → in-progress) or be added to a queue (draft → submitted) where tests waited to be launched. It always caught them by surprise, as no warnings or context were provided beforehand
    • not knowing what to do next after a test had launched, or once it was submitted to the queue
    • not knowing when a test in the queue would launch (submitted → in-progress)
    • not knowing when a launched test would complete (in-progress → completed)

    These core problems made it nearly impossible for users to understand and track what was happening with their tests. Users also found it difficult to estimate when results would be available, possibly missing deadlines as a result.

    TECHNICAL BACKGROUND

    We needed to take some technical requirements and development aspects into consideration. Whether a test launches automatically or is added to the queue once a user clicks ‘Add to queue’ depends on the user's org capacity:

    • if a workspace has open capacity, the test launches automatically and its status updates: draft → in-progress
    • if a workspace does not have open capacity, the test cannot launch immediately and is added to a queue, where it waits until capacity opens up (draft → submitted)

    Tests complete within a maximum of 48 hours; by then, sourcing efforts end and results are available for users to analyze.

    Note: Org capacity means the number of test spots available to launch in a timeframe. Each organization can launch and run only a limited number of tests at the same time; once that capacity is reached, new tests can't launch right away and have to wait in the ‘submitted’ section until a spot frees up in the ‘in-progress’ section.
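
    The capacity rule above can be sketched as a small model. This is purely illustrative: names like `Workspace`, `submit` and `complete` are my assumptions for the write-up, not Feedback Loop's actual implementation.

    ```python
    from dataclasses import dataclass, field
    from collections import deque

    @dataclass
    class Workspace:
        """Hypothetical model of an org's test capacity."""
        capacity: int                                    # max tests running at once
        in_progress: list = field(default_factory=list)  # launched tests
        queue: deque = field(default_factory=deque)      # submitted tests, FIFO

        def submit(self, test: str) -> str:
            """Launch immediately if a spot is open, otherwise queue the test."""
            if len(self.in_progress) < self.capacity:
                self.in_progress.append(test)
                return "in-progress"       # draft -> in-progress
            self.queue.append(test)
            return "submitted"             # draft -> submitted

        def complete(self, test: str) -> None:
            """When a test completes, the next queued test takes its spot."""
            self.in_progress.remove(test)
            if self.queue:
                self.in_progress.append(self.queue.popleft())

    ws = Workspace(capacity=2)
    ws.submit("Test A")     # launches: open capacity
    ws.submit("Test B")     # launches: open capacity
    ws.submit("Test C")     # queued: capacity reached
    ws.complete("Test A")   # frees a spot; Test C launches
    ```

    This is the behavior users couldn't see: the same button click produced two different outcomes depending on hidden workspace state, which is why the redesign had to surface it.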

    MVP SUCCESS CRITERIA

    • After filling out the required fields on the test's draft page, users know exactly how to [launch test] or [add test to queue]
    • After clicking the [CTA], users know what will happen next
    • After completing the experience, users feel informed

    02. Ideating, flows, sketching

    IDEATING

    At first, since this was my first complex project at FBL, I needed to gather as much information as possible to understand the core problems thoroughly and propose the best solutions. I set up calls with our UX Researcher to review a high-level competitor analysis and see how other platforms handle the ‘launch test’ experience. We also had a quick brainstorming session, where we concluded that the ‘Add to queue’ button copy itself was causing significant confusion, as it didn't reflect the actual user action and outcome.

    The ‘Add to queue’ button copy is misleading, as a test can either go to the queue or launch immediately, so the copy sets the wrong user expectations. Many users launched tests by mistake, thinking they would be ‘saved’ in the queue with a chance to edit them before launch. Users were frustrated when they realized some tests launched right away and couldn't be undone. They had no way to recover from those mistakes, as a ‘cancel test’ functionality still doesn't exist in the platform today, so it was crucial to communicate the consequences of clicking the ‘Add to queue’ button.

    FLOWS + SKETCHES

    Based on the specs and conversations I had with several people from the company and team, I started to map out a few potential flows and sketch initial design proposals. The goal was to provide valuable information upfront and during the experience as I walked users through the steps. It was essential to help users understand what would happen to their test in the upcoming 48 hours (the maximum time before a test reaches ‘completed’).

    To understand better how a user would go through this experience, I created a user flow with key actions, decision points and pages involved.

      The user flow helped me make design decisions when sketching my initial layouts. I thought a two-step modal flow could be ideal for addressing the major pain points and the org capacity technical considerations. The first modal would inform users about what WILL HAPPEN to their test if they proceed to launch it or submit it to the queue. The second modal would confirm what HAD HAPPENED to the test and provide next steps or further information related to its current status.

      I prepared another design where whether a test can launch or will be added to the queue is indicated upfront, before the user clicks the ‘Launch test’ button. I proposed this solution to the team because, if the two-step modal experience risked causing modal fatigue, this could be a good starting point for eliminating the first modal and keeping only the confirmation one.

      After discussing the options with the team, we decided to move forward with the two-step modal experience, as it's easier to present crucial information within a modal. On the page itself, the amount of information could have increased users' cognitive load.

      SOLUTIONS I PROPOSED TO THE TEAM

      • Rename ‘Add to queue’ button to “Launch test” to better reflect user action and to set the right expectations
      • Visually indicate how a test status changes: draft → in-progress, draft → submitted, submitted → in-progress, in-progress → completed
      • Pop-up modal flow to help users focus and keep their attention to consume crucial information regarding their test
      • First modal: purpose of informing users about what WILL HAPPEN to their test | Second modal: confirming what HAD HAPPENED to the test and providing next steps
      • To communicate with a banner why a test can launch right away OR why it can only be added to the queue (org capacity indication)
      • To present content in an easily scannable way so users don’t feel burdened by the amount of information - using bulletpoints to highlight key steps
      • Add Estimated turnaround time + position in queue sections

    03. Prototype & Usability Test

    MID-FIDELITY WIREFRAMES FOR PROTOTYPE

    We wanted to make sure the flow and content made sense to end users, so we decided to run usability tests to measure success and identify areas of improvement. I cleaned up the design and content and put together a basic prototype in InVision, which I handed off to the UX Researcher to use during usability tests with real users of the platform.

    USABILITY TEST & RESULTS

    Our UX Researcher led the moderated 1:1 usability tests, but I closely collaborated with her the entire time and attended all the Zoom calls set up with participants to be able to ask follow-up questions if needed.

    Test Objectives:

    • To measure how well users understand what is happening to their test
    • To discover any pain points regarding the pop-up modal flow
    • To measure users' overall satisfaction with the new experience
    • To identify areas of improvement around the copy of modals

    Insights:

    • overall, users clearly understood the flow and context around the 'Launch now' and 'Submit to queue' CTAs in the modal experience, which set correct expectations
    • the word 'capacity' confused many users - they didn't understand what it meant
    • users expected to see the estimated time information in the first modal before they had launched the test
    • the phrase "Position in queue: 4" confused users, as they didn't understand whether it meant there were 4 tests ahead of this test or the test was 4th in line in the queue
    • users expected to see the entire lifecycle of the test to understand exactly what the current status meant for them and how far they were from reaching "completed" status

    Decisions/changes made based on feedback:

    • update language of banners to avoid using the word "capacity" → “Your workspace has the ability to launch [#] tests at a time, therefore this test will be added to your queue.” We did not want to go into details about concurrency to avoid any user confusion
    • rephrase "Position in queue: 4" to “# of tests ahead in the queue” to clarify the position of the test
    • add a "View in queue" main CTA to the second "Submitted to queue" modal. Users expected to be able to check their test in the queue on the workspace page, so we can easily guide them there
    • add a progress tracker bar that indicates the whole test lifecycle (draft → submitted → launched → completed) that shows where their test is currently at
    • simplify content to make it even easier to scan with the eye + polish up the UI
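
    The lifecycle the progress tracker displays can be sketched as a small set of allowed status transitions. The status names come from this case study; the code itself is a hypothetical illustration, not the platform's implementation.

    ```python
    # Allowed status transitions for a test, per the lifecycle described above.
    TRANSITIONS = {
        "draft": {"submitted", "in-progress"},  # queue full vs. open capacity
        "submitted": {"in-progress"},           # launches when a spot frees up
        "in-progress": {"completed"},           # results ready within 48 hours
        "completed": set(),                     # terminal state
    }

    def advance(status: str, new_status: str) -> str:
        """Move a test to a new status, rejecting invalid jumps."""
        if new_status not in TRANSITIONS[status]:
            raise ValueError(f"cannot move from {status!r} to {new_status!r}")
        return new_status

    status = advance("draft", "submitted")    # test added to the queue
    status = advance(status, "in-progress")   # launched once capacity opens
    status = advance(status, "completed")     # results ready to analyze
    ```

    Rendering the tracker then reduces to highlighting the current status along this fixed path, which is what lets users see how far they are from ‘completed’.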

    ITERATIONS + HIGH-FIDELITY WIREFRAMES

    Once I completed all the iterations of the design based on the usability feedback results, I applied the UI to the wireframes and handed off the design to the engineering team via Zeplin.

    Reflection

    This project had complex problems, yet we were able to provide a simple solution. User flows and usability testing provided the most significant help during this project, leading to the success of the new launch/queue test experience. Being able to test with users before shipping provided enormous value and allowed me to make last-minute changes that holistically improved the UX. The progress bar, the simplicity of the modal content and the entire flow have been loved by users since going live. We conducted an E2E usability test on the whole product in December 2021, and it once again proved how satisfied users were with this improvement made in the summer of 2021.

    Next Steps

    • Possibly rethink the two-step modal flow once onboarding is implemented in the self-serve platform to avoid redundant content
    • Future usability tests to uncover further pain points

Copyright © 2016-2022 szanilee.com.
Website Design & Development by Szani Lee.