My Role: UX Designer
About: Reimagining the launch test experience in the Feedback Loop self-serve platform
Team: Myself as a UX Designer + Product Manager, Engineers, UX Researcher
Time: 2 weeks, July 2021
Feedback Loop is an agile research platform that helps businesses collect rapid feedback from real consumers to ensure a product is a hit. During this project, I collaborated as a UX Designer with a Product Manager, Developers and our UX Researcher to reimagine the launch/queue test experience while staying aligned with the entire test lifecycle: draft → submitted OR in-progress → completed. This portfolio piece focuses on the new ‘launch test’ flow and how solving previously existing user pain points improved the overall experience and user satisfaction.
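The lifecycle above can be modeled as a small state machine. This is only an illustrative sketch: the status names come from the case study, but the code names (`TestStatus`, `canTransition`) are hypothetical and not part of the real platform.

```typescript
// Illustrative model of the test lifecycle described above.
// Status names are from the case study; identifiers are hypothetical.
type TestStatus = "draft" | "submitted" | "in-progress" | "completed";

// Allowed transitions: a draft is either queued ("submitted") or
// launched directly ("in-progress"); queued tests launch once a
// spot frees up; running tests finish as "completed".
const transitions: Record<TestStatus, TestStatus[]> = {
  draft: ["submitted", "in-progress"],
  submitted: ["in-progress"],
  "in-progress": ["completed"],
  completed: [],
};

function canTransition(from: TestStatus, to: TestStatus): boolean {
  return transitions[from].includes(to);
}
```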
In the old version, once users had created a new test and were ready to submit it to start collecting responses, clicking the ‘Add to queue’ button on the test creation page caused friction and frustration. It was unclear what clicking ‘Add to queue’ actually did, what the action resulted in, or what users should do next. The only system feedback they received was an update to the status label under the test title field: the status pill changed to either 'submitted' or 'in progress'. Users didn't understand the difference between the two statuses, why a test became 'in-progress' versus 'submitted', or what to expect next.
These core problems made it nearly impossible for most users to understand and track the state of their tests. Users also found it difficult to estimate when results would be available, and some possibly missed deadlines as a result.
We also had to take some technical requirements and development constraints into account. Whether a test launches automatically or is added to the queue once a user clicks ‘Add to queue’ depends on the org’s capacity:
Tests must complete within a maximum of 48 hours: by then, sourcing efforts end and results are available for users to analyze.
Note: Org capacity means the number of test spots available to launch in a given timeframe. Each organization can launch and run only a limited number of tests at the same time; once that capacity is reached, new tests can’t launch right away and have to wait in the ‘submitted’ section until a spot frees up in the ‘in-progress’ section.
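The launch-or-queue decision described above boils down to a simple capacity check. The sketch below is an assumption for illustration only; the names (`OrgCapacity`, `decideTestAction`) are hypothetical, not the platform's real API.

```typescript
// Hypothetical sketch of the launch-or-queue decision described above.
interface OrgCapacity {
  maxConcurrentTests: number; // spots the organization may run at once
  inProgressCount: number;    // tests currently running
}

type TestAction = "launch" | "queue";

// If the org still has a free spot, the test launches immediately
// (status "in-progress"); otherwise it waits in the "submitted"
// section until a running test completes and frees a spot.
function decideTestAction(capacity: OrgCapacity): TestAction {
  return capacity.inProgressCount < capacity.maxConcurrentTests
    ? "launch"
    : "queue";
}
```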
Since this was my first complex project at FBL, I first needed to gather as much information as possible to thoroughly understand the core problems and propose the best solutions. I set up calls with our UX Researcher to review a high-level competitor analysis and see how other platforms handle the ‘launch test’ experience. We also held a quick brainstorming session, where we concluded that the ‘Add to queue’ button copy itself was causing significant confusion because it didn't reflect the actual user action and its outcome.
The ‘Add to queue’ copy is misleading because a test can either go to the queue or launch immediately, so it sets the wrong user expectations. Many users launched tests by mistake, assuming they would be ‘saved’ in the queue with a chance to edit them before launch. They were then frustrated to realize that some tests launched right away with no way to 'undo' the action: a ‘cancel test’ feature still doesn't exist in the platform today. It was therefore crucial to communicate the consequences of clicking the ‘Add to queue’ button.
Based on the specs and conversations with several people from the company and the team, I started mapping out a few potential flows and sketching initial design proposals. The goal was to provide valuable information upfront and throughout the experience as I walked users through the steps. It was essential that users understand what would happen to their test in the upcoming 48 hours (the maximum time within which a test must end up ‘completed’).
To understand better how a user would go through this experience, I created a user flow with key actions, decision points and pages involved.
The user flow helped me make design decisions when sketching my initial layouts. I thought a two-step modal flow could be ideal for addressing the major pain points and the org-capacity technical constraints. The first modal would inform users about what WILL HAPPEN to their test if they proceed, whether that means launching immediately or submitting to the queue. The second modal would confirm what HAD HAPPENED to the test, provide next steps, and add further information related to the test's current status.
I prepared another design in which the indication of whether a test would launch or be added to the queue was visible upfront, before the user clicks the 'Launch test' button. I proposed this to the team as a starting point for eliminating the first modal and keeping only the confirmation one, in case they felt a two-step modal experience might cause modal fatigue.
After discussing the options with the team, we decided to move forward with the two-step modal experience, as it's easier to present crucial information within a modal; on the page itself, the amount of information could have increased users' cognitive load.
We wanted to make sure the flow and content made sense to end users, so we ran usability tests to measure success and identify areas for improvement. I cleaned up the design and content and put together a basic InVision prototype, which I handed off to the UX Researcher to use during usability tests with real users of the platform.
Our UX Researcher led the moderated 1:1 usability tests, but I collaborated closely with her throughout and attended all the Zoom calls with participants so I could ask follow-up questions when needed.
Decisions/changes made based on feedback:
Once I completed all the design iterations based on the usability feedback, I applied the UI to the wireframes and handed the design off to the engineering team via Zeplin.
This project posed complex problems, yet we delivered a simple solution. User flows and usability testing provided the most significant help along the way and led to the success of the new launch/queue test experience. Testing with users before shipping provided enormous value and allowed me to make last-minute changes that holistically improved the UX. Users have loved the progress bar, the simplicity of the modal content and the entire flow ever since it went live. An end-to-end usability test on the whole product in December 2021 once again confirmed how satisfied users were with this improvement made in the summer of 2021.