Improving survey feasibility

About the project

Exchange is a platform that connects market researchers with survey respondents worldwide. Researchers define detailed requirements, called profiling, such as age, gender, or profession. Some profiles are easy to fill (e.g., 10,000 women), while others are much harder (e.g., 10,000 neurosurgeons). The likelihood of filling a profile is called feasibility.

Feasibility helps researchers decide whether to adjust their setup, like loosening criteria or reducing sample size. The challenge: when profiles get complex, users struggle to see what affects feasibility and to understand how it’s shown in the UI.

My role in it

Led the end-to-end design process, including research, concept development, prototyping, UI design, and post-launch testing.

Why it mattered

Customers often contacted support to understand feasibility, which created a poor self-serve experience and wasted their time. It also put unnecessary strain on the support team.

How we did it

Before the redesign, feasibility was shown as a range (e.g., 70–80 out of 170 completes are feasible). If it wasn't 100%, a warning icon appeared next to it. I set out to explore why this seemingly simple UI caused so much confusion for users.

Research

Methods

• Reviewed customer complaints and feedback
• Spoke with support to uncover recurring issues
• Interviewed two highly active customers

Research objective

To understand why customers struggle with feasibility—what makes it confusing, how it affects their ability to launch surveys, and what information or UI changes would help them act on it without relying on support.

Key findings

• Customers didn’t understand what caused low feasibility or how to fix it
• The red “!” icon was confusing—did low feasibility mean they couldn’t launch?
• Customers were unclear why feasibility was shown as a range

Problem statement

Researchers struggle to understand survey feasibility. They don’t know what drives low feasibility, how to fix it, or what the current UI indicators mean. This leads to confusion, poor self-serve experience, and frequent support requests.

Ideation

Define success

• At least 80% of tested users can accurately explain feasibility without help.
• At least 80% of tested users can identify one concrete action to increase feasibility.
• 30–40% fewer feasibility-related tickets within 3 months of launch.

Design problems

How can I show feasibility in the top bar so users can instantly assess it?

• Introduced a donut chart to show “part of the total,” replacing confusing raw numbers.
• Removed the warning icon, since it wasn’t an actual warning, just information.
• The donut chart changes color (red → yellow → green) depending on feasibility level.
• Used the most likely number of completes instead of a range. While not exact, it gives a good enough high-level approximation.
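The traffic-light behavior above boils down to a simple threshold function. As a sketch (the cutoff values here are illustrative assumptions, not the product's actual thresholds):

```python
def feasibility_color(feasible: int, requested: int) -> str:
    """Map a feasibility ratio to a traffic-light color for the donut chart.

    Thresholds are hypothetical, chosen only to illustrate the idea.
    """
    ratio = feasible / requested
    if ratio >= 0.9:
        return "green"   # almost all requested completes are feasible
    if ratio >= 0.5:
        return "yellow"  # partially feasible; worth reviewing the setup
    return "red"         # most of the requested completes are not feasible

print(feasibility_color(75, 170))  # → red (75/170 ≈ 44%)
```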

How can I present feasibility in more detail so users understand it’s a range and how that translates into achievable completes?

I explored different charts to show how feasibility maps to completes.

After testing with internal teams (a good proxy for customers), I chose a histogram, which highlights the highest probability and makes the range easier to understand.
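To make "the most likely number of completes" concrete: if the system estimates a probability for each possible completes count (the bars of the histogram), the headline figure is simply the mode of that distribution. A minimal sketch, assuming such an estimate exists; this is not the actual estimation logic:

```python
def headline_completes(distribution: dict[int, float]) -> int:
    """Pick the most probable completes count from an estimated
    distribution (completes -> probability). Illustrative only."""
    return max(distribution, key=distribution.get)

# Hypothetical estimate for a 170-complete request
estimate = {60: 0.05, 70: 0.25, 75: 0.40, 80: 0.25, 90: 0.05}
print(headline_completes(estimate))  # → 75
```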

How can I guide users to improve low feasibility?

To make low feasibility actionable:
• Suggested high-level actions users could take directly from a popover
• Surfaced potential issues with quotas and profiling, pointing users to where changes could help

Usability tests

• 10 out of 10 tested users could clearly explain feasibility after seeing the redesign
• 9 out of 10 tested users successfully improved feasibility using the new UI

What it achieved

The redesign made feasibility clearer, more actionable, and easier to trust. Researchers could instantly understand feasibility, see why it was low, and take steps to improve it directly in the UI. As a result, feasibility-related support tickets dropped by 32% post-launch, landing within the 30–40% reduction we had set as a success metric.