Improved survey feasibility

About the project

Exchange is a platform that connects market researchers with survey respondents worldwide. Researchers define detailed requirements—called profiling—such as age, gender, or profession. Some profiles are easy to fill (e.g., 10,000 women), while others are much harder (e.g., 10,000 neurosurgeons). The likelihood of filling a profile is called its feasibility.

Feasibility helps researchers decide whether to adjust their setup, like loosening criteria or reducing sample size. The challenge: when profiles get complex, users struggle to see what affects feasibility and to understand how it’s shown in the UI.

My role in it

Led the end-to-end design process, including research, concept development, prototyping, UI design, and post-launch testing.

Why it mattered

Customers often contacted support to understand feasibility, which created a poor self-serve experience and wasted their time. It also put unnecessary strain on the support team.

How we did it

Before the redesign, feasibility was shown as a range (e.g., 70–80 out of 170 completes are feasible). If it wasn’t 100%, a warning icon appeared next to it. I set out to explore why this seemingly simple UI caused so much confusion for users.

Research

Methods

• Reviewed customer complaints and feedback
• Spoke with support to uncover recurring issues
• Interviewed two highly active customers

Research objective

To understand why customers struggle with feasibility—what makes it confusing, how it affects their ability to launch surveys, and what information or UI changes would help them act on it without relying on support.

Key findings

• Customers didn’t understand what caused low feasibility or how to fix it.
• The yellow/red “!” icon was confusing—did low feasibility mean they couldn’t launch?
• Some customers didn't know what to expect from wide feasibility ranges—should they launch the survey or not?

Problem statement

Researchers struggled to understand what caused low feasibility, how to fix it, what the UI indicators meant, and what to expect from wide ranges. This led to confusion, a poor self-serve experience, and frequent support requests.

Ideation

Define success

• At least 80% of tested users can accurately explain the logic behind the feasibility range.
• At least 80% of tested users can identify actions to increase feasibility to a satisfactory level.
• 30–40% fewer feasibility-related tickets within 3 months of launch.

Design problems

I broke these goals down into more digestible guiding questions.

How can feasibility be clear at a glance — without the confusion of ranges and warning icons?

• To fix the confusion around ranges, I switched to showing the most likely number of completes instead. It’s not exact, but it gives a clear high-level view.
• To remove the confusing warning icon — and reduce the mental load of calculating “part of the total” — I introduced a donut chart to show this visually.
• And third, to help users quickly judge how good feasibility is, and to get a warning when it’s too low, the donut chart changes color from red through yellow to green (a rough sketch of this logic follows the list).
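
To make the donut behavior concrete, here is a minimal TypeScript sketch of how such an indicator could be derived. The buildIndicator helper, its signature, and the 50% / 90% color thresholds are illustrative assumptions, not the product’s actual logic.

```typescript
// A minimal sketch, assuming a hypothetical buildIndicator helper and
// illustrative thresholds; not the product's actual implementation.

type FeasibilityColor = "red" | "yellow" | "green";

interface FeasibilityIndicator {
  fillFraction: number;    // portion of the donut to fill (0–1)
  color: FeasibilityColor; // at-a-glance judgment of how good feasibility is
  label: string;           // most likely completes, shown next to the donut
}

function buildIndicator(
  mostLikelyCompletes: number,
  targetCompletes: number
): FeasibilityIndicator {
  const fillFraction = Math.min(mostLikelyCompletes / targetCompletes, 1);

  // Assumed cut-offs: under 50% of the target reads as red,
  // 50–90% as yellow, 90% and above as green.
  const color: FeasibilityColor =
    fillFraction < 0.5 ? "red" : fillFraction < 0.9 ? "yellow" : "green";

  return {
    fillFraction,
    color,
    label: `~${mostLikelyCompletes} of ${targetCompletes} completes`,
  };
}

// Example: 130 likely completes against a 170-complete goal renders a yellow,
// roughly three-quarters-filled donut labeled "~130 of 170 completes".
```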

But showing just an approximation wasn’t enough. While it worked for conveying feasibility at a high level, the reality is that feasibility is always a range — and users still needed to understand what that meant for their completes.

How can I present feasibility in more detail so users understand it’s a range and how that translates into achievable completes?

I explored different charts to show how feasibility maps to completes. After testing with internal researchers — a good proxy for customers — I chose a probability distribution curve. It clearly explained the number in the top bar as the most likely completes, while also making the range easier to understand.

The last question I asked myself was…

How can I guide users to improve low feasibility?

To help researchers improve feasibility:
• Suggested high-level actions users could take directly from a popover
• Surfaced potential issues with quotas. Clicking on “45–54 year olds,” for instance, would take users to the quotas table, where a warning mark highlighted the issue.
• Added feasibility at the profile level — a lighter indicator compared to specific quotas — so users could still see feasibility when the accordion was collapsed.
• Added the completes goal in the quotas table, so users could easily compare feasibility against their desired target.

Usability tests

I needed to make sure users had the same mental model as I did. The tests were successful:
• 10 out of 10 tested users could clearly explain feasibility after seeing the redesign 
• 9 out of 10 tested users successfully improved feasibility using the new UI 

What it achieved

The redesign made feasibility clearer, more actionable, and easier to trust. Researchers could instantly understand feasibility, see why it was low, and take steps to improve it directly in the UI. As a result, support tickets related to feasibility dropped by 32% post-launch!