Solving a 50% Attrition Rate

With the California Department of Public Health, we led a research initiative to uncover why 50% of users were disengaging from a COVID-19 virtual assistant resource tool. Through in-depth interviews and data analysis, we identified emotional, behavioral, and trust-related barriers that were driving drop-off. Our findings informed actionable design recommendations that reshaped the user experience—restoring trust, increasing engagement, and helping more Californians take informed steps to protect their communities.

A GIF showing the first few steps in the CDPH Virtual Assistant COVID case investigation survey

Team Structure

2 Service Designers

Process

Interview Design and Facilitation

Qual and Quant Data Analysis

Product Recommendations

Problem

At the height of the pandemic, California's Virtual Assistant (VA) tool was meant to guide COVID-positive individuals through their next steps. But half of users weren’t making it past question four, and a staggering 80% didn’t even start after clicking the SMS link. A tool built to protect public health was being abandoned, and we needed to understand why.

What we delivered

We designed research-backed recommendations that transformed a confusing and underutilized COVID-19 survey into a more trustworthy, human, and emotionally supportive experience. By identifying key drop-off points and trust barriers, we laid the groundwork for increasing user engagement and completion rates—helping more Californians access crucial health guidance during a time of uncertainty.

This work directly informed the redesign of the CDPH Virtual Assistant, streamlining the experience and ensuring more people felt safe, understood, and motivated to participate—boosting public health outcomes at scale.

Research approach

Simulated real-life behaviors

We asked participants to imagine they just tested positive for COVID-19 and received the SMS survey. We designed the session to simulate real-world conditions as closely as possible—including the option to quit, just like they might in real life.

Data collection and analysis

We conducted qualitative interviews and triangulated our findings with quantitative drop-off data to understand friction points. Through affinity mapping and behavioral analysis, we surfaced common moments of confusion, mistrust, and overwhelm.
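
To make the quantitative side of that triangulation concrete, here is a minimal sketch of a drop-off funnel calculation. It assumes a simplified, hypothetical event log (the furthest question each user reached); the actual CDPH data model and fields aren't part of this case study.

```python
# Minimal drop-off funnel sketch. The input format is hypothetical:
# for each user, the furthest question index they answered
# (0 = clicked the SMS link but never started the survey).
from collections import Counter

def dropoff_funnel(last_question_reached: list[int], total_questions: int) -> dict[int, float]:
    """Return the share of users still engaged at each question."""
    n_users = len(last_question_reached)
    stopped_at = Counter(last_question_reached)
    funnel = {}
    survivors = n_users
    for q in range(total_questions + 1):
        funnel[q] = survivors / n_users
        survivors -= stopped_at[q]  # users who stalled at question q drop out here
    return funnel

# Example: 10 users, with a visible stall around question four
sample = [0, 0, 1, 3, 4, 4, 9, 9, 9, 9]
for q, share in dropoff_funnel(sample, total_questions=9).items():
    print(f"question {q}: {share:.0%} of users still engaged")
```

A funnel like this, read alongside interview notes, points to which questions deserve the most probing in qualitative sessions.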

Responsive testing environment

We created an interactive prototype so participants could respond as they would in real life. We asked them to think aloud, letting their reactions steer the interview rather than a rigid script of questions.

Feature recommendations grounded in data

We synthesized interview and usage data to identify key patterns in trust, flow, tone, and timing. These insights shaped clear, actionable design directions—each tied to a specific user need uncovered through testing.

Interview guide

Introduction + overview, including double-checking consent

  1. Overview and time constraints of the interview, ensuring that 45 minutes was still feasible for them

  2. Setting the scene: imagine you just tested positive for COVID-19, and act as you would in real life

  3. Provide them with an out: if they would actually quit the survey, we wanted them to!

  4. Based on their behavior and choices, a branching set of questions to understand:

    1. Why they finished the survey, confirming that they actually would in real life (and weren’t satisficing)

    2. Why they chose to quit the survey, probing to see their reactions

  5. Exit questions to understand their emotions regarding the VA, if anything was confusing at any point, and of course, if they’d like to add any thoughts that we didn’t cover in the session

Interview participants

A majority of participants (6 of 9) identified as female; the remaining three identified as male. Four were from Southern California, and the rest were from scattered parts of mid-Northern California. All participants indicated that they take precautions to slow the spread of COVID-19, such as wearing masks and limiting social interactions where applicable.


Recurring themes

Survey Fatigue

Users didn’t know how long the survey would take and worried about time-consuming questions.

Trustworthiness

Participants hesitated to click the initial VA link, assuming it was a phishing scam; even after learning it wasn’t, they hesitated to share contacts’ information without context or consent.

Emotional state

A positive COVID-19 result created anxiety; the tone and format of the VA didn’t account for this and instead came across as brusque.

Disjointed experience

The survey felt disconnected from earlier testing touchpoints, reducing confidence and motivation.

Near drop-off moments

Moments of hesitancy mapped out in a table with quotes from various participants, such as: "I am hesitant to click on random links for security reasons"

Key quotes

“I am hesitant to click random links. I would need more information or a reminder of where this information is coming from.”

— participant two

“This feels like doing my taxes”

— participant eight

“I don't feel comfortable providing information about others without knowing how the information will be used. My contacts probably don't want me to either.”

— participant six

Insights and design recommendations

01 - Connected Journeys

Demonstrating that this is merely a continuation of the preceding testing touchpoint is key to ensuring the user trusts the SMS entry point and chooses to engage with the virtual assistant. And in a world of nebulous SMS scams, early signs of authenticity are a must.

  • A 5-digit SMS sender number, a URL ending in .gov, the CDPH logo, and timestamps engender confidence in authenticity.

  • Lead with simple, specific references that call back to information gathered during the testing interaction (e.g., "we have test results" rather than "important health issues").

  • Give users multiple signals to quickly identify the text message's legitimacy and verify that the communication is truly coming from CDPH (e.g., information that can be cross-referenced and verified on the CDPH website).

  • Use consistent markers (icons, imagery, branding, vocabulary) to unite the SMS text message with the VA tool's authentication page.

  • Be clear at the outset about how the user's data will be handled.

  • Justify this tool specifically in contrast with alternative channels (e.g., Bluetooth contact tracing).

02 - Uneventful Transactions

A positive COVID-19 test result is inconvenient at best, terrifying at worst. A survey that feels brief and seamless, and that shoulders the work on behalf of the user, will ensure they don't experience any added, undue burden.

  • The tool should be responsive, pre-populating known data when possible and acknowledging information already provided by the user elsewhere in the survey.

  • Users appreciate a notion of survey completeness; indications of total survey length and progress will encourage them to proceed.

  • It should be clear what happens if a user needs to exit the survey early and there should be mechanisms to save progress.

  • The distinction between optional and mandatory sections and fields in the survey should be clearer.

  • An option to "skip" sections should be visible where possible.

  • The most tedious and labor-intensive portion of the survey came when users had to enter information for their contacts.

03 - We have you covered

The experience should be personable, informative, and emotionally supportive to help in a time of stress. The expectation of a COVID-19 test result can be anxiety provoking, but the right approach, voice, and tone can make a world of difference.

  • Emotional appeals following a positive result make the user feel that they aren't alone (e.g., “we recognize this is difficult news... we will give you the information and next steps to take care of yourself and your loved ones...”).

  • Informal, conversational voice and tone is comforting and feels appropriate for the scenario.

  • Direct, simple, and clear steps early on are indispensable for those who just need to be told what to do in a time of stress.

  • An "avatar" could help make the chatbot feel more personable and human.

  • Thorough context setting before personal or uncomfortable questions is appreciated (e.g., in advance of the employment question).

  • Incorporating additional resources (e.g., mental health information, answers to common questions) at the end of the survey, accessible at any time, can be a source of comfort.

04 - Handle with care

Sharing information with the government is a sensitive matter. What will be gained and how information will be used are key concerns expressed by people. Sharing information about others without their consent requires a high level of clarity and trust.

  • Outlining early on which information won't be collected helps to create trust.

  • Provide clarity about how the government will contact other people and employers, and what details will be disclosed.

  • Understand that some people prefer to reach out on their own, and some have already communicated the news to their contacts post-exposure/pre-results.

  • Give cues for people to remember who they might have come into contact with and reassure them that there are no consequences if they can’t remember everyone.

  • Users would like to preview the communication and experience they're volunteering their contacts for before they do so.

05 - Right info, right time

Strike the right balance between succinct, necessary instruction and deeper, supplemental information. Most people want to be told quickly and clearly what to do following a positive result, but preferences vary when it comes to how they want to be told. People want to skip to the relevant information as soon as possible.

  • Brevity is paramount: condense instructional and contextual copy as much as possible to reduce reading fatigue.

  • Provide optional pathways for people to learn more about their COVID-19 result and how it impacts someone like them.

  • Users are looking for quick and commanding directives when faced with a positive COVID-19 test result. High level "TL;DR" ("too long; didn't read") takeaways will prioritize focus and help the user feel guided when they need timely instruction.

  • It is critical to iterate on section ordering and overall flow to ensure similar prompts are positioned near each other or consolidated (e.g., "places" and "gatherings") to avoid question redundancy.

  • Users should be able to save all information from the chatbot conversation for later review.

  • Updating the style of drop-downs to be more discoverable might help people notice new information to check for and understand the ever-changing nature of the virus.

Impact

Our research didn’t just generate recommendations—it drove real product changes that empowered Californians to take safer, more informed action.

By reframing the entry point and aligning the tone with users’ emotional state, we helped the state reduce drop-off rates and improve survey completion. Clearer communication around purpose and length allowed users to make confident, informed decisions rather than abandoning the process out of confusion. Our insights shaped a more guided, intuitive experience that enabled self-service and reduced reliance on overwhelmed contact tracers. Most importantly, the tool became more human, trustworthy, and engaging—helping to rebuild public confidence in state health communication.

Takeaway

This project marked my first time leading a client-facing research engagement—an experience that challenged and stretched me in new ways. I learned how to balance structure with flexibility, guiding a research plan while staying open to unexpected insights. Collaborating directly with state stakeholders sharpened my ability to communicate findings with clarity and impact, and strengthened my confidence in facilitating alignment across cross-functional teams. It was a pivotal moment in my growth as a designer and researcher, and I carry forward the lessons of leadership, empathy, and adaptability into every project I take on next.