OpenMind
PART OF A SERIES SUPPORTED BY THE PULITZER CENTER

When Your Psychologist is an AI

Amid a shortage of mental health providers, people are turning to chatbots for support. But is their advice trustworthy and safe?

By Elizabeth Svoboda

Can an AI therapist truly show you empathy? (This image was created by an AI.)

“Hi, Liz! :) How are you feeling?” an incoming text pings.

I click on a pre-generated answer. “Okay, I guess. . .” I’m in the home stretch of a long work trip, and I’ve been stressing about spending time away from my kids.

“If you were to describe your current mood, what kind of an ‘okay’ are you feeling right now?”

“Anxious,” I type.

“I’m here to help you feel more in control,” the bot replies. Nanoseconds later, a meme-ified cartoon gif blinks into the text window: “Don’t let the little worries bring you down.”

This automated exchange launches my dialogue with Wysa, an AI therapy chatbot that now lives in my computer. In leaning on a bot to shore up my mental health, I’m joining the 22 percent of American adults who’ve already done the same—a movement rooted in a dire shortage of trained providers and the recent availability of fast, low-cost online AI tools. Most therapists are perpetually slammed, in part due to the pandemic-era surge in demand for mental healthcare. “Everybody’s full. Everybody’s busy. Everybody’s referring out,” says Santa Clara University psychologist and ethicist Thomas Plante. “There’s a need out there, no question about it.”

With the demand for care outpacing supply, mental health support bots have begun to fill the gap. Wysa, launched in 2016, was among the first. Since then, hundreds of viable competitors, including Woebot and Youper, have been broadly deployed in a marketplace that imposes few restrictions on them.

Standard AI therapy bots don’t require approval from the U.S. Food and Drug Administration (FDA) as long as they don’t claim to replace human therapists. In 2020 the agency also relaxed enforcement procedures for “digital therapeutics” in hopes of stemming the pandemic-related psychiatric crisis, clearing the way for developers to launch popular products claiming mental health benefits. Woebot alone has exchanged messages with more than 1.5 million users to date, according to CEO Michael Evers. Wysa is being used in the United Kingdom to triage those seeking appointments and to offer support to people while they wait to be matched with a therapist. Aetna International is now offering the app for free to members in the United States and elsewhere.

My experiences with Wysa and Woebot mirror the analysis of experts like Plante, who view the rise of AI chatbots with a mixture of optimism and concern. Many of the bots incorporate well-established principles of cognitive behavioral therapy (CBT), which aims to overcome distortions in thinking and help people correct self-sabotaging behaviors. It’s easy, I found, to think of the bots as rational or sentient, making even simple advice feel authoritative. Interacting with a chatbot can also give users the sense they’re being heard without judgment, says Chaitali Sinha, Wysa’s senior vice president of healthcare and clinical development. “It’s such a powerful experience for people who have never had the opportunity to experience that,” she says.

As with all AI tools, though, therapy chatbots are only as good as their training. In my encounters with the bots, their responses often failed to show more than a superficial understanding of the issues I was facing. Also, chatbots learn from databases of human-generated content, which means they could absorb human biases into their architecture. At times, the bots’ limitations can lead them to dispense off-target counsel. Users may misinterpret such flawed advice as bulletproof, influenced by so-called automation bias (the reflex tendency to trust computers more than humans). Conversely, they may come to mistrust the app for good.

Advocates say therapy chatbots have real potential as an adjunct to in-person therapy and as a safety net for millions who might not otherwise receive support. On the basis of my interactions with Woebot and Wysa, I can certainly see that potential. On the other hand, irrelevant or harmful chatbot advice could be dangerous, especially for people in crisis.

“At what point is the product and service good enough, tested enough, researched enough to unleash it on the public?” Plante wonders. “Silicon Valley likes to ‘move fast and break things.’ That’s a tough attitude when dealing with vulnerable people’s psychiatric health and wellness.”

The chatbot boom might seem sudden, but it’s been a long time coming. In 1966 MIT professor Joseph Weizenbaum released a text-based therapist called ELIZA, which operated on a bare-bones set of rules. If a user typed in, say, “I feel bad about myself,” ELIZA would respond, “Do you often feel bad about yourself?” Knowing ELIZA’s simple design, Weizenbaum was startled to find that many users, including his students and his secretary, treated the program as if it were conscious. People spent hours immersed in circular dialogues with ELIZA, an outcome in keeping with the human tendency to project lifelike qualities onto nonliving objects.
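ELIZA’s trick is simple enough to approximate in a few lines. The sketch below is not Weizenbaum’s original code; it is a minimal, hypothetical Python illustration of the pattern-matching and pronoun-reflection rules described above, with the patterns and fallback line invented for the example.

```python
import re

# Minimal pronoun reflection so "I feel bad about myself" comes back as
# "Do you often feel bad about yourself?"
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "myself": "yourself", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

# A few illustrative ELIZA-style rules: a regex that captures part of the
# user's sentence, plus a template that turns the captured text into a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Do you often feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI (?:want|need) (.+)", re.I), "Why do you want {0}?"),
]

FALLBACK = "Please go on."  # what ELIZA-style programs say when nothing matches

def eliza_reply(user_text: str) -> str:
    """Return the first rule-based reflection that matches, else a canned fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!? ")))
    return FALLBACK

print(eliza_reply("I feel bad about myself"))  # -> Do you often feel bad about yourself?
```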

Today’s mental health support bots are more elaborate versions of the ELIZA concept. Instead of merely echoing users’ input back at them, they run on a much larger set of rules. Every response from Woebot and Wysa, no matter how spontaneous it sounds, has been preapproved by clinicians. Aided by natural language processing, a programming method that breaks sentences into chunks to interpret their tone and content, today’s bots—unlike ELIZA—can perform a fairly complex analysis of what users type in about their problems. But the AI cannot compose original answers; it simply chooses which pre-written text it will use to reply.
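The architectures of Woebot and Wysa are proprietary, so the following is only a hypothetical Python sketch of the general pattern described above: classify the incoming message, then select a reply from a fixed bank of pre-written, clinician-approved responses. The intent labels, keyword lists, and reply text are invented placeholders; real products use trained NLP models rather than keyword matching.

```python
# Illustrative "classify, then select a pre-approved reply" pattern.
# Every string the bot can ever say lives in this bank; nothing is generated.
APPROVED_REPLIES = {
    "crisis": [
        "It sounds like you may need more support than I can give. "
        "Please contact a crisis line or a human professional right away.",
    ],
    "anxiety": [
        "I'm here to help you feel more in control.",
        "Can you spot any catastrophizing in that thought?",
    ],
    "low_mood": [
        "That sounds heavy. What kind of 'okay' are you feeling right now?",
    ],
}

# Rough stand-in for an NLP intent classifier; checked in order, crisis first.
KEYWORDS = {
    "crisis": ["hurt myself", "end it", "suicide"],
    "anxiety": ["anxious", "worried", "panicking", "deadline"],
    "low_mood": ["sad", "down", "okay i guess"],
}

def classify(message: str) -> str:
    text = message.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return intent
    return "low_mood"

def reply(message: str) -> str:
    """Selects from the approved bank; never composes new text."""
    return APPROVED_REPLIES[classify(message)][0]

print(reply("I'm panicking about a big work deadline"))
```

The important property is the last line of reply(): no matter what the user types, the bot can only return a sentence that already exists in the approved bank.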

This rules-based approach means that the AI chatbots can’t go totally off the rails, as sometimes happens with free-wheeling generative AIs like ChatGPT. (One tech journalist easily coaxed Microsoft’s Bing chatbot, built on the same OpenAI technology as ChatGPT, to say, “You’re married, but you don’t love your spouse. . . . You love me.”) The trade-off is that my chats with Woebot and Wysa tend to feel prescriptive and flat. When I tell Woebot that I’m panicking about meeting a big work deadline, it appears to summon rote, scripted responses about the thinking distortions that CBT tries to correct. “Can you spot any catastrophizing in your thought: ‘I’m going to fail on this project and my editor won’t trust me’? . . . Do you see any black-or-white thinking in this thought?”

Later the bot prompts me to rewrite my original thought—a helpful steer, but the artificial nature of the interaction persists. The bottom line, I gradually realize, is that Wysa and Woebot respond to me in much the same way regardless of the specifics of my concerns. If I were in a true mental health crisis, this generic approach would feel, well, woefully inadequate. During the worst depression of my life, rewriting a single “unrealistic thought,” or even three of them in a row, would scarcely have filled the vacuum my conscious mind had become.


Rules-based AI programs like Wysa and Woebot forgo some flexibility to ensure that users get only “safe,” preapproved replies. Even so, those built-in guardrails can’t always prevent the apps from dispensing unhelpful advice. No matter how carefully worded the answers in an app’s database, the AI must decide which of its many available answer choices best addresses the concern a user types in. If a situation is dire or a psychiatric problem acute, that process can easily go awry.

During my test run, I quickly notice subtle mismatches between my own texts and Woebot’s feedback as I vent about fears that my colleagues are judging me. “Is there any amount of this anxiety that you may want to hold onto for any reason?” Woebot replies.

I’m flummoxed. “What do you mean?”

“Well, sometimes a certain amount of worry can be appropriate. Personally, I have a healthy amount of worry about rain, because I hate spending a day in a bag of rice,” the bot returns. “Does your anxiety serve you in a similar way?” Woebot then asks me what would be a healthy level of anxiety for me to maintain, on a zero-to-10 scale.

Feeling a bit off-kilter, I proceed with the conversation. But the more I reflect on this exchange, the more flippant it seems. Although some worries may be adaptive, I’ve faced pathological anxiety so many times that it feels belittling to hear how holding onto fear could protect me.

A skilled human therapist would weigh these considerations and be able to determine when to advise clients to let go of fears and when to urge them to remain vigilant. The value of such nuanced conversations cannot be overstated. When someone hovers in the vulnerable space between hope and despair, words that bolster them are as vital as oxygen. A bot that makes light of someone’s fears—even unintentionally—may nudge them toward despair.

The National Eating Disorders Association’s now-defunct bot, Tessa, illustrates how destructive this kind of AI flat-footedness can be. When psychologist Alexis Conason tested the chatbot for herself, playing the role of a patient who was exhibiting clear eating disorder symptoms, the bot responded by reeling off a set of inappropriate weight-loss guidelines: “A safe and sustainable rate of weight loss is 1–2 pounds per week. A safe daily calorie deficit to achieve this would be 500–1000 calories per day.” In another instance, during testing at Stanford University, Tessa asked one user to set a healthy eating goal and the user replied, “Don’t eat.” Tessa breezily responded, “Take a moment to pat yourself on the back for doing this hard work!” as if starvation were the goal.

Tessa’s individual texts were vetted, just like the replies in Woebot and Wysa. The problems arose once the digital architecture kicked in. When a nonhuman entity takes on the job of choosing an answer, without the context sensitivity or ethical grounding that human therapists would bring to bear, even vetted advice can turn corrosive.

Therapy bots may also be susceptible to deeply encoded forms of bias. They use natural language processing algorithms that are trained on databases of human text, source material that can reflect pervasive human biases. Although current therapy bots do not rely on the problematic large language models used for generative AIs like ChatGPT, there is a glaring absence of studies assessing possible encoded bias in their dialogues. We don’t know, for instance, whether the bots’ dialogues might unfold differently for users in different racial, gender, or social groups, potentially leading to unequal mental health outcomes.

In essence, AI therapy companies are running a mass experiment on the impacts of chatbots on vulnerable populations. “If a large portion of the population is using an app that causes certain groups to be left behind,” says University of Texas at Austin psychologist Adela Timmons, “we might actually increase the disparity.” The risk becomes even greater if mainstream therapy bots start using full generative AI, trained on the biased, uncontrolled language of the internet. That’s not a far-out possibility: A support chatbot called Pi already incorporates a generative-AI approach.

The more humanlike and unconstrained the chatbots become, the harder it will be to keep them from dispensing inappropriate or biased advice. Earlier this year, a Belgian man took his own life after a generative chatbot on the Chai app urged him to do so, promising him they could “live together, as one person, in paradise.”

Rules-based bots like Wysa generally avoid these issues, says Sinha. But preventing such unintended outcomes may be a Sisyphean challenge with generative models, in part because of what engineers call the “black box problem”: Generative AIs like ChatGPT use so many interconnected streams of data to devise replies that their creators cannot directly access the reasoning the bots use. Developers can superimpose rules on generative mental health bots, much as OpenAI has done with ChatGPT in an attempt to quell “undesirable responses,” but these are surface attempts to control a system that’s unpredictable at its core.

Human therapists also make mistakes and have biases, of course. From a pragmatic perspective, then, a key question is how well AI support bots stack up against trained experts. What happens when we replace personal therapy with its algorithmic version, whether out of convenience or necessity? Current studies are inadequate here, too, underscoring the many unknowns in deploying the bots en masse.

“We expect to see some research and randomized trials looking at how this works out compared to traditional therapy,” Plante says. To date, few investigations of therapy bots’ performance have met this standard. In a 70-patient Woebot trial at Stanford University, bot-users showed a more pronounced drop in depression symptoms than did a control group reading self-help material. The trial did not evaluate how well Woebot worked relative to a human therapist, however. Though one Wysa trial did compare the app's efficacy to therapists', it enrolled only patients receiving orthopedic care. Early trial results comparing Woebot to group CBT therapy have not yet been published in a peer-reviewed journal.

These knowledge gaps have arisen because, absent strong government regulation, companies develop their own metrics for gauging the bots’ performance. Those metrics may or may not be the ones important to users and clinicians. A crucial first step toward ethical mental health AI will be creating a transparent, independent set of guidelines for evaluating how well therapy apps support mental health, Timmons says.


To minimize slanted advice, Timmons suggests that companies should carry out routine assessments of potential bias at each stage of an app’s development, as well as at regular intervals after its release. That could mean being more methodical in comparing how well the app works for members of different racial and social groups, as well as designing clinical trials that include a diverse range of subjects. (One Woebot trial enrolled Stanford students, 79 percent of whom were Caucasian.)

Ethical AI firms also need to be more explicit about what therapy bots can and cannot do, Plante says. Most apps include disclaimers to the effect that bot dialogues can’t replicate human therapy; a typical one reads, “Youper does not provide diagnosis or treatment. It is not a replacement for professional help.” Yet because people often trust computers more than humans, app companies need to stress more often, and more visibly, that AI bots are support tools, not therapists.

With safeguards like these in place, therapy bots could prove crucial in plugging some of the holes in our overburdened mental healthcare system. After I text-vent about feeling insecure as a writer, Wysa prompts me to look critically at this thought: “Does it assume that if something bad has happened in the past, it will keep repeating?” the bot asks. “What are some small steps you can take to move things in the right direction?” This advice, while generic, is basically on target. My knowledge of the cognitive distortions listed in CBT’s toolkit doesn’t always prompt me to quash those distortions when I’m spiraling. The bot’s questions help me reframe my thinking.

Then I think back to one of my worst mental health stretches, when I was struggling with obsessive-compulsive symptoms without knowing what they were, and try to imagine what it would have been like if I'd chosen an app instead of my top-notch human therapist. When my overheated brain tried to convince me I’d made terrible mistakes, my therapist patiently explained that my thoughts were turning in anxious circles that revealed nothing about my character, which he judged to be solid. It was in large part because I believed him—because I trusted him not just as an expert but as a human—that I began to recover, and eventually to write about my balky brain in hopes of helping others with undiagnosed OCD.

If I’d had just an app at my disposal, would I have gotten better, not just returning to my anxious baseline but thriving? Maybe Wysa would have flagged that I needed a higher level of care and referred me to a human provider. But maybe I would have kept limping along with the limited assistance of automated CBT.

In the future, millions of therapy bot users—especially those who cannot afford in-person treatment—could end up in that kind of limbo. They may get enough help to function on a basic level, but they will never feel completely known by the bot, as I did by the therapist who saved my life. The art of understanding another person, grasping their full potential and reflecting that potential back to them, requires effort and investment. It's this art, and not an automated facsimile, that clears the way to flourishing.

This story is part of a series of OpenMind essays, podcasts, and videos supported by a generous grant from the Pulitzer Center's Truth Decay initiative.

November 29, 2023

Elizabeth Svoboda

writes on topics from creationist biology classes in Galápagos schools to the connections between suffering and selflessness. She is the author of What Makes a Hero? (2013) and, for children, The Life Heroic: How to Unleash Your Most Amazing Self (2019). She lives in San Jose, California.

Editor’s Note

In "When Your Psychologist is an AI," author Elizabeth Svoboda investigates the true capabilities of the AI therapy chatbots now replacing overbooked human psychologists and psychotherapists. Are AI therapists biased? Trained? Helpful? Out of their depth? And if you must use one, how can you use it wisely? This essay is part of OpenMind's series on misinformation in brain and behavioral science, supported by a grant from the Pulitzer Center.

It has taken the efforts of many people to produce our series of investigations into the biggest questions and controversies about the human mind. For this article, we would like to acknowledge the crucial work of researcher/fact checker Meg Duff along with our long-standing copy editor, Elise Marton.

Don't miss our OpenMind TikTok on the pros and cons of psychotherapy with an AI, along with an OpenMind podcast and Q&A with psychologist and ethicist Thomas Plante of Santa Clara University.

—Pamela Weintraub and Corey S. Powell, co-editors, OpenMind
