TikTok: Companies Are Selling AI Therapy. Should You Buy It?

Will a chatbot ever have the empathy to be your therapist, for real?

By Ben Rein

Is AI up to the task of boosting your mental health? Credit: Alamy

[Embedded TikTok by @dr.brein: "Can an AI chatbot really be your therapist?" Supported by the Pulitzer Center through the Truth Decay Grant Initiative, in collaboration with OpenMind Magazine.]
Can an AI therapist help you through the day?

Would you let an AI chatbot be your therapist? A recent study set out to answer that question: researchers fed about 200 questions from the "Ask the Doctors" page on Reddit to an AI chatbot, then placed its answers alongside responses from real human doctors and asked healthcare providers to judge which was better, without telling them which was which. What do you think happened? The evaluators rated the chatbot's answers as better 78% of the time and found they were more likely to be empathetic.

But this raises a key point: empathy.

Everybody knows that ChatGPT can't feel emotions. It therefore isn't capable of empathy, because it can't truly understand what you feel. And some scientists think this is where AI loses: chatbots will never work as therapists, because humans won't accept or appreciate empathy from a robot.

When a real company, Koko, tried using chatbots, it didn’t work because people knew they were chatbots. The patients didn’t care when the chatbot said, “I understand how you’re feeling” because they knew it was an empty, emotionless statement.

But it makes me wonder: if chatbots continue to gain use and acceptance, and we come to respect them more, could this change? I'm curious how you'd feel about that.

If, 100 years from now, AI chatbots are considered trained psychiatrists, would that be good or bad for society? It might seem like a ridiculous question, but it's real life. Right now, we essentially hold that decision in our hands. We are the first humans to coexist with these large language models, and we actively vote as consumers—with our clicks and our wallets—to determine the future of AI. In what capacity will we come to embrace AI? Where do we draw the line? It's something to think about as we navigate this new virtual world. Thank you for your interest, and please follow for more science.

December 7, 2023

Ben Rein, PhD, is a Stanford-trained neuroscientist who worked in Robert Malenka's lab. He currently serves as the Chief Science Officer of the Mind Science Foundation.

Editor’s Note

OpenMind is thrilled to be partnering with neuroscientist and science communicator Ben Rein on a series of TikToks as part of our "Misinformation in Mind" project. In this video, Rein explores the complexities of AI psychotherapy, which has already attracted millions of users. (You can also view this video on Ben Rein's Instagram.)

This TikTok accompanies an in-depth feature on AI therapy by Elizabeth Svoboda, who kicks the tires of the newest wave in psychology to see whether it stands up to the mental health crises facing us today. For another perspective on the pros and cons of therapy bots, check out our OpenMind podcast and Q&A with psychologist and ethicist Thomas Plante of Santa Clara University.

Our misinformation series includes five other essays, along with related podcasts and videos on topics ranging from the myths of trans science to the elusive nature of expertise. It's all part of OpenMind's six-part "Misinformation in Mind" project, supported by a grant from the Pulitzer Center's Truth Decay initiative.

Corey S. Powell and Pamela Weintraub, co-editors, OpenMind
