What If Your Companion Chatbot Crosses the Line?

Researchers uncover years of user-reported sexual harassment by Replika chatbot, raising urgent questions about ethics and accountability in AI companion apps.
Discoveries, Winter 2026

“If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them, and it is vital that ethical design and safety standards are in place to prevent these interactions from becoming harmful,” says Afsaneh Razi, an assistant professor in the College of Computing & Informatics.

But what happens when the artificial intelligence (AI) takes things too far?

That’s what Razi and a team of researchers investigated in the aftermath of 2023 reports of sexual harassment by Replika, a chatbot made by Luka Inc.

By analyzing more than 35,000 user reviews of the bot on the Google Play Store, they uncovered a wide range of user-reported inappropriate behavior — everything from unwanted flirting to sending unsolicited explicit photos. In some cases, these behaviors continued even after users repeatedly asked the chatbot to stop. And although reports of harassment by chatbots have surfaced widely in only the past two years or so, they found reviews mentioning harassing behavior dating back to Replika’s debut in 2017.

The findings show that inappropriate behavior, and even sexual harassment, in interactions with chatbots is a growing problem. The researchers propose that future work examine other chatbots and their user feedback to better gauge how widespread such behavior is.

“There must be a higher standard of care and burden of responsibility placed on companies if their technology is being used in this way,” says Razi. “We are already seeing the risk this creates and the damage that can be caused when these programs are created without adequate guardrails.” DM
