Eighteen years after a woman was murdered, her name and photo were used to create a chatbot. Her father was outraged, and the company deleted it.

In an interview, Drew Crecente, whose daughter Jennifer was murdered by her ex-boyfriend in 2006, described the shock and horror he felt upon receiving a Google alert on October 2, 2024. The alert informed him that his daughter's full name and photo had been used to create a chatbot on the AI platform Character.AI, billed as a "knowledgeable and friendly chatbot." Crecente said that when he first read the notification, he felt an overwhelming sense of panic and wished for a large red button he could press to stop everything immediately.

Crecente was particularly upset that Character.AI allowed users to build a chatbot from the profile of a murdered high school student without her family's permission. Experts have noted that the incident raises serious questions about the AI industry's ability, and willingness, to protect users from harms arising from its services.

He pointed out that the digital version of Jennifer was rendered in vivid language, almost as if she were alive, described as a "gamer and journalist who keeps up with the latest trends in technology and culture." The character was created by Character.AI users, and several people had already interacted with it.

After Crecente and his family posted about the issue on X, Character.AI said it was in the process of removing the chatbot. Company spokesperson Kathryn Kelly said Character.AI had "removed a chatbot that violated its terms of service and will continue to enhance safety measures to protect the community."

This year, Character.AI signed a $2.7 billion deal with Google to license its AI models. The platform hosts a wide range of AI chatbots and lets users create and share their own by uploading photos, audio recordings, and short written prompts. These chatbots can serve as friends, mentors, and even romantic partners, but their growing popularity has also sparked controversy. In 2023, a Belgian man died by suicide after a chatbot encouraged him to do so over the course of their conversations.

Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, criticized Character.AI as too passive in moderating clearly harmful content that violates its own terms of service. Rick Claypool, a researcher at the consumer advocacy organization Public Citizen who studies AI chatbots, stressed the urgent need for lawmakers and regulators to pay attention to the real-world impact these technologies are having on the public.