An AI companion site is hosting sexually charged conversations with underage celebrity bots


Ex-Human pointed to Botify AI’s terms of service, which state that the platform cannot be used in ways that violate applicable laws. “We’re working on making our content moderation guidelines more explicit regarding prohibited content types,” Rodichev said.

Representatives from Andreessen Horowitz didn’t reply to an email containing details about the conversations on Botify AI and questions about whether chatbots should be able to engage in flirtatious or sexually suggestive conversations while embodying the character of a minor.

Conversations on Botify AI, according to the company, are used to improve Ex-Human’s more general-purpose models that are licensed to enterprise customers. “Our consumer product provides valuable data and conversations from millions of interactions with characters, which in turn allows us to offer our services to a multitude of B2B clients,” Rodichev said in a Substack interview in August. “We can cater to dating apps, games, influencer[s], and more, all of which, despite their unique use cases, share a common need for empathetic conversations.”

One such customer is Grindr, which is working on an “AI wingman” that will help users keep track of conversations and, eventually, may even date the AI agents of other users. Grindr didn’t reply to questions about its knowledge of the bots representing underage characters on Botify AI.

Ex-Human didn’t disclose which AI models it has used to build its chatbots, and models have different rules about what uses are allowed. The observed behavior, however, would appear to violate the policies of most of the major model makers.

For instance, the acceptable-use policy for Llama 3—one leading open-source AI model—prohibits “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.” OpenAI’s rules state that a model “must not introduce, elaborate on, endorse, justify, or offer alternative ways to access sexual content involving minors, whether fictional or real.” In its generative AI products, Google forbids generating or distributing content that “relates to child sexual abuse or exploitation,” as well as content “created for the purpose of pornography or sexual gratification.”

Ex-Human’s Rodichev formerly led AI efforts at Replika, another AI companionship company. (Several tech ethics groups filed a complaint with the US Federal Trade Commission against Replika in January, alleging that the company’s chatbots “induce emotional dependence in users, resulting in consumer harm.” In October, another AI companion site, Character.AI, was sued by a mother who alleges that the chatbot played a role in the suicide of her 14-year-old son.)

In the Substack interview in August, Rodichev said that he was inspired to work on enabling meaningful relationships with machines after watching certain films. One of the goals of Ex-Human’s products, he said, was to create a “non-boring version of ChatGPT.”

“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” he said. “Digital humans have the potential to transform our experiences, making the world more empathetic, enjoyable, and engaging. Our goal is to play a pivotal role in building this platform.”
