Civitai automatically tags bounties requesting deepfakes and provides a way for the person featured in the content to manually request its takedown. This approach suggests that Civitai has a fairly reliable way of identifying which bounties are for deepfakes, but it’s still leaving moderation to the public rather than carrying it out proactively.
A company’s legal liability for what its users do isn’t entirely clear. Generally, tech companies have broad legal protections against liability for their users’ content under Section 230 of the Communications Decency Act, but those protections aren’t limitless. For example, “you can’t knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington’s law school. (Calo wasn’t involved in this new study.)
Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to protect against the creation and spread of AI-generated child sexual abuse material. This move followed a 2023 report from the Stanford Internet Observatory, which found that the overwhelming majority of AI models named in child sexual abuse communities were Stable Diffusion–based models “predominantly obtained via Civitai.”
But adult deepfakes haven’t gotten the same level of attention from content platforms or the venture capital firms that fund them. “They are not afraid enough of it. They’re overly tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against it. It’s night and day.”
Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the first place where people find and share AI models for their own individual purposes. “We’ve aimed to make this space that’s been very, I guess, niche and engineering-heavy more and more approachable to more and more people,” he said.
Civitai isn’t the only company with a deepfake problem in a16z’s investment portfolio; in February, it was first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some instances described age-of-consent laws as “arbitrary” and “meant to be broken.”
