It has been a momentous few weeks for the state of unregulated image and video deepfaking. Just over two weeks ago, the #1 domain for community sharing of celebrity deepfake porn, Mr. Deepfakes, suddenly took itself offline after more than seven years in a dominant and much-studied position as the worldwide locus for sexualized AI celebrity content. By the time it went down, the site was receiving an average of more than five million visits a month.
Source: mrdeepfakes.com
The cessation of services for Mr. Deepfakes was officially attributed to the withdrawal of a ‘critical service provider’ (see inset image above, which was replaced by domain failure within a week). However, a collaborative journalistic investigation had de-anonymized a key figure behind Mr. Deepfakes directly prior to the shutdown, allowing for the possibility that the site was shuttered for that individual’s personal and/or legal reasons.
Around the same time, CivitAI, the commercial platform widely used for celebrity and NSFW LoRAs, imposed a set of strange and controversial self-censorship measures. These affected deepfake generation, model hosting, and a broader slate of new rules and restrictions, including full bans on certain marginal NSFW fetishes and on what it termed ‘extremist ideologies’.
These measures were prompted by payment providers apparently threatening to withdraw services from the domain unless changes regarding NSFW content and celebrity AI depictions were made.
CivitAI Cut Off
As of today, it appears that the measures taken by CivitAI have not appeased VISA and Mastercard: a new post† at the site, from Community Engagement Manager Alasdair Nicoll, reveals that card payments for CivitAI (whose ‘buzz’ virtual money system is normally powered by real-world credit and debit cards) will be halted from this Friday (May 23rd, 2025).
This will prevent users from renewing monthly memberships or buying new buzz. Though Nicoll advises that users can maintain current membership privileges by switching to an annual membership (costing†† $100-$550 USD) before Friday, clearly the future is somewhat uncertain for the domain right now (it should be noted that annual memberships went live at the same time as the announcement about the loss of payment processors).
Regarding the loss of a payment processor, Nicoll says:
As to the failure of recent efforts to adequately rethink the site’s oft-criticized policies around celebrity AI and NSFW content, Nicoll states in the post:
A comment from user ‘Faeia’, designated as the company’s Chief of Staff in their CivitAI profile*, adds context to this announcement:
As a traditional driver of new technologies, it is not unusual for NSFW content to be used to kick-start interest in a domain, technology, or platform – only for the initial adherents to be rejected once enough ‘legitimate’ capital and/or a user-base is established (i.e., enough users for the entity to survive when shorn of an NSFW context).
It seemed for a while that CivitAI would follow Tumblr and various other initiatives down this route towards a ‘sanitized’ product ready to forget its roots. However, the additional and growing controversy and stigma around AI-generated content represents a cumulative weight that seems set to prevent a last-minute rescue in this case. In the meantime, the official announcement advises users to adopt crypto as an alternative payment method.
Fake Out
The sight of President Donald Trump enthusiastically signing the federal TAKE IT DOWN Act is likely to have influenced some of these events. The new law criminalizes the distribution of non-consensual intimate imagery, including AI-generated deepfakes.
The legislation mandates that platforms remove flagged content within 48 hours, with enforcement overseen by the Federal Trade Commission. The criminal provisions of the law take effect immediately, allowing for the prosecution of individuals who knowingly publish or threaten to publish non-consensual intimate images (including AI-generated deepfakes) within the purview of the United States.
While the law received rare bipartisan support, as well as backing from tech companies and advocacy groups, critics argue it could suppress legitimate content and threaten privacy tools like encryption. Last month the Electronic Frontier Foundation (EFF) declared opposition to the bill, asserting that the takedown mechanisms it mandates target a broader swathe of material than the narrower definition of non-consensual intimate imagery found elsewhere in the legislation.
Platforms now have up to one year from the law’s enactment to establish a formal notice-and-takedown process, enabling affected individuals or their representatives to invoke the statute in seeking content removal.
This means that although the criminal provisions are immediately in effect, platforms are not legally obligated to comply with the takedown infrastructure (such as receiving and processing requests) until that one-year window has elapsed.
Does the TAKE IT DOWN Act Cover AI-Generated Celebrity Content?
Though the TAKE IT DOWN Act crosses all state borders, it does not necessarily outlaw all AI-driven media of celebrities. The act criminalizes the distribution of non-consensual intimate images, including AI-generated deepfakes, only when the depicted individual had a reasonable expectation of privacy:
The act states:
[for evidentiary, reporting purposes, etc.]
[i.e., self-published porn]
The ‘reasonable expectation of privacy’ contingency applied here has not traditionally favored the rights of celebrities. Depending on the case law that eventually emerges, it is possible that even AI-generated content involving public figures in public or commercial settings may not fall under the Act’s prohibitions.
The final clause about determining the intent of harm is famously elastic in legal terms, and in this sense adds nothing particularly novel to the legislative burden. However, the intention to cause harm would appear to limit the scope of the Act to the context of ‘revenge porn’, where an (unknown) ex-partner publishes real or fake media content of an (equally unknown) other ex-partner.
While the law’s ‘harm’ requirement may seem ill-suited to cases where anonymous users post AI-generated depictions of celebrities, it may prove more relevant in stalking scenarios, where a broader pattern of harassment supports the conclusion that a person has deliberately and maliciously targeted a public figure across multiple fronts.
Though the Act’s reference to ‘covered platforms’ excludes private channels such as Signal or email from its takedown provisions, this exclusion applies only to the obligation to implement a formal removal mechanism by May 2026. It does not mean that non-consensual AI or real depictions shared through private communications fall outside the scope of the law’s criminal prohibitions.
Obviously, an absence of on-site reporting mechanisms does not prevent affected parties from reporting what is now illegal content to the police; nor are such parties precluded from using whatever conventional contact methods a site may make available to lodge a complaint and request the removal of offending material.
The Rights Left Behind
More than seven years of mounting public and media criticism over deepfake content appear to have culminated within an unusually short span of time. However, while the TAKE IT DOWN Act offers sweeping federal prohibitions, it may not apply in every case involving AI-generated simulations, leaving certain scenarios to be addressed under the growing patchwork of state-level deepfake laws, where the laws passed often reflect ‘local interest’.
For instance, in California, the Celebrities Rights Act limits the exclusive use of a celebrity’s identity to themselves and their estate, even after their death; conversely, Tennessee’s ELVIS Act focuses on safeguarding musicians from unauthorized AI-generated voice and image reproductions, with each case reflecting a targeted approach to interest groups that are prominent at state level.
Most states now have laws targeting sexual deepfakes, though many stop short of clarifying whether those protections extend equally to private individuals and public figures. Meanwhile, the political deepfakes that reportedly helped spur Donald Trump’s support for the new federal law may, in practice, run up against constitutional barriers in certain contexts.
†https://web.archive.org/web/20250520024834/https://civitai.com/articles/14945
††https://web.archive.org/web/20250425020325/https://civitai.green/pricing