by technology pretty much since the dawn of time. Almost as soon as the printing press was invented, erotica was being published. Photography was used for erotic purposes with glee by the Victorians. And we all know how much the internet has influenced modern sexual culture.
Now that we’re grappling with the effects of AI on various sectors of society, what does that mean for sexuality? How are young people learning about sexuality, and how are people engaging in sexual activity, with AI as part of the picture? Some researchers are exploring these questions, but my research has indicated that there’s a bit of a shortage of work examining the real impacts of this technology on how people think and behave sexually. This is a huge topic, of course, so for today I’d like to dig into two specific and related areas: distribution of information and consent.
Before we dive in, however, I’ll set the scene. What our general culture calls generative AI, which is what I’ll focus on here, involves software powered by machine learning algorithms that can create text, images, video, and audio that are synthetic, but that are difficult if not impossible to distinguish from organic content created by human beings. This content is so similar to organic content because the machine learning models are fed vast quantities of human-generated content during the training process. Because of the immense volumes of content required to train these models, all corners of the internet are vacuumed up for inclusion in the training data, and this inevitably includes some content related to sexuality, in one way or another.
In some ways, we wouldn’t want to change this: if we want LLMs to have a thorough mapping of the semantics of English, we can’t just cut out certain areas of the language as we actually use it. Similarly, image and video generators are going to have exposure to nudity and sexuality, because these are a significant slice of the images and videos people create and put online. This naturally creates challenges, because this content will sometimes be reflected in model outputs. We implement guardrails, reinforcement learning, and prompt engineering to try to control this, but in the end generative AI is broadly just as good at creating sexually expressive or explicit content as any other kind of content.
Nicola Döring and colleagues did a substantial literature review of studies addressing how usage of AI intersects with sexuality, and found that users have four main ways of interacting with AI that have sexual components: Sexual Information and Education; Sexual Counseling and Therapy; Sexual and Romantic Relationships; and Erotica and Pornography. This probably sounds intuitively right to most of us. We’ve heard of at least a few of these kinds of phenomena involving AI, whether in movies, TV, social media, or news content. Sexually explicit interaction is generally not allowed by mainstream LLM providers, but universally preventing it is impossible. Assorted other generative AI products, as well as self-hosted models, also make generating sexual content quite easy, and OpenAI has announced its intention to enter the erotica/pornography business. There is a tremendous amount of demand for sexual content from generative AI, so it seems the market will provide it, one way or another.
It’s important to remember that generative AI tools have no concept of sexual explicitness other than what we impart through the training process. Taboos and social norms are only part of the model insofar as human beings apply them in reinforcement learning or provide them in the training data. To the machine learning model, a sexually explicit image is the same as any other, and words used in erotica have meaning only in their semantic relationships to other words. As with many areas of AI, sexuality gets its meaning and social interpretations from human beings, not from the models.
Having sexual content available through generative AI is having significant effects on our culture, and it’s important for us to think about what that looks like. We want to protect the safety of individuals and groups and preserve people’s rights and freedoms of expression, and the first step to doing this is understanding the current state of affairs.
Information Sharing, Learning, and Education
Where do we learn about sexuality? We learn from observing the world around us, from asking questions, and from our own exploration and experiences. So, with generative AI starting to take on roles in various areas of life, what’s the impact on what and how we learn about sexuality in particular?
Even if not in a formal sense, generative AI is already playing a meaningful role in informal and personal sex education, just as Google searches and browsing websites did in the era before. Döring et al. noted that their research found that seeking out sexual health or educational information about sexuality online is quite common, for reasons we can probably all relate to: convenience, anonymity, avoidance of judgment. Reliable statistics on how many people are using LLMs for this same kind of exploration are hard to come by, but it is reasonable to expect that the same benefits apply and would make it an appealing way to learn.
So, if this is happening, should we care? Is it really any different to learn about sexuality from Google searches versus generative AI? Both sources have accuracy issues (anyone can put content on the internet, after all), so what differentiates generative AI, if anything?
LLM as Source
When we use LLMs to find information, the presentation of that content is quite different from when we do basic web searches. The results are delivered in an authoritative tone, and sourcing is generally obscured unless we intentionally ask for it and vet it ourselves. As a result, what is being called “AI literacy” becomes essential to effectively interpreting and validating what the LLM tells us.
If the person using the LLM has this sophistication, however, scholars have found that basic factual information about sexual health is generally available from mainstream LLM offerings. According to Döring et al., the limited studies done so far do not find the quality or accuracy of sexual information from LLMs to be worse than that retrieved in general web searches. If so, young people seeking essential information to keep themselves safe and healthy in their sexual expression may have a valuable tool in generative AI. Because the LLM is more anonymous and interactive, users can ask the questions they really want answered without being held back by fears of stigma or shame. But hallucinations continue to be an unavoidable problem with LLMs, meaning false information is occasionally served, so user skepticism and sophistication remain vital.
Content Bias
We must remember, however, that the perspective presented by the LLM is shaped by the training processes used by the provider. That means the company that created the LLM is embedding cultural norms and attitudes in the model, whether it means to or not. Reinforcement learning, a key part of training generative AI models, requires human reviewers to make decisions about whether outputs are acceptable, and they necessarily bring their own beliefs and attitudes to bear on those decisions, even implicitly. When it comes to questions that are matters of opinion rather than fact, we are at the mercy of the choices made by the companies that create and provide access to LLMs. If these companies incentivize and reward more progressive or open-minded sexual attitudes during the reinforcement learning stages, we can expect that to be reflected in LLM behavior with users. However, researchers have found that LLM responses to sexual questions can end up minimizing or devaluing sexual expression that is not “mainstream”, including LGBTQ+ perspectives.
In some cases, this takes the form of LLMs not being permitted to answer questions about sexuality or related topics, a concept called refusal. LLM providers might simply ban the discussion of such topics in their product, which leaves the user seeking reliable information with nothing. But it can also insinuate to the user that the subject of sexuality is taboo, shameful, or bad; otherwise, why would it be banned? This undoubtedly puts the LLM provider in a difficult position: whose moral standards are they meant to follow? What kinds of sexual health questions should the chatbot respond to, and where is the boundary? By entrusting sex education to these kinds of tools, we are accepting the opaque standards these companies choose, without actually knowing what they are or how they were defined.
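To make the refusal mechanism concrete, here is a deliberately toy sketch. The topic names, keyword lists, and function names are all hypothetical, and real providers use learned classifiers and policy models rather than keyword matching, but the essential point carries over: the provider alone decides which topics trigger a refusal, and the user never sees that list.

```python
# Toy refusal gate (purely illustrative, not any provider's real system).
# A provider-defined blocklist of topics is checked before the model answers.

BLOCKED_TOPICS = {"explicit_content"}  # chosen by the provider, invisible to users

# A naive keyword tagger standing in for a learned topic classifier.
TOPIC_KEYWORDS = {
    "sexual_health": ["contraception", "sti", "consent"],
    "explicit_content": ["explicit", "nsfw"],
}

def classify(prompt: str) -> set:
    """Tag a prompt with every topic whose keywords it mentions."""
    text = prompt.lower()
    return {topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in text for word in words)}

def respond(prompt: str) -> str:
    """Refuse if any tagged topic is blocked; otherwise let the model answer."""
    if classify(prompt) & BLOCKED_TOPICS:
        return "I can't help with that."  # refusal: no info, plus an implicit value judgment
    return "ANSWER"  # placeholder for the model's actual reply

print(respond("How effective is contraception?"))  # sexual-health question: answered
print(respond("Write something explicit"))         # blocked topic: refused
```

Notice that a single blocklist entry silently determines whether a sexual health question gets answered at all; that policy choice is exactly the kind of opaque standard at issue here.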
Visual Content
But as I mentioned earlier, we don’t just learn about sexuality by asking questions and seeking facts. We learn from experience and observation as well. In this context, generative AI tools that create images and video become incredibly important for how young people understand bodies and sexuality. Döring et al. found a significant amount of implicit bias in the image generation offerings they tested.
https://link.springer.com/article/10.1007/s11930-024-00397-y
As with the text generators, more sophisticated users can tune their prompting and select for the kinds of images they want to see, but if a user isn’t sure what they’re looking for, or isn’t that skilled, this kind of interaction serves to further instill biases.
The Body
As an aside, it’s worth considering how AI-generated images may shape our understanding of bodies, in a sexual context or otherwise. There have been threads of conversation in our culture for decades about how internet-accessible pornography has distorted young people’s beliefs and expectations about how bodies should look and how sexual behavior should work. I think most analysis of those questions really isn’t that different whether you’re talking about the internet generally or generative AI.
The one area that does seem different, however, is how generative AI can produce images and videos that appear photorealistic but display people in physically impossible or near-impossible ways. It takes unrealistic beauty standards to a new level. This can take the form of AI-based filters on real images, severely distorting the shapes and appearances of real people, or products that create images or videos from whole cloth. We have moved past a time when airbrushing, which could make small distortions of otherwise real bodies, was the main concern, into a time when the physically impossible or near-impossible is presented to users as “normal” or the expected physical standard. For girls and boys alike, this creates a heavily distorted perspective on how our bodies and those of our intimate partners should appear and behave. As I’ve written about before, our increasing inability to tell synthetic from organic content has significantly damaging potential.
On that note, I’d also like to discuss a specific area where the norms and principles young people learn are profoundly important to ensuring safe, responsible sexual engagement throughout people’s lives: consent.
Consent
Consent is a tremendously important concept in our understanding of sexuality. It means, briefly, that all parties involved in any form of sexual expression or behavior readily and affirmatively agree throughout, and are under no undue coercion or manipulation. When we talk about sexual expression or behavior, this can include the creation or sharing of sexually explicit imagery of those parties, as well as physical interactions.
When it comes to generative AI, this raises several questions, such as:
- If a real person’s image or likeness is used or produced by generative AI for sexual content, how do we know whether that person consented?
- If that person didn’t consent to being the subject of sexual content, what are their rights, and what are the obligations of the generative AI company and the generative AI user? And what are those obligations if they did consent to creating sexual content, but not in the generative AI context?
- How does it affect generative AI users’ understanding of consent when they can so easily acquire this kind of content through generative AI, without ever directly interacting with the person or people involved?
What makes this different from older technologies, like airbrushing or photo editing? In many ways, it’s a matter of degree. Deepfakes existed well before generative AI, when video editing could be used to put someone else’s face into a porn scene or nude photo, but the ease, affordability, and accessibility of this technology have changed dramatically with the dawn of AI. Also, the increasing inability of average viewers to detect this artificiality matters, because knowing what’s “real” is harder and harder.
Copyright and IP
This topic has plenty of common threads with copyright and intellectual property questions. Our society is already beginning to grapple with questions of ownership of one’s own likeness, and what boundaries we are entitled to set on how our image is used. By and large, generative AI products have little to no effective restriction on how the images of public figures can be rendered. There are some perfunctory attempts to prevent image, video, and audio generators from accepting explicit requests to create images (sexual or otherwise) of named public figures, but these are easily outwitted, and it appears to be of relatively minimal concern to generative AI companies, outside of complaints by large corporate interests. Scarlett Johansson has learned this from experience, and the recently released Sora 2 generates endless deepfake videos of public figures from throughout history.
This applies to people in the sex industry as well. Even when people are involved in sex work or create erotica or pornography willingly, that doesn’t mean they consent to their work being usurped for generative AI creation; this is essentially no different from the copyright and intellectual property issues being raised by authors, actors, and artists in mainstream sectors. Just because people create sexual content doesn’t make their claim to their rights any less valid, despite social stigma.
I don’t want to portray this as an indictment of all sexual content, or necessarily even of sexual content generated by AI. There’s room for debate about when and how artificially generated pornography can be ethical, and certainly I think that when consenting adult performers produce pornography organically, there’s nothing wrong with that on its face. But these issues of consent and individual rights haven’t been adequately addressed, and they should make us all very nervous. Many people may not think much about the rights of creators in this space, but how we treat their claims legally may create precedents that cascade down to many other scenarios.
Sexual Abuse
However, in the space of sexuality, we must also consider wholly nonconsensually created content, which can cause tremendous harm. Instead of calling things “revenge porn”, scholars are beginning to use the term “AI-generated image-based sexual abuse” to refer to cases where people’s likenesses are used without their permission to generate sexual content, and I think this much better articulates the damage this material can do. Regarding this behavior as sexual abuse rightly forces us to think more about the experiences of the victims. While image manipulation and fakery have always been somewhat possible, the latest generative AI makes them more achievable, more accessible, and cheaper than ever before, making this kind of sexual abuse far more convenient for abusers. It’s important to note that the degree or severity of this abuse is not necessarily defined by its publicness or by the damage to the victim’s reputation; it doesn’t matter whether people believe the deepfake or sexual content is real. Victims can still feel deeply violated and traumatized by this material being created about them, regardless of how others feel about it.
Major LLM providers have, so far, held the line on sexual text content being produced by their products (to greater or lesser degrees of success, as Lai 2025 found), but OpenAI’s impending move into erotica means this may be changing. While text communication has less potential for seriously damaging abuse than visual content, ChatGPT does engage in some multimodal content generation, and we can still imagine scenarios where a user instructs an LLM to produce erotica in the voice or style of real people, and the real people being mimicked could understandably find this upsetting. When OpenAI announced the move, they discussed some safety issues, but these were entirely concerns about users (mental health issues, for example) and didn’t speak to the safety of nonconsenting individuals whose likenesses might be involved. I think this is a major oversight that needs more attention if we can possibly hope to make such a product offering safe.
Learning about Consent
Beyond the immediate damage to victims of sexual abuse, and the IP and livelihood harms to creators whose content is used for these applications, I think it’s also important to consider what lessons users absorb from generative AI being able to create likenesses at will, particularly in sexual contexts. When we are given the ability to so readily create someone else’s image in whatever form, whether it’s a historical figure pitching someone’s software product or that same historical figure represented in a sexual situation, the inherent lesson is that that person’s likeness is fair game. Legal nuances aside (and they do need to be taken into account), we are effectively asserting that getting someone’s approval to engage with them sexually is not important, at least when digital technology is involved.
Imagine how young people are receiving the implicit messages from this. Kids know they may get in trouble for sharing other people’s nudes, sometimes with severe legal consequences, but at the same time there’s an assortment of apps letting them create fake ones, even of real people, with the click of a button. How do we explain the difference and help young people learn about the real harm they might be causing even while sitting alone in front of a screen? We have to start thinking about our bodily autonomy in the digital space as well as the physical space, because so much of our lives is carried out in the digital context. Deepfakes are not inherently less traumatizing than the sharing of organic nude photos, so why aren’t we talking about this functionality as a social risk kids need to be educated about? The lessons we want young people to learn about the importance of consent are pretty directly contradicted by the generative AI sphere’s approach to sexual content.
Conclusion
You may reasonably end this asking, “So, what do we do?”, and that’s a really hard question. I don’t believe we can effectively prevent generative AI products from producing sexual content, because the training data simply includes so much of that material; this is reflective of our actual society. Also, there’s a clear market for sexual content from generative AI, and some companies will always arise to fill that need. I also don’t think LLMs should be forbidden from responding to sexual questions, where people may be seeking information to help them understand sexuality, human development, sexual health, and safety, because this is so important for everyone, particularly youth, to have access to.
But at the same time, the dangers around sexual abuse and nonconsensual sexual content are serious, as are the unrealistic expectations and physical standards being set implicitly. Our legal systems have proven pretty inept at dealing with internet crime over the past decades, and this image-based sexual abuse is no exception. Prevention requires education, not only about the facts and the law, but about the impact that deepfake sexual abuse can have. We also need to offer counter-narratives to the distortions of physical form that generative AI creates, if we want young people to have healthy relationships with their own bodies and with partners.
Beyond the broad social responsibility all of us share to participate in the project of effectively educating youth, it is the responsibility of generative AI product developers to consider risk and harm mitigation as much as they consider profit goals or user engagement targets. Unfortunately, it doesn’t seem like many are doing so today, and that’s a shameful failure of people in our field.
Of course, the sexual nature of this topic is less important than understanding the social norms we accept, our responsibilities to keep vulnerable people safe, and balancing this with protecting the rights and freedoms of ordinary people to engage in responsible exploration and behavior. It’s not only a question of how we adults carry out our lives, but of how young people have opportunities to learn and develop in ways that are safe and respectful of others.
Generative AI can be a tool for good, but the risks it creates must be acknowledged. It’s important to recognize the small and large ways that adding new technology to our cultural space affects how we think and act in our daily lives. By understanding these circumstances, we better equip ourselves to respond to such changes and shape the society we want to have.
Read more of my work at www.stephaniekirmer.com.
Reading
https://www.cnbc.com/2025/10/15/erotica-coming-to-chatgpt-this-year-says-openai-ceo-sam-altman.html
https://www.georgetown.edu/news/ask-a-professor-openai-v-scarlett-johansson
Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers
The Coming Copyright Reckoning for Generative AI
https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/pra2.1326
Dehumanization of LGBTQ+ Groups in Sexual Interactions with ChatGPT
https://journals.sagepub.com/doi/full/10.1177/26318318251323714
The Cultural Impact of AI Generated Content: Part 1
AI “nudify” sites lack transparency, researcher says
Recent Firms Linked to ‘Nudify’ Apps That Ran Ads on Facebook, Instagram
