Talking to Kids About AI

I’ve had the opportunity recently to be involved with a program called Skype a Scientist, which pairs scientists of various kinds (biologists, botanists, engineers, computer scientists, etc.) with classrooms of kids to talk about our work and answer their questions. I’m pretty used to discussing AI and machine learning with adult audiences, but this is the first time I’ve really sat down to think about how to talk to kids about this material, and it’s been an interesting challenge. Today I’m going to share a few of the ideas I’ve come up with as part of the process, which may be useful to those of you with kids in your lives in one way or another.

Preparing to Explain Something

I have a few rules of thumb I follow when preparing any talk, for any audience. I need to be very clear in my own mind about what information I intend to impart, and what new things the audience should know when they leave, because this shapes everything about what information I’m going to share. I also want to present my material at a level of complexity appropriate to the audience’s preexisting knowledge: not talking down to them, but not going way over their heads either.

In my day-to-day life, I’m not necessarily up to speed on what kids already know (or think they know) about AI. I want to make my explanations appropriate to the level of the audience, but in this case I have somewhat limited insight into where they’re coming from. I’ve been surprised in some cases that the kids were actually quite aware of things like competition in AI between companies and across international boundaries. A useful exercise when deciding how to frame the content is coming up with metaphors that use concepts or technologies the audience is already very familiar with. Thinking this through also gives you an access point to where the audience is coming from. Beyond that, be prepared to pivot and adjust your presentation approach if you find that you’re not hitting the right level. I like to ask kids a little about what they think of AI and what they know at the start, so I can get that clarity before I’m too far along.

Understanding the Technology

With kids specifically, I’ve got a number of themes I want to cover in my presentations. Regular readers will know I’m a big advocate for laypeople being taught what LLMs and other AI models are trained to do, and what their training data is, because this is essential for setting realistic expectations about what the models’ results will be. I think it’s easy for anyone, kids included, to be taken in by the anthropomorphic nature of LLM tone, voice, and even “personality”, and to lose track of the real limitations of what these tools can do.

It’s a challenge to make it simple enough to be age-appropriate, but if you tell them how training works, and how an LLM learns from seeing examples of written material, or a diffusion model learns from text-image pairs, they’ll develop their own intuition about what the results might be. As AI agents become more complex, and the underlying mechanisms get harder to separate out, it’s important for users to understand the building blocks that lead to this capability.

For myself, I start by explaining training as a general concept, avoiding as much technical jargon as possible. When talking to kids, a little anthropomorphizing language can help make things seem less mysterious. For example: “we give computers lots of information and ask them to learn the patterns inside it.” Next, I’ll describe examples of patterns, like those in language or in image pixels, because “patterns” on its own is too general and vague. Then: “the patterns it learns are written down using math, and that math is what’s inside a ‘model’. Now, when we give new information to the model, it sends us a response based on the patterns it learned.” From there, I give another end-to-end example and walk through a simplified training process (usually a time series model, because it’s pretty easy to visualize), along the lines of the sketch below. After that, I’ll go into more detail about different kinds of models, and explain what’s different about neural networks and language models, to the degree that’s appropriate for the audience.
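For readers here who want that walkthrough in concrete form, here’s a minimal sketch of the kind of toy time series training I mean. The data, the five-value lag window, and the least-squares model are all my illustrative choices, not a fixed recipe; the point is just that “learning the patterns” ends up as a handful of numbers that can then make predictions.

```python
import numpy as np

# A toy "time series": a repeating pattern with a little noise,
# something like daily temperatures over a few months.
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20) + rng.normal(0, 0.1, size=t.size)

# Turn the series into training examples: (last 5 values) -> (next value).
n_lags = 5
X = np.stack([series[i : i + n_lags] for i in range(len(series) - n_lags)])
y = series[n_lags:]

# "Training" is finding the math (here, weights) that captures the pattern.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# The "model" is just those learned numbers; prediction applies them to new input.
last_window = series[-n_lags:]
print("Predicted next value:", last_window @ weights)
print("Learned weights:", np.round(weights, 3))
```

Kids obviously never see the code, but each step of the verbal explanation (examples go in, patterns come out, patterns are stored as math) maps directly onto a step here.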

AI Ethics and Externalities

I also want to cover ethical issues related to AI. I think kids in the later elementary or middle grades and up are perfectly capable of understanding the environmental and social impacts that AI can have. Many kids today seem to me to be quite advanced in their understanding of global climate change and the environmental crisis, so talking about how much power, water, and rare mineral usage is required to run LLMs isn’t unreasonable. It’s just important to make your explanations relatable and age appropriate. As I mentioned earlier, use examples that connect to the lived experiences of your audience.

Here’s an example of going from kid experience to the environmental impact of AI: “You know how your tablet or game console gets warm when you use it for a long time? The computers that run AI get hot too, and the giant buildings full of them use huge amounts of electricity and water to keep cool.”

This brings the kid’s experience into the conversation, and gives them a tangible way to relate to the concept. You can have similar kinds of discussions around copyright ethics and stealing content, using artists and creators familiar to the kids, without having to get deep into the weeds of IP law. Deepfakes, both sexual and otherwise, are now a topic a lot of kids know about too, and it’s important that kids are aware of the risks these present to individuals and the community as they use AI.

It can be scary, especially for younger kids, when they start to understand some of the unethical applications of AI or the global challenges it creates, and realize how powerful some of these things can be. I’ve had kids ask “how can we fix it if someone teaches AI to do bad things?”, for example. I wish I had better answers for that, because I basically had to say “AI already sometimes has the information to do bad things, but there are also lots of people working hard to make AI safer and stop it from sharing bad information or instructions for doing bad things.”

Unpacking the Idea of “Truth”

The problem of anthropomorphizing AI is real for adults and kids alike: we tend to trust a friendly, confident voice when it tells us things. A big part of the problem is that the LLM voice telling us things is frequently friendly, confident, and wrong. Media literacy has been an important topic in pedagogy for years now, and extending it to LLMs is a natural progression. Just as students (and adults) must learn to be critical consumers of information generated by other people or corporations, we need to be critical and thoughtful consumers of computer-generated content.

I think this goes hand in hand with understanding the tech, too. When I explain that an LLM’s job is to learn and replicate human language, at the simplest level by choosing the probable next word in a sequence based on what came before, it makes sense when I go on to say that the LLM can’t understand the idea of “truth”. Truth isn’t part of the training process, and at the same time truth is a genuinely hard concept even for people to pin down. The LLM may get the facts right much of the time, but the blind spots and occasional mistakes are going to show up, by the nature of probability. As a result, kids who use it need to be very conscious of the fallibility of the tool.
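To make that point concrete for readers, here’s a minimal sketch of next-word prediction by simple counting (the toy corpus is mine, and real LLMs are vastly more sophisticated, but the principle holds): the model learns which words tend to follow which, and nothing in the process checks whether the resulting sentences are accurate.

```python
from collections import Counter, defaultdict

# A toy corpus: the "model" will only ever learn word-following patterns,
# never whether any statement in it is true.
corpus = (
    "the sky is blue . the sky is green . the sky is blue . "
    "cats are mammals . cats are liquid ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Pick the most frequent next word -- probability, not truth."""
    return following[word].most_common(1)[0][0]

# Generate a short continuation from a prompt word.
word, output = "the", ["the"]
for _ in range(3):
    word = most_likely_next(word)
    output.append(word)
print(" ".join(output))  # "the sky is blue" -- plausible, never verified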

This lesson actually has value beyond just using AI, however, because what we’re really teaching is how to deal with uncertainty, ambiguity, and mistakes. As Bearman and Ajjawi (2023) note, “pedagogy for an AI-mediated world involves learning to work with opaque, partial and ambiguous situations, which reflect the entangled relationships between people and technologies.” I really like this framing, because it comes back around to something I think about a lot: that LLMs are created by humans and reflect back interpretations of human-generated content. When kids learn how models come to exist, that models are fallible, and that their output originates from human-generated input, they’re getting acquainted with the blurry nature of how technology works in our society more broadly. (In fact, I highly recommend the article in full for anyone who’s thinking about how to teach kids about AI themselves.)

A side note on images and video

As I’ve written about before, the profusion of deepfake and “AI slop” video and image content online creates plenty of difficult questions. This is another area where I think giving kids information is essential, because it’s easy to absorb misinformation or outright lies through convincing visual content. This content is also one step removed from the actual creation process for most kids, since a lot of this material is shared widely on social media and is unlikely to be labeled. Talking to kids about the tell-tale signs that help detect AI-generated material can help, as can general critical media literacy skills like “if it’s too good to be true, it’s probably fake” and “double check things you hear in this kind of post”.

Cheating

However much we explain the ethical issues and the risk that the LLM will be wrong, these AI tools are incredibly useful and seductive, so it’s understandable that some kids will resort to using them to cheat on homework and in school. I’d like to say that we just need to reason with them, and explain that learning the skills to do the homework is the point, and that if they don’t learn them they’ll be missing capabilities they need for future grades and later life... but we all know that kids are very rarely that logical. Their brains are still developing, and this kind of thing is hard even for adults to reason about at times.

There are essentially two approaches you can take: find ways to make schoolwork harder or impossible to cheat on, or incorporate AI into the classroom on the assumption that kids are going to have it at their disposal in the future. Monitored work in a classroom setting can give kids a chance to learn some skills they need without digital mediation. However, as I mentioned earlier, media literacy really has to include LLMs now, and I think supervised use of LLMs with an informed instructor can have a lot of pedagogical value. In addition, it’s basically impossible to “AI-proof” homework done outside of direct instructor supervision, and we should recognize that. I don’t mean to make this sound easy, though; see the Further Reading section below for a number of scholarly articles on the broad challenges of teaching AI literacy in the classroom. Teachers have a very difficult task: keeping up with the technology themselves, evolving their pedagogy to fit the times, and also giving their students the knowledge they need to use AI responsibly.

Learning from the Example of Sex Ed

Ultimately, the question is what exactly we should be recommending kids do and not do in a world that contains AI, in the classroom and beyond. I’m rarely an advocate for banning or prohibiting ideas, and I think the example of science-based, age-appropriate comprehensive sex education offers a lesson. If kids aren’t given accurate information about their bodies and sexuality, they don’t have the knowledge necessary to make informed, responsible decisions in that area. We learned this when abstinence-only sex ed sent teen pregnancy rates through the roof in the early 2000s. Adults will not be present to enforce mandates when kids are making hard choices in difficult circumstances, so we need to make sure kids are equipped with the knowledge required to make those decisions responsibly themselves, and that includes ethical guidance but also factual information.

Modeling Responsibility

One last thing I think is important to mention: adults should be modeling responsible behavior with AI too. If teachers, parents, and other adults in kids’ lives aren’t critically literate about AI, then they won’t be able to teach kids to be critical and thoughtful consumers of this technology either.

A recent New York Times story about how teachers use AI made me a little frustrated. The article doesn’t reflect a great understanding of AI, conflating it with basic statistics (a teacher analyzing student data to help personalize his teaching to their levels is both not AI and not new or problematic), but it does start a conversation about how adults in kids’ lives are using AI tools, and it mentions the need for those adults to model transparent and thoughtful uses of it. (It also briefly grazes the issue of for-profit industry pushing AI into the classroom, which seems like a problem deserving more attention; perhaps I’ll write about that down the road.)

To counter one assertion of the piece: I wouldn’t complain about teachers using LLMs to do a first pass at grading written material, as long as they’re monitoring and validating the output. If the grading criteria are around grammar, spelling, and writing mechanics, an LLM may well be suitable, given how it’s trained. I wouldn’t want to blindly trust an LLM on this without a human taking at least a quick look, but human language is in fact what it’s designed to handle. The idea that “the student had to write it, so the teacher should have to grade it” is silly, because the point of the exercise is for the student to learn. Teachers already know the writing mechanics; this isn’t an exercise meant to force teachers to learn something achievable only by manual grading. I suspect the NYT knows this, and that the framing was mostly clickbait, but it’s worth saying clearly.
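For the curious, here’s a minimal sketch of what a monitored first pass might look like, assuming the OpenAI Python SDK; the model name and rubric are placeholder choices of mine, not an endorsement of any particular setup. The key design point is that the output is a draft for the teacher to review, never something sent straight to a student.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = "Check only grammar, spelling, and writing mechanics. List issues briefly."

def draft_feedback(essay_text: str) -> str:
    """Ask the model for a first-pass mechanics review -- a draft, not a grade."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

essay = "Their going to the library tomorow, because they needs a quiet place."
draft = draft_feedback(essay)

# The human-in-the-loop step: the teacher reads and edits the draft before
# anything is shared with the student.
print("DRAFT FEEDBACK (requires teacher review):\n", draft)
```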

This point goes back, again, to my earlier section about understanding the technology. If you have a solid grasp of what the training process looks like, then you can decide whether that process would produce a tool capable of handling a given task or not. And automated grading has been part of education for decades at least; anyone who’s filled out a scantron sheet knows that.

This technology’s development is forcing some amount of adaptation in our education system, and we can’t put that genie back in the bottle now. There are definitely ways that AI can have positive effects on education (often-cited examples are personalization and saving teachers time that can then go towards direct student services), but as with most things, I’m an advocate for a pragmatic view. As I believe most educators are all too aware, education can’t just go on as it did before LLMs entered our lives.

Conclusion

Kids are smarter than we sometimes give them credit for, and I think they’re capable of understanding a lot about what AI means in our world. My advice is to be transparent and forthright about the realities of the technology, including the benefits and drawbacks it presents to us as individuals and to our broader society. How we use it ourselves will model either positive or negative choices that kids will notice, so being thoughtful about our actions, as well as our words, is essential.


For more of my work, visit www.stephaniekirmer.com.

If you’d like to learn more about Skype a Scientist, visit https://www.skypeascientist.com/


Further Reading

https://www.nytimes.com/2025/04/14/us/schools-ai-teachers-writing.html

https://pmc.ncbi.nlm.nih.gov/articles/PMC3194801

https://www.nyu.edu/about/news-publications/news/2022/february/federally-funded-sex-education-programs-linked-to-decline-in-tee.html

https://www.stephaniekirmer.com/writing/environmentalimplicationsoftheaiboom

https://www.stephaniekirmer.com/writing/seeingourreflectioninllms

https://www.stephaniekirmer.com/writing/machinelearningspublicperceptionproblem

https://www.stephaniekirmer.com/writing/whatdoesitmeanwhenmachinelearningmakesamistake

https://bera-journals.onlinelibrary.wiley.com/doi/full/10.1111/bjet.13337

https://www.sciencedirect.com/science/article/pii/S2666920X21000357

https://www.stephaniekirmer.com/writing/theculturalimpactofaigeneratedcontentpart1

Additional Articles about Pedagogical Approaches to AI

For anyone who’s teaching these topics or would like a deeper dive, here are a few articles I found interesting while researching this.

https://bera-journals.onlinelibrary.wiley.com/doi/full/10.1111/bjet.13337

https://dl.acm.org/doi/abs/10.1145/3408877.3432530 — an early college level curriculum study

https://www.sciencedirect.com/science/article/pii/S2666920X22000169 — a preschool/early elementary level curriculum study

https://dl.acm.org/doi/abs/10.1145/3311890.3311904 — an evaluation of SES and national variation in AI learning among young children
