Generative AI, Discriminative Human


Doom scrolling through social media for AI news today is like a Rorschach inkblot test: whatever you are looking for, you will find.

If you think AI is a massive waste of cash, you can find that angle well covered. If you are invested in the industry and anxious about whether AI is a bubble, there are numerous breathless takes on the subject. If you are looking for evidence that AI will end the world, ‘breaking news’ affirming that standpoint abounds.

Amid the recursive, agentic AI-generated summaries of hallucination-ridden slop, I had the good fortune to speak with some lovely folks from Praxis who are doing great work with students on the urgent need for critical thinking skills.

That chat inspired this post. 

What follows is a synthesis of the top 10 things I’d share with someone wanting to think critically about how AI is impacting our world.

1. Generative AI with discriminative humans is the new state of the world.

Outside of data science and AI circles, it might surprise some that until recently, most machine learning models were “discriminative” in nature, doing things such as anomaly detection, data analysis, and classification (famous examples of AI models in the early 2010s focused on differentiating between cats and dogs).

Data analysts and data scientists then used those outputs to drive narratives (a skill often called ‘data storytelling’), delivered through attractive reports and presentations.

Today, this dynamic has reversed — generative AI can produce those polished reports and presentations, but humans have to bring the critical thinking — shaping the direction of content generation, discerning quality, and providing true context (beyond the clever misnomer of the ‘context window’ in AI apps).

Put simply: 

The requirements of a good piece of work haven’t changed — but the roles are reversing.

If you take nothing else away from this post, take away this — we’re moving away from work characterised by discriminative AI and generative humans to a world of generative AI that needs discriminative humans.

2. Think critically about what kinds of AI to use, and whether to use AI at all.

Before we go too far, it’s worth acknowledging that ‘AI’ is often an unhelpful term. While it’s a well-established field of academic study, at present it’s being used so loosely that it’s becoming unmoored from its fundamentals.

Practically, AI encompasses an enormous array of methods and technologies, and using AI as an umbrella term muddies the discussion and provides fertile ground for misunderstandings, trading nuanced and grounded discussion of the strengths and limits of different AI approaches for hype, name-dropping, and unnecessary obfuscation.

For example, classical machine learning techniques are highly efficient on small datasets, statistical methods are the right tool when you are interested in relationships between features, and symbolic methods, which explicitly represent problems and knowledge, solve for explainability. Each of these sub-branches of AI represents a robust and well-developed toolkit that solves problems that plague current large language models.
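As a minimal sketch of what a classical, discriminative technique looks like on a small dataset, here is a nearest-centroid classifier in pure Python. The feature values are made up for illustration.

```python
# Nearest-centroid classification: a classical, discriminative technique
# that works well on small datasets. All data here is hypothetical.

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(sample, centroids):
    """Assign the sample to the class with the nearest centroid."""
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: sq_dist(sample, centroids[label]))

# Two tiny classes of 2-D feature vectors (e.g. weight in kg, ear length in cm).
training = {
    "cat": [(4.0, 6.5), (3.5, 7.0), (4.5, 6.0)],
    "dog": [(20.0, 10.0), (25.0, 12.0), (18.0, 9.5)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((4.2, 6.8), centroids))   # a cat-like sample
print(classify((22.0, 11.0), centroids)) # a dog-like sample
```

Six labelled examples are enough here — no billion-parameter model required, and every decision is fully explainable as "closest class average".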

In that sense, AI is less like a hammer to throw at every problem and more like a toolbox with a variety of tools, and applying the right kind of AI to the right problem goes a long way toward removing its mystique and risks. Pushing for specific language the next time you hear ‘AI’ will bring you clarity.

I’ll outwardly smile but die a little inside if you ever use a large language model as a calculator.
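If the task really is arithmetic, a few lines of deterministic code are the right tool. A minimal sketch of a safe arithmetic evaluator built on Python’s standard ast module:

```python
# If a task is deterministic arithmetic, use a deterministic tool —
# not a large language model. This walks the parsed expression tree
# and evaluates only pure arithmetic, rejecting anything else.
import ast
import operator

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculate(expr: str) -> float:
    """Evaluate a pure arithmetic expression, exactly and repeatably."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calculate("(17 + 3) * 2.5"))  # exact, repeatable: 50.0
```

The answer is the same every time, costs effectively nothing, and cannot hallucinate — three properties an LLM-as-calculator does not have.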

3. Think critically about designing AI systems to help you, or you might find them controlling you.

A well-cited paper characterizes two ways different people effectively interact with AI by likening them to ‘cyborgs’ and ‘centaurs’. Centaurs create a clear division of labour and treat AI as tools, while cyborgs integrate AI deeply into their thought and work processes in more flexible and dynamic ways.

Both are valid patterns of human-AI teaming, but what’s most dangerous and insidious is the ‘reverse centaur’, coined by Cory Doctorow, where AI systems lead and direct, and humans are reduced to serving the machine. An example is his description of delivery workers at the mercy of AI systems that optimize outcomes for the company by monitoring them to the nth degree, down to video cameras in vehicles tracking the movement of their eyeballs.

A related point on the ‘dark patterns’ of AI that continue to spread at pace is the realisation that the goals an AI system has are often the goals of its developers, not its users. Recommendation engines that power social media feeds to maximise engagement are a prime example, essentially arraying a force of engineers, psychologists, and designers to focus their talent against you to fuel advertising revenue machines — with addiction, misinformation, and other second-order ills an inconvenient but largely ignored fact.

This is especially insidious as corporations can also hide behind the narrative that ‘we’re only giving customers what they want’. But in this case, corporations are preying on our baser ‘system 1’ lizard brains (often effectively hijacking our minds by design), rather than serving the better intentions of our deliberative ‘system 2’ brains.

Actively design AI systems such that they serve your best self.
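As a toy illustration of how the objective an AI system optimises determines whom it serves, here is a minimal sketch; the feed items and scores are entirely hypothetical.

```python
# A toy feed ranker showing how the choice of objective — the
# developer's engagement metric vs the user's stated intent —
# changes what the system surfaces. All items/scores are made up.

items = [
    # (title, predicted_engagement, user_stated_value)
    ("Outrage bait", 0.95, 0.10),
    ("Long-form explainer", 0.40, 0.90),
    ("Celebrity gossip", 0.80, 0.20),
    ("Course you enrolled in", 0.30, 0.95),
]

def rank(feed, objective):
    """Sort a feed by the given objective function, highest first."""
    return [title for title, _, _ in sorted(feed, key=objective, reverse=True)]

engagement_feed = rank(items, objective=lambda it: it[1])
user_value_feed = rank(items, objective=lambda it: it[2])

print(engagement_feed[0])  # what maximising engagement surfaces first
print(user_value_feed[0])  # what serving the user's intent surfaces first
```

Same items, same ranking code — only the objective differs, and with it, who the system actually serves.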

4. Think critically about how Generative AI blurs out uniqueness and how to preserve your unique self.

A recent study showed that one of the unintended consequences of large numbers of people using generative AI to produce content is that online content increasingly looks the same. And this persists despite variations in systems, prompts, and usage.

The same study also suggests people prefer content made without generative AI — while the study found that not using AI results in fewer posts online, content posted without generative AI gets more positive engagement. That is unsurprising, and well encapsulated by the quote:

“Why would I bother to read something someone couldn’t be bothered to write?”

— BBC Feature

This suggests that both for maximising your external impact and for developing your internal identity, there has never been a more important time to find and stay true to your own voice.

5. Think critically about how using Large Language Models affects our brains and mental fitness.

A study by the MIT Media Lab compared brain activity on a task between people using 1) just their brains, 2) search engines, and 3) large language models. The results present robust evidence that our brains work differently when assisted by technology.

The brain-only group exhibited the strongest, widest-ranging brain activity; the search-engine group showed intermediate engagement; and the LLM-assisted group elicited the weakest overall brain response.

Moreover, LLM users reported less ownership and had trouble quoting their own work. And over time, the LLM users “consistently underperformed at neural, linguistic, and behavioural levels”.

As we choose to use AI to help us with cognitive tasks, we lose our connection to the task and the benefits of completing it ourselves, with long-term implications.

Just as the move away from manual work towards sedentary lifestyles introduced risks to our physical health, necessitating recommendations for deliberate physical activity to compensate, LLMs are already quietly endangering our mental fitness.

6. Think critically about how AI is impacting our worldview.

The previous point brings us to how we think about the impacts of AI. Much discussion centres on how AI affects our work and threatens to automate away our jobs, but that is only part of the story.

Firstly, just because a task is ‘exposed to AI’ doesn’t mean it should be automated, and jobs are more than a collection of tasks. There are relationships, accountability, and ethical judgement, not to mention human presence.

One irony of agentic AI is how little we discuss how much agency we have to design where and how we implement AI and point it in the right direction.

A more useful way to think through the effect of AI on any given area is through the ‘4 Ws’ — Workbench, Work, Workers, Worldview. Workbench is the tool or technology being used for work. Work is about the tasks and activities being performed and the structures that support them. Workers refers to the people doing the work and other stakeholders, and Worldview is about the unspoken assumptions and the way things work in a domain.

Take an example from education, where there are discussions about students using ChatGPT and similar AI systems for their homework and exams. There’s a lot of hand-wringing over how new generative AI tools like ChatGPT (workbench) are used to do assignments (work). But rather than fixating on detecting use of generative AI in isolation, a better approach would be to think about how students (workers) are changing — learning less of the material while picking up AI literacy — and how the education system must adapt (worldview) to the new reality.
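To make the lens concrete, the 4 Ws can be sketched as a simple structure; the field contents below paraphrase the education example and are illustrative only.

```python
# The '4 Ws' lens as a simple data structure, applied to the
# education example from the text. Field contents are illustrative.
from dataclasses import dataclass

@dataclass
class FourWs:
    workbench: str  # the tool or technology used for the work
    work: str       # the tasks and activities being performed
    workers: str    # the people doing the work and other stakeholders
    worldview: str  # the unspoken assumptions of the domain

education = FourWs(
    workbench="ChatGPT and similar generative AI tools",
    work="assignments, homework, and exams",
    workers="students learning less material but gaining AI literacy",
    worldview="how the education system must adapt to assess real learning",
)

def analyse(case: FourWs) -> list:
    """Lay out the four questions the lens asks of any AI change."""
    return [f"{field}: {value}" for field, value in vars(case).items()]

for line in analyse(education):
    print(line)
```

Forcing yourself to fill in all four fields is the point — most AI debates stop at the first two.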

7. Think critically about the AI stories being told and look for the missing stories.

There’s an enormous amount of money at stake — over a trillion dollars for many of the world’s largest AI corporations. This creates immense pressure for these corporations to accelerate their flavour of AI adoption, and this drives AI ‘hype’ through marketing spend, high-profile media interviews, and PR machines that can spin facts in self-serving ways. Most recently, news broke of AI corporations paying influencers $400,000-$600,000 to post about AI.

It is important to understand that many of the stories we’re being told about AI overwhelmingly represent the views of people selling AI, rather than people using it.

This has been called the AI story crisis, where the dominant narratives that shape public discourse on AI come from a skewed sample of storytellers, which may distract and mislead public understanding and conceptions of AI.

I’d go further to suggest that these narratives shape more than ‘the public’; they extend into governments and corporations, which raises the stakes.

AI can’t do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can’t do your job.

— Cory Doctorow

In this environment, be discerning and look beyond the stories being told to the stories going untold. Think through who is behind each AI story, and what drives them: is what you are reading someone’s authentic opinion, or someone being incentivised to frame the story a certain way? Question the framing of the story, and think about the stakeholders whose voices are not being heard.

And as far as authentic opinions go — one of the best ways to check the stories is to experience AI for yourself first-hand.

8. Think critically about the supply chain behind the AI industry.

As a data scientist, I see three important inputs to an AI model: its training data, the labour used to annotate and process that data, and the compute used in model training and usage (often called ‘inference’). Unfortunately, a large part of generative AI is built on a supply chain where each of these three components is far from ideal.

Karen Hao’s well-written book Empire of AI does a better job than I can in spelling out the dysfunctions. But in short:

  • Data used for training large language models is currently the subject of multiple lawsuits in which AI corporations are accused of illegally copying millions of articles to train AI models.
  • Environmental issues abound with the current generation of AI models. Training is highly energy-intensive, and so is running user queries. Disclosure is often problematically sketchy, but points to a hefty climate footprint, with costs potentially being passed on to consumers.
  • Labour in the AI industry may call to mind well-paid data scientists and software engineers in slick city offices with free lunches, but in reality, large language models are also powered by large offshore workforces whose work involves flagging, annotating, and processing disturbing content, including toxic and harmful material, graphic violence, and worse. Much of this activity occurs in low-cost countries under exploitative conditions, at great cost to mental health.

There are better ways to create AI systems, and we should resist letting this become the norm.

9. Think critically about adoption time horizons to parse the true impact of AI.

Coming full circle to our doom-scrolling, one lens suggests that the world is changing overnight, with the major providers announcing an average of two model releases a month in 2025.

Nevertheless, the release of a new model is a far cry from changing the world. I find it useful to distinguish between invention (a new model breakthrough and its release), adoption (the model being implemented in a usable product), and, most importantly, diffusion (when it slowly spreads through organisations and households over time).

Taking the narrative of AI replacing jobs as an example, jobs are far more than the sum of their tasks, with deep context, accountability, and relationships. In addition, while new foundation models are performing well in difficult exams, such as those in finance and medicine, there are significant lags between the invention of these models and their broad diffusion into organisations and society.

Generally, my experience in large corporations suggests that while invention may be measured in days as the news sweeps through the organisation, the adoption of models into AI systems and products tends to take weeks, and diffusion is a much slower process that can stretch into years as habits form, work processes slowly reconfigure, and technology grinds through a host of individual, cultural, and organisational barriers.

AI has been compared to tractors in its ability to displace workers, much as tractors eventually displaced horses in agriculture. With the benefit of hindsight, it’s instructive that tractors took a full generation to overtake horses. And while there are arguments that in the digital world things move more quickly, it is likely that true diffusion will take years.
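The invention-adoption-diffusion lag can be sketched with a logistic S-curve, a standard model of technology diffusion; the midpoint and steepness below are hypothetical, not estimates for AI or tractors.

```python
# Invention can be dated to a day, but diffusion follows a slow S-curve.
# A minimal logistic-adoption sketch; parameters are hypothetical.
import math

def adoption(years_since_invention, midpoint=8.0, steepness=0.7):
    """Fraction of eventual adopters reached after t years (logistic curve)."""
    return 1.0 / (1.0 + math.exp(-steepness * (years_since_invention - midpoint)))

for t in [0, 4, 8, 12, 16]:
    print(f"year {t:2d}: {adoption(t):5.1%} of eventual adopters")
```

The shape is the point: almost nothing changes in the first years after invention, then adoption accelerates through the midpoint and takes years more to saturate.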

10. You can make a difference in the way we experience AI.

And in the meantime, despite popular narratives making AI sound like something that happens to us in an inevitable way, the way we experience ‘AI’ is not like a train on rails with humanity tied to the track, awaiting the proverbial wreck.

It’s more useful to think of AI like the early days of modern transportation itself. On one hand, we have a sense that it’s a fundamental system that will shape our lives far into the future. On the other hand, it’s sobering to note that while the first modern automobile was invented around 1885, car door keys only came in 1908, the 3-point seat belt was only invented in 1958, and international road signs only became standardised in 1968.

This time gap between the initial adoption of modern cars and having effective and widespread rules of the road is where we are today with AI.

We have work to do — cars and their engines (AI applications and their models) need to be tested, car locks (AI safety features) need to be installed, drivers need seatbelts and driving licenses (users need AI safety and accreditation), and road signs (AI regulations) need to be harmonized.

The future is one you can steer, today.


References:

Randazzo, Steven, Hila Lifshitz, Katherine C. Kellogg, Fabrizio Dell’Acqua, Ethan Mollick, François Candelon, and Karim R. Lakhani. “Cyborgs, Centaurs and Self-Automators: The Three Modes of Human-GenAI Knowledge Work and Their Implications for Skilling and the Future of Expertise.” Harvard Business School Working Paper, No. 26-036, December 2025.

Liu, Chaoran and Wang, Tong and Yang, S. Alex, Generative AI and Content Homogenization: The Case of Digital Marketing (July 26, 2025). Available at SSRN: https://ssrn.com/abstract=5367123 or http://dx.doi.org/10.2139/ssrn.5367123

Patel, Jaisal & Chen, Yunzhe & He, Kaiwen & Wang, Keyi & Li, David & Xiao, Kairong & Liu, Xiao-Yang. (2025). Reasoning Models Ace the CFA Exams. 10.48550/arXiv.2512.08270.

Kasagga A, Sapkota A, Changaramkumarath G, Abucha JM, Wollel MM, Somannagari N, Husami MY, Hailu KT, Kasagga E. Performance of ChatGPT and Large Language Models on Medical Licensing Exams Worldwide: A Systematic Review and Network Meta-Analysis With Meta-Regression. Cureus. 2025 Oct 10;17(10):e94300. doi: 10.7759/cureus.94300. PMID: 41230320; PMCID: PMC12603599.
