Why the world is seeking to ditch US AI models


Consequently, some policymakers and business leaders—in Europe, specifically—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is especially true for AI.

One of the clearest examples of that is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more heavily on LLMs for content moderation, this problem will likely worsen, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the issue is that these systems are trained mostly on data from the English-speaking world (and American English at that), and consequently they perform less well in local languages and contexts. 

Even multilingual language models, which are supposed to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s responses to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.   

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. Such systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages or even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these are likely to be missed or miscategorized by language models and automated systems trained mostly on Anglo-American English. The founder of the startup Shhor AI, for instance, hosted a panel at RightsCon to discuss its new content moderation API focused on Indian vernacular languages.

Many similar solutions have been in development for years—and we’ve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies. 

Still, this moment feels a bit different. The second Trump administration, which shapes the actions and policies of American tech companies, is clearly a major factor. But there are others at play. 
