Striking the Balance: Global Approaches to Mitigating AI-Related Risks


It’s no secret that for the past few years, modern technologies have been pushing ethical boundaries under existing legal frameworks that weren’t built to accommodate them, resulting in legal and regulatory minefields. In an attempt to combat the consequences, regulators in different countries and regions are choosing to proceed in different ways, increasing global tensions when agreement can’t be found.

These regulatory differences were highlighted at the recent AI Action Summit in Paris. The final statement of the event focused on matters of inclusivity and openness in AI development. Interestingly, it only broadly mentioned safety and trustworthiness, without emphasising specific AI-related risks, such as security threats. Although the statement was drafted by 60 nations, the UK and US were conspicuously missing from its signatories, which shows how little consensus there is right now among key countries.

Tackling AI risks globally

AI development and deployment is regulated differently in each country. Nonetheless, most approaches fall somewhere between the two extremes: the stances of the US and the European Union (EU).

The US way: first innovate, then regulate

In the US, there are no federal-level acts regulating AI specifically; instead, the country relies on market-based solutions and voluntary guidelines. Nevertheless, there are some key pieces of legislation relevant to AI, including the National AI Initiative Act, which aims to coordinate federal AI research, the Federal Aviation Administration Reauthorisation Act and the National Institute of Standards and Technology’s (NIST) voluntary risk management framework.

The US regulatory landscape remains fluid and subject to major political shifts. For instance, in October 2023, President Biden issued an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, setting standards for critical infrastructure, enhancing AI-driven cybersecurity and regulating federally funded AI projects. However, in January 2025, President Trump revoked this executive order, in a pivot away from regulation and towards prioritising innovation.

The US approach has its critics. They note that its “fragmented nature” results in a complex web of rules that “lack enforceable standards” and has “gaps in privacy protection.” Nevertheless, the stance as a whole is in flux: in 2024, state legislators introduced almost 700 pieces of new AI legislation, and there were multiple hearings on AI in governance as well as on AI and intellectual property. Although it’s apparent that the US government doesn’t shy away from regulation, it is clearly searching for ways to implement it without compromising innovation.

The EU way: prioritising prevention

The EU has chosen a different approach. In August 2024, the European Parliament and Council introduced the Artificial Intelligence Act (AI Act), widely considered the most comprehensive piece of AI regulation to date. Employing a risk-based approach, the act imposes strict rules on high-risk AI systems, such as those used in healthcare and critical infrastructure. Low-risk applications face only minimal oversight, while certain applications, such as government-run social scoring systems, are banned outright.

In the EU, compliance is mandatory not only within its borders but also for any provider, distributor or user of AI systems operating in the EU or offering AI solutions to its market, even if the system was developed elsewhere. This is likely to pose challenges for US and other non-EU providers of integrated products as they work to adapt.

Criticisms of the EU’s approach include its alleged failure to set a gold standard for human rights. Excessive complexity and a lack of clarity have also been noted. Critics are concerned about the EU’s highly exacting technical requirements, because they come at a time when the EU is seeking to bolster its competitiveness.

Finding the regulatory middle ground

Meanwhile, the UK has adopted a “lightweight” framework that sits somewhere between the EU and the US, built on core values such as safety, fairness and transparency. Existing regulators, like the Information Commissioner’s Office, have the power to enforce these principles within their respective domains.

The UK government has published an AI Opportunities Action Plan, outlining measures to invest in AI foundations, drive cross-economy adoption of AI and foster “homegrown” AI systems. In November 2023, the UK founded the AI Safety Institute (AISI), which evolved from the Frontier AI Taskforce. AISI was created to evaluate the safety of advanced AI models, collaborating with major developers to achieve this through safety testing.

However, criticisms of the UK’s approach to AI regulation include limited enforcement capabilities and a lack of coordination between sectoral laws. Critics have also noted the absence of a central regulatory authority.

Like the UK, other major countries have also found their own place somewhere on the US-EU spectrum. For instance, Canada has introduced a risk-based approach with the proposed AI and Data Act (AIDA), which is designed to strike a balance between innovation, safety and ethical considerations. Japan has adopted a “human-centric” approach to AI by publishing guidelines that promote trustworthy development. Meanwhile, in China, AI regulation is tightly controlled by the state, with recent laws requiring generative AI models to undergo security assessments and align with socialist values. Similarly to the UK, Australia has released an AI ethics framework and is looking into updating its privacy laws to address emerging challenges posed by AI innovation.

How to establish international cooperation?

As AI technology continues to evolve, the differences between regulatory approaches are becoming increasingly apparent. The divergent stances on data privacy, copyright protection and other aspects make a coherent global consensus on key AI-related risks harder to reach. In these circumstances, international cooperation is crucial to establish baseline standards that address key risks without curtailing innovation.

The answer to international cooperation could lie with global organisations like the Organisation for Economic Cooperation and Development (OECD), the United Nations and several others, which are currently working to establish international standards and ethical guidelines for AI. The path forward won’t be easy, as it requires everyone in the industry to find common ground. Considering that innovation is moving at light speed, the time to discuss and agree is now.

What are your thoughts on this topic?
Let us know in the comments below.
