
Five things you need to know about the EU's new AI Act


The AI Act was conceived as a landmark bill that would mitigate harm in areas where using AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, as well as banning uses that pose an "unacceptable risk." 

"High risk" AI systems will have to adhere to strict rules that require risk-mitigation systems, high-quality data sets, better documentation, and human oversight, for example. The vast majority of AI uses, such as recommender systems and spam filters, will get a free pass. 

The AI Act is a big deal in that it will introduce important rules and enforcement mechanisms to a hugely influential sector that is currently a Wild West. 

Here are MIT Technology Review's key takeaways: 

1. The AI Act ushers in important, binding rules on transparency and ethics

Tech companies love to talk about how committed they are to AI ethics. But when it comes to concrete measures, the conversation dries up. And anyway, actions speak louder than words. Responsible AI teams are often the first to see cuts during layoffs, and in truth, tech companies can decide to change their AI ethics policies at any time. OpenAI, for example, started off as an "open" AI research lab before closing up public access to its research in order to protect its competitive advantage, just like every other AI startup. 

The AI Act will change that. The regulation imposes legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or with biometric categorization or emotion recognition systems. It will also require them to label deepfakes and AI-generated content, and to design systems in such a way that AI-generated media can be detected. This goes a step beyond the voluntary commitments that leading AI companies made to the White House to simply develop AI provenance tools, such as watermarking. 

The bill will also require all organizations that provide essential services, such as insurance and banking, to conduct an impact assessment on how using AI systems will affect people's fundamental rights. 

2. AI companies still have a lot of wiggle room

When the AI Act was first introduced, in 2021, people were still talking about the metaverse. (Can you imagine!) 

Fast-forward to now, and in a post-ChatGPT world, lawmakers felt they had to take so-called foundation models (powerful AI models that can be used for many different purposes) into account in the regulation. This sparked intense debate over what kinds of models should be regulated, and whether regulation would kill innovation. 
