
Artificial intelligence regulation bill passed by European Parliament


European Parliament meeting (photo = shutterstock)

The European Union (EU) artificial intelligence (AI) regulation bill has passed the European Parliament. It is the world’s first AI regulation law.

Numerous foreign media outlets, including the Guardian, reported on the 14th (local time) that the European Parliament passed the ‘Artificial Intelligence Act (AI Act)’ with 499 votes in favor, 28 against, and 93 abstentions.

The European Parliament, the European Commission (EC) and the EU Council of Ministers will now begin negotiating a final draft through trilateral consultations based on this bill. Some of the bill’s contents may change during this process, but the EU is expected to reach an agreement on the final draft within this year.

Even once the final draft is ready, it will take more time for the law to actually take effect. Each of the 27 EU member states must go through a parliamentary approval process and incorporate it into its respective national laws.

In addition, an adjustment period may be required if companies subject to the law request a grace period to adapt to the new regulations.

As a result, the Guardian predicted that the AI Act will not come into effect until 2026.

The EU is pushing for an interim agreement so that tech giants can regulate themselves voluntarily until the law takes effect.

The AI Act (AIA) classifies AI according to its level of risk and imposes three levels of regulation accordingly.

The first category is ‘very dangerous AI’. Because this category carries a high risk of human rights violations, its use is prohibited. It covers facial recognition that monitors the behavior of all citizens in real time in public places, and AI technology related to social scoring, which rates citizens’ behavior. Violating the ban is subject to fines of up to 6% of global sales or 30 million euros (roughly 41.6 billion won).

(Photo = shutterstock)

The second category is ‘high-risk AI’: AI applied to systems that have a major impact on safety and human rights. This includes CCTV used for criminal detection, personnel management systems such as recruitment and promotion, critical infrastructure such as power grids that could cause great harm if misoperated, and AI applied to loan or subsidy decisions. Its use is allowed, but subject to strict requirements such as suitability and bias assessments.

The third is ‘limited-risk AI’. Transparency is encouraged, such as informing users that they are dealing with AI.

The AIA was drafted by the European Commission in April 2021, and after review by the Council, an amendment was released in December last year. Then, as ‘ChatGPT’ gained sensational popularity, obligations were added for generative AI, such as disclosing training data and preparing measures to prevent abuse.

Among the AIA’s provisions, those most likely to change during the negotiation process between the EU’s institutions are the ones related to the ban on facial recognition technology and the disclosure of generative AI training data. Some argue that facial recognition technology should be allowed as an exception for national security or crime prevention, and companies contend that it is technically difficult to disclose generative AI training data.

Since the AIA applies to all operators in the European market, it is very likely to be adopted as a global standard. The GDPR, which was first enacted in Europe, was adopted by other countries, including the United States, and has become a global standard.

Reporter Jeong Byeong-il jbi@aitimes.com
