
EU artificial intelligence law issues face recognition and generative AI technology regulation

The most important issues in the artificial intelligence (AI) regulatory bill passed by the European Union (EU) Parliament are the ban on facial recognition and the technical rules surrounding disclosure of generative AI training data.

These are key points of contention in the trilateral negotiations between the EU Commission, the Parliament and the Council of Ministers, and the possibility that the provisions will be amended during the negotiation process is being raised.

Facial Recognition Technology Regulation

Facial recognition technology analyzes an image of an individual’s face to verify identity. The EU Artificial Intelligence Act prohibits capturing and analyzing the faces of unspecified individuals without consent, classifying this as “very dangerous AI” that violates human rights.
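For illustration only (this is not text from the Act), identity verification with face recognition typically comes down to comparing a numerical encoding of a probe image against a stored reference image. The sketch below assumes the open-source face_recognition library; the file names are hypothetical placeholders.

```python
# Minimal sketch of face-based identity verification, assuming the open-source
# "face_recognition" library. File names are hypothetical placeholders.
import face_recognition

# Load a stored reference photo and the image to be checked.
reference_image = face_recognition.load_image_file("reference.jpg")
probe_image = face_recognition.load_image_file("probe.jpg")

# Each detected face is reduced to a 128-dimensional encoding.
reference_encodings = face_recognition.face_encodings(reference_image)
probe_encodings = face_recognition.face_encodings(probe_image)

if reference_encodings and probe_encodings:
    # compare_faces returns True when the encodings fall within a distance tolerance.
    match = face_recognition.compare_faces(
        [reference_encodings[0]], probe_encodings[0], tolerance=0.6
    )[0]
    print("Identity match" if match else "No match")
else:
    print("No face detected in one of the images")
```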

During the European Parliament’s review, there was a proposal to allow law enforcement agencies to use the technology as an exception, but the bill that passed contains a complete ban.

Nonetheless, members of the European People’s Party (EPP), a center-right group in the European Parliament, continue to argue that the technology could be useful for national security, crime prevention, counterterrorism, and the search for missing persons.

The EU Council of Ministers, made up of ministers from the member states, is also expected to strongly insist during the upcoming trilateral negotiations that exceptions to the ban be granted for law enforcement agencies and border guards.

In fact, several foreign media outlets, including the Times and the Guardian, report that European leaders are expected to withdraw the total ban on facial recognition technology during negotiations on the final draft of the AIA.


Generative AI Regulation

When it comes to generative AI, the provisions mandating disclosure of training data to ensure transparency have become a hot potato. Some form of adjustment seems unavoidable, as the companies subject to the law are voicing their objections, saying the requirement is not technically feasible.

Article 28b(4)(c) of the AIA requires providers of generative AI systems to “document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law”.

However, generative AI developers claim that it is impossible to trace the copyright of individual items in massive training datasets. For instance, OpenAI’s language model ‘GPT-3’ was trained on 45 terabytes of text data, and the image generation tool ‘Stable Diffusion’ was trained on 5.8 billion images, so finding the copyright holder of each item is not feasible.
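One possible reading of a “sufficiently detailed summary” is an aggregate report by data source and licence rather than per-item attribution. The sketch below is purely illustrative; the manifest file name and its columns (url, licence) are assumptions, not anything specified by the AIA or used by the companies mentioned.

```python
# Hypothetical sketch: aggregate a dataset manifest into a summary by source
# domain and licence, instead of tracing copyright item by item.
# "manifest.csv" and its column names are assumptions for illustration only.
import csv
from collections import Counter

by_source = Counter()
by_licence = Counter()

with open("manifest.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # expected columns: url, licence
        domain = row["url"].split("/")[2] if "://" in row["url"] else "unknown"
        by_source[domain] += 1
        by_licence[row.get("licence", "unknown")] += 1

print("Top sources:", by_source.most_common(10))
print("Licences:", dict(by_licence))
```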

Sam Altman, CEO of OpenAI, said in a discussion held at University College London (UCL) on the 24th of last month, “We will try to comply with the requirements of the EU AI law, but it is unclear whether we can meet all of them because of technical limitations. If we cannot comply with the law, we will cease operations (in the EU).”

In particular, he said, “I hear that the current draft of the AI Act is over-regulated, but it may be revised later.” When EU Commissioner Thierry Breton countered that “the AI bill is not up for negotiation,” Altman softened his stance, saying, “Of course, I have no intention of leaving Europe.”

Generative AI companies also point out that copyright laws in Europe and the United States differ, which could lead to confusion and litigation.

The AIA also requires companies offering generative AI services to have safeguards in place to prevent their technology from being misused. On this point, the companies complain that it is difficult to put appropriate mechanisms in place when even marking AI-generated content is technically challenging. As a result, the possibility of adjusting the regulations on generative AI is also being raised.
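As a hedged illustration of what “marking” generated content can mean in practice, the sketch below embeds a simple provenance tag in a PNG file’s metadata using the Pillow library. The tag name and file paths are assumptions, and metadata of this kind is easily stripped, which is part of why companies describe reliable marking as difficult.

```python
# Illustrative sketch only: tag an AI-generated PNG with provenance metadata
# using Pillow. The key name and file paths are hypothetical; such tags can be
# removed easily, which is one reason reliable marking is considered hard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a provenance tag into the PNG text chunk.
image = Image.open("generated.png")
metadata = PngInfo()
metadata.add_text("ai_generated", "true; model=example-model")
image.save("generated_tagged.png", pnginfo=metadata)

# Read the tag back.
tagged = Image.open("generated_tagged.png")
print(tagged.text.get("ai_generated", "no tag found"))
```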

Reporter Jeong Byeong-il jbi@aitimes.com
