Tech Leaders Highlighting the Risks of AI & the Urgency of Robust AI Regulation

AI growth and advancement have been exponential over the past few years. Statista reports that by 2024, the global AI market will generate a staggering revenue of around $3,000 billion, compared with $126 billion in 2015. However, tech leaders are now warning us about the various risks of AI.

In particular, the recent wave of generative AI models like ChatGPT has introduced new capabilities in various data-sensitive sectors, such as healthcare, education, and finance. These AI-backed developments are vulnerable because of the many AI shortcomings that malicious agents can exploit.

Let’s discuss what AI experts are saying about these recent developments and highlight the potential risks of AI. We’ll also briefly touch on how these risks can be managed.

Tech Leaders & Their Concerns Related to the Risks of AI

Geoffrey Hinton

Geoffrey Hinton, a famous AI tech leader (and a godfather of the field) who recently quit Google, has voiced his concerns about the rapid development of AI and its potential dangers. Hinton believes that AI chatbots can become “quite scary” if they surpass human intelligence.

Hinton says:

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

Furthermore, he believes that “bad actors” can use AI for “bad things,” such as allowing robots to create their own sub-goals. Despite his concerns, Hinton believes that AI can bring short-term benefits, but that we should also invest heavily in AI safety and control.

Elon Musk

Elon Musk’s involvement in AI spans from his early investment in DeepMind in 2010 to co-founding OpenAI and incorporating AI into Tesla’s autonomous vehicles.

Although he is enthusiastic about AI, he frequently raises concerns about its risks. Musk says that powerful AI systems can be more dangerous to civilization than nuclear weapons. In an interview with Fox News in April 2023, he said:

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential — however small one may regard that probability — but it is non-trivial and has the potential of civilization destruction.”

Furthermore, Musk supports government regulations on AI to ensure safety from potential risks, even though “it’s not so fun.”

Pause Giant AI Experiments: An Open Letter Backed by Thousands of AI Experts

The Future of Life Institute published an open letter on 22nd March 2023. The letter calls for a temporary six-month halt on the development of AI systems more advanced than GPT-4. The authors express concern that the pace at which AI systems are being developed poses severe socioeconomic challenges.

Furthermore, the letter states that AI developers should work with policymakers to develop AI governance systems. As of June 2023, the letter has been signed by more than 31,000 AI developers, experts, and tech leaders. Notable signatories include Elon Musk, Steve Wozniak (co-founder of Apple), Emad Mostaque (CEO, Stability AI), Yoshua Bengio (Turing Award winner), and many more.

Counter Arguments on Halting AI Development

Two prominent AI leaders, Andrew Ng and Yann LeCun, have opposed the six-month ban on developing advanced AI systems, calling the pause a bad idea.

Ng says that although AI carries some risks, such as bias and the concentration of power, the value it creates in fields such as education, healthcare, and responsive coaching is tremendous.

Yann LeCun says that research and development should not be stopped, although the AI products that reach the end user can be regulated.

What Are the Potential Dangers & Immediate Risks of AI?

1. Job Displacement

AI experts believe that intelligent AI systems can replace cognitive and creative tasks. Investment bank Goldman Sachs estimates that around 300 million jobs could be automated by generative AI.

Hence, there should be regulations on the development of AI so that it doesn’t cause a severe economic downturn. There should also be educational programs for upskilling and reskilling employees to deal with this challenge.

2. Biased AI Systems

Biases prevalent among human beings about gender, race, or color can inadvertently permeate the data used for training AI systems, in turn making the AI systems themselves biased.

For instance, in the context of job recruitment, a biased AI system can discard the resumes of individuals from specific ethnic backgrounds, creating discrimination in the job market. In law enforcement, biased predictive policing could disproportionately target specific neighborhoods or demographic groups.

Hence, it is essential to have a comprehensive data strategy that addresses AI risks, particularly bias. AI systems must be frequently evaluated and audited to keep them fair; the short sketch below shows what one such check can look like.
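As a concrete (and deliberately simplified) illustration, the following Python sketch performs one common fairness check: comparing selection rates across demographic groups, often called demographic parity. The group labels and screening decisions are entirely hypothetical, standing in for the output of a resume-screening model:

```python
# Minimal fairness-audit sketch: compare selection rates across groups
# (demographic parity). All data below is hypothetical, for illustration only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening decisions produced by a resume-ranking model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}
print(f"Demographic parity gap: {gap:.2f}")  # large gaps warrant investigation
```

A real audit would, of course, use proper fairness tooling and multiple metrics (equalized odds, calibration, and so on), but the principle is the same: measure outcomes per group and investigate large disparities.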

3. Safety-Critical AI Applications

Autonomous vehicles, medical diagnosis and treatment, aviation systems, nuclear power plant control, and the like are all examples of safety-critical AI applications. These AI systems should be developed cautiously, because even minor errors can have severe consequences for human life or the environment.

For instance, the malfunctioning of the AI software called the Maneuvering Characteristics Augmentation System (MCAS) is attributed in part to the crashes of two Boeing 737 MAX aircraft, first in October 2018 and then in March 2019. Sadly, the two crashes killed 346 people.

How Can We Overcome the Risks of AI Systems? – Responsible AI Development & Regulatory Compliance

Responsible AI Development

Responsible AI (RAI) means developing and deploying fair, accountable, transparent, and secure AI systems that ensure privacy and follow legal regulations and societal norms. Implementing RAI can be complex given the broad and rapid development of AI systems.

Nevertheless, big tech companies have developed RAI frameworks, such as:

  1. Microsoft’s Responsible AI
  2. Google’s AI Principles
  3. IBM’s Trusted AI

AI labs across the globe can take inspiration from these principles or develop their own responsible AI frameworks to build trustworthy AI systems.

AI Regulatory Compliance

Since data is an integral component of AI systems, AI-based organizations and labs must comply with the following regulations to ensure data security, privacy, and safety.

  1. GDPR (General Data Protection Regulation) – a data protection framework by the EU.
  2. CCPA (California Consumer Privacy Act) – a California state statute for privacy rights and consumer protection.
  3. HIPAA (Health Insurance Portability and Accountability Act) – a U.S. law that safeguards patients’ medical data.
  4. EU AI Act and Ethics Guidelines for Trustworthy AI – the European Commission’s AI regulatory frameworks.

There are various regional and local laws enacted by different countries to protect their residents. Organizations that fail to ensure regulatory compliance around data can face severe penalties. For instance, GDPR sets a fine of €20 million or 4% of annual global turnover, whichever is higher, for serious infringements such as unlawful data processing, unproven data consent, violation of data subjects’ rights, or unprotected data transfer to an international entity; the snippet below illustrates the “whichever is higher” rule.
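To make that fine rule concrete, here is a tiny illustrative Python snippet applying the higher of the two caps; the turnover figures are hypothetical:

```python
# Hypothetical illustration of the GDPR maximum-fine rule for serious
# infringements: €20 million or 4% of annual global turnover, whichever is higher.
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(gdpr_max_fine(100_000_000))    # 20000000.0 -> flat €20M cap applies
print(gdpr_max_fine(2_000_000_000))  # 80000000.0 -> 4% of turnover applies
```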

AI Development & Regulations – Present & Future

With every passing month, AI advancements reach unprecedented heights, but the accompanying AI regulations and governance frameworks are lagging behind. They need to be more robust and specific.

Tech leaders and AI developers have been ringing alarm bells about the risks of AI if it is not adequately regulated. Research and development in AI can bring further value to many sectors, but it’s clear that careful regulation is now imperative.

For more AI-related content, visit unite.ai.
