US tech policy must keep pace with AI innovation
As innovation in artificial intelligence (AI) outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented technology wave reaches its full potential as a positive contribution to economic and societal progress.

The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives nearly two years ago. Then, the AI Act, as it was understood at the time, was "an objective and measured approach to innovation and societal considerations." Today, leaders of technology businesses and the US government are coming together to map out a unified vision for responsible AI.

The power of generative AI

OpenAI’s release of ChatGPT captured the imagination of technology innovators, business leaders and the general public last year, and consumer interest in and understanding of the capabilities of generative AI exploded. However, with artificial intelligence becoming mainstream, including as a political issue, and given humans’ propensity to experiment with and test systems, the potential for misinformation, the impact on privacy, and the risks of cybersecurity breaches and fraudulent behavior are in danger of quickly becoming an afterthought.

In an early effort to address these potential challenges and ensure responsible AI innovation that protects Americans’ rights and safety, the White House has announced new actions to promote responsible AI.

In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These include:

  • New investments to power responsible American AI R&D.
  • Public assessments of existing generative AI systems.
  • Policies to ensure the U.S. Government is leading by example in mitigating AI risks and harnessing AI opportunities.

New investments

Regarding new investments, the National Science Foundation’s $140 million in funding to launch seven new National AI Research Institutes pales in comparison with what has been raised by private corporations.

While directionally correct, the U.S. Government’s investment in AI broadly is microscopic compared with other countries’ government investments, namely China’s, which began in 2017. An immediate opportunity exists to amplify the impact of this investment through academic partnerships for workforce development and research. The government should fund AI centers alongside academic and corporate institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for businesses through the power of AI.

Collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help bridge the gap between theory and practical application by bringing together experts from academia, industry and government to collaborate on cutting-edge research and development projects that have real-world applications. By partnering with major enterprises, these centers can help companies better integrate AI into their operations, improving efficiency, reducing costs and delivering better consumer outcomes.

Moreover, these centers help to educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience with real-world projects and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the U.S. government can help shape a future in which AI enhances, rather than replaces, human work. As a result, all members of society can benefit from the opportunities created by this powerful technology.

Public assessments

Model assessment is critical to ensuring that AI models are accurate, reliable and free of bias, all essential for successful deployment in real-world applications. For example, imagine an urban planning use case in which generative AI is trained on data from redlined cities with historically underrepresented poor populations. Unfortunately, it will simply lead to more of the same. The same goes for bias in lending, as more financial institutions use AI algorithms to make lending decisions.

If these algorithms are trained on data that discriminates against certain demographic groups, they could unfairly deny loans to those groups, leading to economic and social disparities. Although these are only a few examples of bias in AI, the issue must stay top of mind no matter how quickly new AI technologies and techniques are developed and deployed.
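To make the lending example concrete, one simple check assessors use is the demographic parity difference: the gap in approval rates between demographic groups. The sketch below is a minimal illustration of that metric; the group labels and lending decisions are entirely hypothetical, not drawn from any real dataset or named assessment program.

```python
def demographic_parity_difference(decisions, groups):
    """Return the largest gap in approval rate across groups.

    decisions: sequence of 1 (loan approved) / 0 (loan denied)
    groups:    sequence of demographic group labels, parallel to decisions
    """
    counts = {}  # group -> (total applicants, approvals)
    for decision, group in zip(decisions, groups):
        total, approved = counts.get(group, (0, 0))
        counts[group] = (total + 1, approved + decision)
    approval_rates = {g: a / t for g, (t, a) in counts.items()}
    return max(approval_rates.values()) - min(approval_rates.values())

# Illustrative data: group "A" is approved in 4 of 5 cases,
# group "B" in only 1 of 5, so the gap is 0.6.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(f"demographic parity difference: {gap:.2f}")
```

A gap near zero suggests the model approves all groups at similar rates; a large gap, as in this toy example, is a signal that the model (or its training data) warrants further scrutiny. Real assessments combine several such metrics, since no single number captures fairness.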

To combat bias in AI, the administration has announced a new opportunity for model assessment at the DEF CON 31 AI Village, a forum for researchers, practitioners and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model assessment is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, leveraging a platform offered by Scale AI.

In addition, the assessment will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development, with the administration directly engaging with enterprises and capitalizing on the expertise of technical leaders in the space, whose organizations have become corporate AI labs.

Government policies

With respect to the third action, policies to ensure the U.S. government is leading by example in mitigating AI risks and harnessing AI opportunities, the Office of Management and Budget is to draft policy guidance on the use of AI systems by the U.S. Government for public comment. Again, no timeline or details for these policies have been given, but an executive order on racial equity issued earlier this year is expected to be at the forefront.

The executive order includes a provision directing government agencies to use AI and automated systems in a manner that advances equity. For these policies to have a meaningful impact, they must include incentives and repercussions; they cannot be merely optional guidance. For example, NIST standards for security are effectively requirements for deployment by most governmental bodies. Failure to adhere to them is, at minimum, incredibly embarrassing for the individuals involved and grounds for personnel action in some parts of the federal government. Governmental AI policies, as part of NIST or otherwise, must be comparable to be effective.

Moreover, the cost of adhering to such regulations must not become an obstacle to startup-driven innovation. For instance, what could be achieved with a framework in which the cost of regulatory compliance scales with the size of the business? Finally, as the government becomes a significant buyer of AI platforms and tools, it is paramount that its policies become the guiding standard for building such tools. Make adherence to this guidance a literal, or even effective, requirement for purchase (e.g., the FedRAMP security standard), and these policies can move the needle.

As generative AI systems become more powerful and widespread, it is essential for all stakeholders, including founders, operators, investors, technologists, consumers and regulators, to be thoughtful and intentional in pursuing and engaging with these technologies. While generative AI, and AI more broadly, has the potential to revolutionize industries and create new opportunities, it also poses significant challenges, particularly around issues of bias, privacy and ethical considerations.

Therefore, all stakeholders must prioritize transparency, accountability and collaboration to ensure that AI is developed and used responsibly and beneficially. This means investing in ethical AI research and development, engaging with diverse perspectives and communities, and establishing clear guidelines and regulations for developing and deploying these technologies.
