Jon Potter, Partner at The RXN Group – Interview Series


Jon Potter is a Partner and leads the State-Level AI Practice at RXN Group. He's an experienced lawyer, lobbyist, and communicator; has founded and led two industry associations and a consumer organization; and has consulted for many industries and organizations on legislative, communications, and issue advocacy challenges. Jon founded the Digital Media Association, Fan Freedom, and the Application Developers Alliance, was Executive Vice President of the worldwide communications firm Burson-Marsteller, and was a lawyer for several years with the firm of Weil, Gotshal & Manges.

As both a client and a consultant, Jon has overseen federal, multistate, and international advocacy campaigns and engaged lobbyists, communications firms, and law firms on three continents. Jon has testified in Congress and state legislatures several times, has spoken at dozens of conferences throughout the U.S. and internationally, and has been interviewed on national and local radio and TV news programs, including CNN, the Today Show, and 60 Minutes.

Can you provide an overview of the key trends in AI legislation across states in 2024?

2024 has been an unprecedented year for state-level AI legislation, marked by several trends.

The first trend is volume: 445 AI bills were introduced across 40 states, and we expect this will continue in 2025.

A second trend is a consistent dichotomy: bills about government use of AI were generally optimistic, while bills about AI generally and private-sector use of AI were skeptical and fearful. In addition, several states passed bills creating AI "task forces," which are now meeting.

What are the major concerns driving state legislators to introduce AI bills, and how do these concerns vary from state to state?

Many legislators want government agencies to improve with AI – to deliver better services more efficiently.

Among skeptics, topics of concern include fraudulent and abusive "deepfakes" related to elections, creative arts, and bullying; algorithmic discrimination; fear of AI-influenced "life critical" decisions and decision processes; personal privacy and private data use; and job displacement. Some concerns can be addressed in very specific laws, such as Tennessee's ELVIS Act and California's political deepfakes prohibition. Other concerns, such as risks of algorithmic discrimination and job displacement, are amorphous, so the legislative proposals are broad, non-specific, and of great concern.

Some lawmakers believe that today's social media and digital privacy challenges might have been mitigated by prophylactic laws, so they are rushing to pass laws to solve AI problems they fear will develop. Of course, it's very hard to define clear compliance guidance before actual problems emerge.

How can states craft AI legislation that encourages innovation while also addressing potential risks?

Legislation can do both when it regulates specific use cases and risks but not foundational multipurpose technology. A good example of this is the federal laws that govern uses of health, financial, and student education data but don't regulate computers, servers, or cloud computing. By not regulating multipurpose tools such as data storage and data processing technologies (including AI), the laws address real risks and define clear compliance rules.

It's vital that legislators hear from a wide range of stakeholders before passing new laws. Headlines suggest that AI is dominated by giant corporations investing billions of dollars to build extraordinarily powerful and risky models. But there are literally thousands of small and local companies using AI to build recycling, workplace bias, small business lending, and cybersecurity solutions. Legal Aid organizations and local nonprofits are using AI to help underserved communities. Lawmakers need to be confident that AI-skeptical legislation doesn't shut down small, local, and public-benefit AI activity.

From your experience, what are the most significant impacts that new AI bills have had on businesses? Are there specific industries that have been more affected than others?

We don't yet know the impacts of recent AI bills because very few have become law and the new laws are not yet making a difference. The broadest law, in Colorado, doesn't take effect until 2026, and most stakeholders, including the Governor and the sponsor, anticipate significant amendments before the effective date. The ELVIS Act in Tennessee and the California deepfakes laws should reduce fraudulent and criminal activity, and hopefully won't inhibit parody or other protected speech.

With so many states stepping up to legislate AI, how do you see the relationship between state-level AI regulations and potential federal action evolving?

This is a moving target with many variables. There are already several areas of law where the federal and state governments co-exist, and AI is relevant to many of them. For instance, there are federal and state workplace and financial services discrimination laws that are effective regardless of whether AI is used by alleged bad actors. One question for legislators is why new laws or regulations are needed solely because AI is used in an activity.

What are the common pitfalls or challenges that state legislators face when drafting AI-related bills? How can these be avoided?

It's all about education – taking the time to consult many stakeholders and understand how AI works in the real world. Laws based on fear of the unknown will never be balanced, and will always inhibit innovation and AI for good. Other countries will fill the innovation void if the U.S. cedes its leadership due to fear.

Can you share examples of successful advocacy that influenced AI legislation in favor of innovation?

In Colorado, the Rocky Mountain AI Interest Group and AI Salon rallied developers and startups to engage with legislators for the first time. Without lobbyists or insider consultants, these groups tapped into a wellspring of smart, unhappy, and motivated founders who expressed their displeasure in crisp, effective testimony and in the media – and were heard.

Similarly, in California, founders of small AI-forward companies testified passionately in the legislature and connected with the media to express urgent concern and disappointment about well-intended but terribly overbroad legislation. Like their counterparts in Colorado, these founders were non-traditional, highly motivated, and very effective.

How do you identify and engage stakeholders effectively in state-level AI policy battles?

Partly by talking to people like you, and spreading the word that legislation has been or will soon be introduced and may impact someone's livelihood, business, or opportunity. Legislators only know what they know, and what they learn by talking to lobbyists and companies they're familiar with. It's vital to engage in the process when bills are drafted and amended. Every company and organization that's building or using AI should participate before the laws and regulations are written, because after they're written it's frequently too late.

What are the most effective ways to communicate the complexities of AI to state legislators?

Advocacy exists in many forms. Whether it's meeting with legislators in person or by video, sending a letter or email, or speaking with the media – each of these is a way to make your voice heard. What's most important is clarity and simplicity, and telling your own story, which is what you know best. Different state legislatures have different rules, processes, and norms, but virtually all legislators are eager to learn and want to hear from constituents.
