America’s AI watchdog is losing its bite


Most Americans encounter the Federal Trade Commission only when they've been scammed: it handles identity theft, fraud, and stolen data. Through the Biden administration, the agency went after AI companies for scamming customers with deceptive advertising or harming people by selling irresponsible technologies. With yesterday's announcement of President Trump's AI Action Plan, that era may now be over.

In the final months of the Biden administration under chair Lina Khan, the FTC levied a series of high-profile fines and actions against AI companies for overhyping their technology and bending the truth, or in some cases making claims that were entirely false.

It found that the security giant Evolv lied about the accuracy of its AI-powered security checkpoints, which are used in stadiums and schools but failed to catch a seven-inch knife that was ultimately used to stab a student. It went after the facial recognition company Intellivision, saying the company made unfounded claims that its tools operated without gender or racial bias. It fined startups promising bogus "AI lawyer" services and one that sold fake product reviews generated with AI.

These actions didn't result in fines that crippled the companies, but they did stop them from making false statements and offered customers ways to recover their money or get out of contracts. In each case, the FTC found, everyday people had been harmed by AI companies that let their technologies run amok.

The plan released by the Trump administration yesterday suggests it believes these actions went too far. In a section about removing "red tape and onerous regulation," the White House says it will review all FTC actions taken under the Biden administration "to ensure that they do not advance theories of liability that unduly burden AI innovation." In the same section, the White House says it will withhold AI-related federal funding from states with "burdensome" regulations.

This move by the Trump administration is the latest in its evolving attack on the agency, which provides a major route of redress for people harmed by AI in the US. It's likely to result in faster deployment of AI with fewer checks on accuracy, fairness, or consumer harm.

Under Khan, a Biden appointee, the FTC found fans in unexpected places. Progressives called for it to break up monopolistic behavior in Big Tech, but some in Trump's orbit, including Vice President JD Vance, also supported Khan in her fights against tech elites, albeit with the different goal of ending their supposed censorship of conservative speech.

But in January, with Khan out and Trump back in the White House, this dynamic all but collapsed. Trump issued an executive order in February promising to "rein in" independent agencies like the FTC that wield influence without consulting the president. The following month, he began taking that vow to its legal limits, and past them.

In March, he fired the only two Democratic commissioners at the FTC. On July 17 a federal court ruled that one of those firings, of commissioner Rebecca Slaughter, was illegal given the independence of the agency, which restored Slaughter to her position. (The other fired commissioner, Alvaro Bedoya, opted to resign rather than battle the dismissal in court, so his case was dismissed.) Slaughter now serves as the sole Democrat.

In naming the FTC in its action plan, the White House now goes a step further, painting the agency's actions as a major obstacle to US victory in the "arms race" to develop better AI more quickly than China. It promises not only to change the agency's tack moving forward but to review, and perhaps even repeal, AI-related sanctions it has imposed in the past four years.

How might this play out? Leah Frazier, who worked at the FTC for 17 years before leaving in May and served as an advisor to Khan, says it's helpful to think about the agency's actions against AI companies as falling into two areas, each with very different levels of support across political lines.

The first is about cases of deception, where AI companies mislead consumers. Consider the case of Evolv, or a recent case announced in April in which the FTC alleges that a company called Workado, which offers a tool to detect whether something was written with AI, doesn't have the evidence to back up its claims. Deception cases enjoyed fairly bipartisan support during her tenure, Frazier says.

"Then there are cases about responsible use of AI, and those didn't seem to enjoy as much popular support," adds Frazier, who now directs the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. These cases don't allege deception; rather, they charge that companies have deployed AI in a way that harms people.

The most serious of these, which resulted in perhaps the most significant AI-related action ever taken by the FTC and was investigated by Frazier, was announced in 2023. The FTC banned Rite Aid from using AI facial recognition in its stores after it found the technology falsely flagged people, particularly women and people of color, as shoplifters. "Acting on false positive alerts," the FTC wrote, Rite Aid's employees "followed consumers around its stores, searched them, ordered them to leave, [and] called the police to confront or remove consumers."

The FTC found that Rite Aid failed to protect people from these mistakes, didn't monitor or test the technology, and didn't properly train employees on how to use it. The company was banned from using facial recognition for five years.

This was a big deal. The action went beyond fact-checking the deceptive promises made by AI companies to hold Rite Aid accountable for how its AI technology harmed consumers. These responsible-AI cases are the ones Frazier imagines might disappear under the new FTC, particularly if they involve testing AI models for bias.

"There will be fewer, if any, enforcement actions about how companies are deploying AI," she says. The White House's broader philosophy toward AI, referred to in the plan, is a "try first" approach that aims to propel faster AI adoption everywhere from the Pentagon to doctors' offices. The lack of FTC enforcement that's likely to ensue, Frazier says, "is dangerous for the public."
