Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at spurring more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren said.
The new bill would “require a competitive award process” for those contracts, banning the Pentagon from handing out “no-bid” awards to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time, through a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)
While Big Tech is being hit with antitrust investigations, including the ongoing lawsuit against Google over its dominance in search and a newly opened investigation into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying.
On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, alleging that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images, two claims the FTC says are false. (The company could not support the bias claim, and the system was trained on only 100,000 images, the FTC says.)
A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems have failed in consequential situations, such as a 2022 incident in which they failed to detect a seven-inch knife that was ultimately used to stab a student.
Those actions add to the complaints the FTC filed back in September against a number of AI companies, including one that sold a tool for generating fake product reviews and another selling “AI lawyer” services.