The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo told MIT Technology Review that he couldn't discuss the details of the contract, but confirmed that it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM).
The filing cites data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024. "The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently," the filing reads.
The first priority of child exploitation investigators is to find and stop any abuse currently happening, but the flood of AI-generated CSAM has made it difficult for investigators to know whether images depict a real victim currently at risk. A tool that could successfully flag real victims would be a huge help as they try to prioritize cases.
Identifying AI-generated images "ensures that investigative resources are focused on cases involving real victims, maximizing the program's impact and safeguarding vulnerable individuals," the filing reads.
Hive AI offers AI tools that create videos and images, as well as a range of content moderation tools that can flag violence, spam, and sexual material and even identify celebrities. In December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military.