“This work represents a major step forward in strengthening our information advantage as we combat sophisticated disinformation campaigns and synthetic-media threats,” says Bustamante. Hive was chosen out of a pool of 36 firms to test its deepfake detection and attribution technology with the DOD. The contract could enable the department to detect and counter AI deception at scale.
Defending against deepfakes is “existential,” says Kevin Guo, Hive AI’s CEO. “This is the evolution of cyberwarfare.”
Hive’s technology has been trained on a large amount of content, some AI-generated and some not. It picks up on signals and patterns in AI-generated content that are invisible to the human eye but can be detected by an AI model.
“It turns out that every image generated by one of these generators has that sort of pattern in there if you know where to look for it,” says Guo. The Hive team continually tracks new models and updates its technology accordingly.
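To make the general idea concrete, here is a minimal sketch of the technique the article describes: training a binary classifier to separate real images from AI-generated ones so that it learns artifacts a human viewer would miss. The model choice (a stock ResNet-18), the folder layout, and the training loop are illustrative assumptions, not Hive’s actual system.

```python
# Illustrative sketch only: a binary "real vs. AI-generated" image classifier.
# The architecture, data layout, and hyperparameters are assumptions; Hive's
# production detector is proprietary and not described in the article.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: data/real/... and data/generated/...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A stock ResNet-18 with a two-class head stands in for a real detector.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: real, AI-generated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In this framing, “keeping track of recent models” simply means refreshing the generated half of the training data as new image generators appear, since each generator leaves its own statistical fingerprint.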
The tools and methodologies developed through this initiative have the potential to be adapted for broader use, not only addressing defense-specific challenges but also safeguarding civilian institutions against disinformation, fraud, and deception, the DOD said in a press release.
Hive’s technology provides state-of-the-art performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive AI’s deepfake technology, agrees but points out that it is far from foolproof.
“Hive is certainly better than most of the commercial entities and some of the research techniques that we tried, but we also showed that it is not at all hard to circumvent,” Zhao says. The team found that adversaries could tamper with images in ways that bypassed Hive’s detection.
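For a sense of how such tampering can work, below is a hedged sketch of one well-known evasion technique: a single fast-gradient-sign (FGSM) step that nudges an image toward a detector’s “real” class. The detector interface, the label convention, and the epsilon value are assumptions for illustration; the article does not specify which attack Zhao’s team actually used.

```python
# Illustrative sketch of adversarial evasion against a deepfake detector.
# This is a generic targeted FGSM step, not the specific attack from Zhao's
# evaluation; the detector API and label 0 = "real" are assumed conventions.
import torch
import torch.nn.functional as F

def fgsm_evade(detector, image, epsilon=0.01):
    """Perturb `image` (a 1xCxHxW tensor in [0, 1]) so the detector
    leans toward the assumed 'real' class (label 0)."""
    image = image.clone().detach().requires_grad_(True)
    logits = detector(image)
    # Targeted loss: small when the detector calls the image "real".
    loss = F.cross_entropy(logits, torch.tensor([0]))
    loss.backward()
    # Step *down* the target loss, keeping pixels in a valid range.
    adv = (image - epsilon * image.grad.sign()).clamp(0, 1)
    return adv.detach()
```

A perturbation this small is typically invisible to a viewer yet can flip a classifier’s verdict, which is the crux of Zhao’s caution that detection alone is not foolproof.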