The challenge asks for two different models. The first, a task for those with intermediate skills, is a model that identifies hateful images; the second, considered an advanced challenge, is a model that attempts to fool the first one. "That actually mimics how it works in the real world," says Chowdhury. "The do-gooders make one approach, and then the bad guys make an approach." The goal is to engage machine-learning researchers on the topic of mitigating extremism, which could lead to the creation of new models that can effectively screen for hateful images.
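The detector-versus-evader dynamic described above can be illustrated with a minimal sketch. The toy below is an assumption for illustration only, not the challenge's actual setup: it stands in a simple logistic-regression "detector" for a real image classifier, then applies a gradient-sign-style evasion step (the idea behind fast-gradient-sign attacks) to push an input below the detector's threshold.

```python
import numpy as np

# Toy stand-in "detector": logistic regression over 16 features.
# (Hypothetical; real content classifiers are deep networks.)
rng = np.random.default_rng(0)
w = rng.normal(size=16)  # fixed detector weights
b = 0.0

def detect(x):
    """Probability that the detector flags input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input the detector currently flags with high confidence:
# a unit vector aligned with the detector's weights.
x = w / np.linalg.norm(w)
p_before = detect(x)

# Evasion step: perturb each feature against the gradient of the
# flag probability, within a small budget eps.
eps = 0.2
p = detect(x)
grad = p * (1.0 - p) * w          # d(flag probability)/dx for logistic regression
x_adv = x - eps * np.sign(grad)   # step that lowers the flag probability

p_after = detect(x_adv)
print(f"flag probability before attack: {p_before:.3f}")
print(f"flag probability after attack:  {p_after:.3f}")
```

The same arms race plays out at scale: the attacking model searches for perturbations the detector misses, and the detector is retrained on those adversarial examples.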
A core challenge of the project is that hate-based propaganda can be highly dependent on its context. And someone who doesn't have a deep understanding of certain symbols or signifiers may not be able to tell what even qualifies as propaganda for a white nationalist group.
"If [the model] never sees an example of a hateful image from a part of the world, then it's not going to be any good at detecting it," says Jimmy Lin, a professor of computer science at the University of Waterloo, who is not affiliated with the bounty program.
This effect is amplified around the globe, since many models lack extensive knowledge of cultural contexts. That's why Humane Intelligence decided to partner with a non-US organization for this particular challenge. "Most of these models are often fine-tuned to US examples, which is why it's important that we're working with a Nordic counterterrorism group," says Chowdhury.
Lin, though, warns that solving these problems may require more than algorithmic changes. "We have models that generate fake content. Well, can we develop other models that can detect fake generated content? Yes, that's certainly one approach to it," he says. "But I think overall, in the long run, training, literacy, and education efforts are actually going to be more helpful and have a longer-lasting impact. Because then you're not going to be subjected to this cat-and-mouse game."
The challenge will run until November 7, 2024. Two winners will be chosen, one for the intermediate challenge and another for the advanced; they'll receive $4,000 and $6,000, respectively. Participants will also have their models reviewed by Revontulet, which may decide to add them to its current suite of tools to combat extremism.