How to create, release, and share generative AI responsibly

“If we really want to deal with these issues, we’ve got to get serious,” says Farid. For instance, he wants cloud service providers and app stores such as those operated by Amazon, Microsoft, Google, and Apple, which are all part of the PAI, to ban services that allow people to use deepfake technology with the intent to create nonconsensual sexual imagery. Watermarks on all AI-generated content should also be mandated, not voluntary, he says.

Another important thing that is missing is how the AI systems themselves could be made more responsible, says Ilke Demir, a senior research scientist at Intel who leads the company’s work on the responsible development of generative AI. This could include more details on how the AI model was trained, what data went into it, and whether generative AI models have any biases.

The guidelines make no mention of ensuring that there is no toxic content in the data sets of generative AI models. “It’s one of the most significant ways these systems cause harm,” says Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

The guidelines include a list of harms that these companies want to prevent, such as fraud, harassment, and disinformation. But a generative AI model that always creates white people is also doing harm, and that isn’t currently listed, adds Demir.

Farid raises a more fundamental issue. Since the companies acknowledge that the technology could lead to some serious harms and offer ways to mitigate them, “why aren’t they asking the question ‘Should we do this in the first place?’”
