Reducing bias and improving safety in DALL·E 2

In April, we began previewing the DALL·E 2 research with a limited number of individuals, which has allowed us to better understand the system’s capabilities and limitations and to improve our safety systems.

During this preview phase, early users have flagged sensitive and biased images, which has helped inform and evaluate our recent mitigations.

We’re continuing to research how AI systems like DALL·E might reflect biases in their training data, and additional ways we can address them.

Through the research preview, we have taken additional steps to improve our safety systems, including:

  • Minimizing the risk of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures.
  • Making our content filters more accurate so that they are more effective at blocking prompts and image uploads that violate our content policy while still allowing creative expression.
  • Refining automated and human monitoring systems to protect against misuse.
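The filter-accuracy point above can be sketched in code. OpenAI has not published its filter implementation, so the following is a hypothetical illustration: the blocked term and the matching strategy are assumptions, chosen to show why tighter matching blocks policy-violating prompts with fewer false positives on creative ones.

```python
import re

# Hypothetical blocklist for illustration only; not OpenAI's actual policy terms.
BLOCKED_TERMS = ["gun"]

def naive_filter(prompt: str) -> bool:
    """Substring match: over-blocks, flagging innocent words that merely
    contain a blocked term."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def boundary_filter(prompt: str) -> bool:
    """Word-boundary match: blocks the term itself, not words containing it,
    so creative prompts pass through."""
    return any(
        re.search(rf"\b{re.escape(term)}\b", prompt.lower())
        for term in BLOCKED_TERMS
    )

prompt = "a Burgundy landscape at dusk"
print(naive_filter(prompt))     # True  (false positive: "Burgundy" contains "gun")
print(boundary_filter(prompt))  # False (allowed through)
```

Production systems typically layer learned classifiers on top of term matching, but the same precision-versus-recall trade-off applies: a filter that is too coarse suppresses legitimate creative expression, while one that is too narrow misses policy violations.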

These improvements have given us the confidence to invite more users to experience DALL·E.

Expanding access is an important part of deploying AI systems responsibly, since it allows us to learn more about real-world use and continue to iterate on our safety systems.
