Grace Yee is the Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe, driving global, organization-wide work around ethics and developing processes, tools, trainings, and other resources to assist be sure that Adobe’s industry-leading AI innovations continually evolve according to Adobe’s core values and ethical principles. Grace advances Adobe’s commitment to constructing and using technology responsibly, centering ethics and inclusivity in the entire company’s work developing AI. As a part of this work, Grace oversees Adobe’s AI Ethics Committee and Review Board, which makes recommendations to assist guide Adobe’s development teams and reviews recent AI features and products to make sure they live as much as Adobe’s principles of accountability, responsibility and transparency. These principles help ensure we bring our AI powered features to market while mitigating harmful and biased outcomes. Grace moreover works with the policy team to drive advocacy helping to shape public policy, laws, and regulations around AI for the advantage of society.
As a part of Adobe’s commitment to accessibility, Grace helps be sure that Adobe’s products are inclusive of and accessible to all users, in order that anyone can create, interact and have interaction with digital experiences. Under her leadership, Adobe works with government groups, trade associations and user communities to advertise and advance accessibility policies and standards, driving impactful industry solutions.
Are you able to tell us about Adobe’s journey over the past five years in shaping AI Ethics? What key milestones have defined this evolution, especially within the face of rapid advancements like generative AI?
Five years ago, we formalized our AI Ethics process by establishing our AI Ethics principles of accountability, responsibility, and transparency, which function the inspiration for our AI Ethics governance process. We assembled a various, cross-functional team of Adobe employees from around the globe to develop actionable principles that may stand the test of time.
From there, we developed a strong review process to discover and mitigate potential risks and biases early within the AI development cycle. This multi-part assessment has helped us discover and address features and products that might perpetuate harmful bias and stereotypes.
As generative AI emerged, we adapted our AI Ethics assessment to handle recent ethical challenges. This iterative process has allowed us to remain ahead of potential issues, ensuring our AI technologies are developed and deployed responsibly. Our commitment to continuous learning and collaboration with various teams across the corporate has been crucial in maintaining the relevance and effectiveness of our AI Ethics program, ultimately enhancing the experience we deliver to our customers and promoting inclusivity.
How do Adobe’s AI Ethics principles—accountability, responsibility, and transparency—translate into day by day operations? Are you able to share any examples of how these principles have guided Adobe’s AI projects?
We adhere to Adobe’s AI Ethics commitments in our AI-powered features by implementing robust engineering practices that ensure responsible innovation, while repeatedly gathering feedback from our employees and customers to enable essential adjustments.
Latest AI features undergo an intensive ethics assessment to discover and mitigate potential biases and risks. Once we introduced Adobe Firefly, our family of generative AI models, it underwent evaluation to mitigate against generating content that might perpetuate harmful stereotypes. This evaluation is an iterative process that evolves based on close collaboration with product teams, incorporating feedback and learnings to remain relevant and effective. We also conduct risk discovery exercises with product teams to grasp potential impacts to design appropriate testing and feedback mechanisms.
How does Adobe address concerns related to bias in AI, especially in tools utilized by a worldwide, diverse user base? Could you give an example of how bias was identified and mitigated in a particular AI feature?
We’re repeatedly evolving our AI Ethics assessment and review processes in close collaboration with our product and engineering teams. The AI Ethics assessment we had a couple of years ago is different than the one now we have now, and I anticipate additional shifts in the longer term. This iterative approach allows us to include recent learnings and address emerging ethical concerns as technologies like Firefly evolve.
For instance, after we added multilingual support to Firefly, my team noticed that it wasn’t delivering the intended output and a few words were being blocked unintentionally. To mitigate this, we worked closely with our internationalization team and native speakers to expand our models and canopy country-specific terms and connotations.
Our commitment to evolving our assessment approach as technology advances is what helps Adobe balance innovation with ethical responsibility. By fostering an inclusive and responsive process, we ensure our AI technologies meet the best standards of transparency and integrity, empowering creators to make use of our tools with confidence.
Along with your involvement in shaping public policy, how does Adobe navigate the intersection between rapidly changing AI regulations and innovation? What role does Adobe play in shaping these regulations?
We actively engage with policymakers and industry groups to assist shape policy that balances innovation with ethical considerations. Our discussions with policymakers give attention to our approach to AI and the importance of developing technology to reinforce human experiences. Regulators seek practical solutions to handle current challenges and by presenting frameworks like our AI Ethics principles—developed collaboratively and applied consistently in our AI-powered features—we foster more productive discussions. It’s crucial to bring concrete examples to the table that display how our principles work in motion and to indicate real-world impact, versus talking through abstract concepts.
What ethical considerations does Adobe prioritize when sourcing training data, and the way does it be sure that the datasets used are each ethical and sufficiently robust for the AI’s needs?
At Adobe, we prioritize several key ethical considerations when sourcing training data for our AI models. As a part of our effort to design Firefly to be commercially secure, we trained it on dataset of licensed content similar to Adobe Stock, and public domain content where copyright has expired. We also focused on the range of the datasets to avoid reinforcing harmful biases and stereotypes in our model’s outputs. To realize this, we collaborate with diverse teams and experts to review and curate the info. By adhering to those practices, we attempt to create AI technologies that usually are not only powerful and effective but additionally ethical and inclusive for all users.
In your opinion, how necessary is transparency in communicating to users how Adobe’s AI systems like Firefly are trained and how much data is used?
Transparency is crucial in the case of communicating to users how Adobe’s generative AI features like Firefly are trained, including the sorts of data used. It builds trust and confidence in our technologies by ensuring users understand the processes behind our generative AI development. By being open about our data sources, training methodologies, and the moral safeguards now we have in place, we empower users to make informed decisions about how they interact with our products. This transparency not only aligns with our core AI Ethics principles but additionally fosters a collaborative relationship with our users.
As AI continues to scale, especially generative AI, what do you think that can be essentially the most significant ethical challenges that corporations like Adobe will face within the near future?
I imagine essentially the most significant ethical challenges for corporations like Adobe are mitigating harmful biases, ensuring inclusivity, and maintaining user trust. The potential for AI to inadvertently perpetuate stereotypes or generate harmful and misleading content is a priority that requires ongoing vigilance and robust safeguards. For instance, with recent advances in generative AI, it’s easier than ever for “bad actors” to create deceptive content, spread misinformation and manipulate public opinion, undermining trust and transparency.
To handle this, Adobe founded the Content Authenticity Initiative (CAI) in 2019 to construct a more trustworthy and transparent digital ecosystem for consumers. The CAI implements our solution to construct trust online– called Content Credentials. Content Credentials include “ingredients” or necessary information similar to the creator’s name, the date a picture was created, what tools were used to create a picture and any edits that were made along the best way. This empowers users to create a digital chain of trust and authenticity.
As generative AI continues to scale, it would be much more necessary to advertise widespread adoption of Content Credentials to revive trust in digital content.
What advice would you give to other organizations which are just beginning to take into consideration ethical frameworks for AI development?
My advice could be to start by establishing clear, easy, and practical principles that may guide your efforts. Often, I see corporations or organizations focused on what looks good in theory, but their principles aren’t practical. The explanation why our principles have stood the test of time is because we designed them to be actionable. Once we assess our AI powered features, our product and engineering teams know what we’re on the lookout for and what standards we expect of them.
I’d also recommend organizations come into this process knowing it will be iterative. I may not know what Adobe goes to invent in five or 10 years but I do know that we’ll evolve our assessment to satisfy those innovations and the feedback we receive.