
Good governance essential for enterprises deploying AI


That is great. Thanks for that detailed explanation. Since you personally focus on governance, how can enterprises balance providing safeguards for artificial intelligence and machine learning deployment while still encouraging innovation?

So balancing safeguards for AI/ML deployment with encouraging innovation can be a really difficult task for enterprises. It's large scale, and it's changing extremely fast. Nevertheless, it is critically important to have that balance. Otherwise, what's the point of the innovation? There are a couple of key strategies that can help achieve this balance. First, establish clear governance policies and procedures: review and update existing policies where they may not suit AI/ML development and deployment, and add new policies and procedures where needed, such as monitoring and continuous compliance, as I mentioned earlier. Second, involve all of the stakeholders in the AI/ML development process. That starts with the data engineers, the business, the data scientists, and also the ML engineers who deploy the models in production. Model reviewers. Business stakeholders and risk organizations. And that is what we're focusing on. We're building integrated systems that provide transparency, automation, and a good user experience from start to finish.

So all of this helps streamline the process and bring everyone together. Third, we needed to build systems that not only allow this overall workflow but also capture the data that enables automation. Oftentimes, many of the activities in the ML lifecycle process are done through different tools because they reside with different groups and departments. That leads to participants manually sharing information, reviewing, and signing off. So having an integrated system is critical. Fourth, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is really important, because if we don't monitor the models, they can even have an effect counter to their original intent. And doing this manually will stifle innovation. Model deployment requires automation, so having that is important in order to allow your models to be developed and deployed into the production environment and actually operate. It's reproducible, and it's operating in production.

It's very, very important. And you need well-defined metrics to monitor the models, and that involves the infrastructure and the model performance itself, as well as the data. Finally, provide training and education. Because it's a team sport, everyone comes from a different background and plays a different role. Having that cross-understanding of the full lifecycle process is really important. And having the education to understand what is the right data to use, and whether we are using the data appropriately for the use case, will prevent the model deployment from being rejected much later on. So, all of these, I believe, are key to balancing governance and innovation.
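The fourth point above, well-defined metrics that cover the data as well as the model, can be illustrated with one widely used data-drift metric, the population stability index (PSI). This is a generic sketch, not JPMorgan Chase's actual tooling; the function name and the 0.2 "significant drift" threshold are common conventions, not anything stated in the interview:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production feature distribution ("actual") against the
    training-time baseline ("expected"). A common heuristic treats
    PSI > 0.2 as significant drift worth investigating."""
    # Bin edges are fixed from the training (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) and division by zero.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

Running a check like this on a schedule, rather than manually, is what turns the monitoring requirement into the kind of automation described above.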

So there's another topic to discuss here, and you touched on it in your answer: how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle, from creation to governance to implementation?

Sure. So AI/ML is still fairly new, it's still evolving, but in general, people have settled on a high-level process flow: define the business problem, acquire and process the data to solve the problem, then build the model, which is model development, and then model deployment. But prior to deployment, we do a review in our company to ensure the models are developed according to the right responsible AI principles, and then there is ongoing monitoring. When people talk about the role of transparency, it's about more than the ability to capture all the metadata artifacts across the full lifecycle; all this metadata about lifecycle events must be transparent, with timestamps, so that people can know what happened. And that is how we share the information. Having this transparency is so important because it builds trust and ensures fairness. We need to make sure that the right data is used, and it facilitates explainability.

There's this expectation that models need to be explained: how does the model make its decisions? Transparency also helps support ongoing monitoring, and it can be done in different ways. The one thing that we stress very much from the beginning is understanding the AI initiative's goals, the use case goal, and the intended data use. We review that. How did you process the data? What is the data lineage and the transformation process? What algorithms are being used, and what ensemble algorithms are being used? The model specification must be documented and spelled out. What are the limitations: when should the model be used, and when should it not be used? Explainability and auditability: can we actually track how this model was produced throughout the model lineage itself? And also technology specifics such as the infrastructure and the containers involved, because these actually impact model performance; where the model is deployed; which business application is actually consuming the prediction output of the model; and who can access the decisions from the model. All of these are part of the transparency subject.
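The transparency items listed above (use case goal, intended data use, data lineage, algorithms, limitations, and a timestamped trail of lifecycle events) can be sketched as a simple audit record. This is an illustrative data structure only; the class and field names are hypothetical, not the interviewee's actual system:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    """One timestamped event in the model's audit trail."""
    stage: str   # e.g. "data_processing", "training", "review", "deployment"
    detail: str
    actor: str   # who performed the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class ModelRecord:
    """Transparency record tying together the items a reviewer needs."""
    use_case_goal: str
    intended_data_use: str
    data_lineage: list      # ordered transformation steps
    algorithms: list        # including any ensemble members
    limitations: str        # when the model should and should not be used
    events: list = field(default_factory=list)

    def log(self, stage, detail, actor):
        # Every lifecycle event is captured with an automatic UTC timestamp.
        self.events.append(LifecycleEvent(stage, detail, actor))

    def to_audit_dict(self):
        # Plain-dict form suitable for storage or sharing with reviewers.
        return asdict(self)
```

Capturing events this way, rather than in per-team tools, is what lets reviewers reconstruct what happened and when across the whole lifecycle.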

Yeah, that is quite extensive. So considering that AI is a fast-changing field with many emerging technologies like generative AI, how do teams at JPMorgan Chase keep abreast of these new innovations while also choosing when and where to deploy them?
