Your AI Model Is Not Objective


Opinion

Where we explore the subjectiveness in AI models and why you should care

I recently attended a conference, and a sentence on one of the slides really struck me. The slide mentioned that they were developing an AI model to replace a human decision, and that the model was, quote, “objective” in contrast to the human decision. After thinking about it for a while, I vehemently disagreed with that statement, as I feel it tends to isolate us from the people for whom we create these models. This in turn limits the impact we can have.

In this opinion piece I want to clarify where my disagreement with AI and objectiveness comes from, and why the focus on “objective” poses a problem for AI researchers who want to have impact in the real world. It reflects insights I have gathered from the research I have done recently on why many AI models do not reach effective implementation.

Photo by Vlad Hilitanu on Unsplash

To get my point across, we need to agree on what exactly we mean by objectiveness. In this essay I use the following definition of objectiveness:

expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations

For me, this definition speaks to something I deeply love about math: within the scope of a mathematical system, we can reason objectively about what the truth is and how things work. This appealed strongly to me, as I found social interactions and feelings to be very difficult. I felt that if I worked hard enough I could understand the math problem, while the real world was far more intimidating.

As machine learning and AI are built using math (mostly algebra), it is tempting to extend this same objectiveness to this context. I do think that, as a mathematical system, machine learning can be seen as objective. If I lower the learning rate, we should mathematically be able to predict what the impact on the resulting AI will be. However, with our ML models becoming larger and much more black box, configuring them has become more and more an art instead of a science. Intuitions on how to improve the performance of a model can be a powerful tool for the AI researcher. This sounds awfully close to “personal feelings, prejudices, or interpretations”.
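To make the objective half of that claim concrete, here is a minimal sketch with a toy quadratic loss I made up for illustration. Within the mathematical system, the effect of lowering the learning rate is exactly predictable:

```python
# Toy example: minimize loss(w) = w**2 with plain gradient descent.
# Within this mathematical system the learning rate's effect is fully
# predictable: each update is w <- w - lr * 2w = w * (1 - 2 * lr).

def gradient_descent(lr, steps=20, w=10.0):
    for _ in range(steps):
        grad = 2 * w      # derivative of w**2
        w -= lr * grad    # standard gradient descent update
    return w

print(gradient_descent(lr=0.4))   # ~1e-13: converges quickly
print(gradient_descent(lr=0.01))  # ~6.7: much slower, exactly as predicted
```

Nothing subjective here. It is once the model leaves this tidy system that the trouble starts.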

But where the subjectiveness really kicks in is where the AI model interacts with the real world. A model can predict the probability that a patient has cancer, but how that prediction feeds into the actual medical decisions and treatment involves a lot of feelings and interpretations. What will the impact of treatment be on the patient, and is the treatment worth it? What is the mental state of the patient, and can they bear the treatment?
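A small sketch shows where the interpretations sneak in even here. The numbers are hypothetical, but the structure is general: turning a predicted probability into a treat-or-not decision requires weighing the harm of a missed cancer against the harm of unnecessary treatment, and that weighing is a value judgement, not math:

```python
# Hypothetical costs: how bad is a missed cancer relative to an
# unnecessary treatment? The model cannot answer this; people must.

def decide_treatment(p_cancer, cost_missed=10.0, cost_unnecessary=1.0):
    # Treat when the expected harm of not treating exceeds that of treating:
    #   p * cost_missed >= (1 - p) * cost_unnecessary
    threshold = cost_unnecessary / (cost_unnecessary + cost_missed)
    return p_cancer >= threshold

# The same model output yields different decisions under different values.
print(decide_treatment(0.15))                   # True  (threshold ~0.09)
print(decide_treatment(0.15, cost_missed=2.0))  # False (threshold ~0.33)
```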

But the subjectiveness does not end with the application of the outcome of the AI model in the real world. In how we build and configure a model, a lot of choices have to be made that interact with reality:

  • What data do we include in the model, and what do we leave out? Which patients do we decide are outliers?
  • Which metric do we use to evaluate our model? How does this influence the model we end up creating? Which metric steers us towards a real-world solution? Is there a metric at all that does this? (The sketch after this list makes this concrete.)
  • What do we define the actual problem to be that our model should solve? This will influence the choices we make regarding the configuration of the AI model.
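As an illustration of the metric question, here is a sketch with made-up predictions for ten patients: two models with identical accuracy can behave completely differently on the cases that matter.

```python
from sklearn.metrics import accuracy_score, recall_score

# Ten patients, two with cancer (1 = cancer). Model A flags no one;
# model B catches both cancers at the price of two false alarms.
y_true  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
model_a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
model_b = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

print(accuracy_score(y_true, model_a), recall_score(y_true, model_a))  # 0.8 0.0
print(accuracy_score(y_true, model_b), recall_score(y_true, model_b))  # 0.8 1.0
```

Judged on accuracy, the two models are interchangeable; judged on recall, they are worlds apart. Which metric is “right” depends entirely on what we decide matters in the real-world problem.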

So, where the real world engages with AI models, quite a bit of subjectiveness is introduced. This applies both to the technical choices we make and to how the outcome of the model interacts with the real world.

In my experience, one of the key limiting factors in implementing AI models in the real world is the lack of close collaboration with stakeholders, be they doctors, employees, ethicists, legal experts, or consumers. This lack of cooperation is partly due to the isolationist tendencies I see in many AI researchers. They work on their models, ingest knowledge from the internet and papers, and try to create the AI model to the best of their abilities. But they are focused on the technical side of the AI model, and exist in their mathematical bubble.

I feel that the conviction that AI models are objective reassures the AI researcher that this isolationism is fine: the objectiveness of the model means that it can simply be applied in the real world. But the real world is full of “feelings, prejudices and interpretations”, so an AI model that impacts this real world also interacts with these “feelings, prejudices and interpretations”. If we want to create a model that has impact in the real world, we need to incorporate the subjectiveness of the real world. And this requires building a strong community of stakeholders around your AI research that explores, exchanges, and debates all these “feelings, prejudices and interpretations”. It requires us AI researchers to come out of our self-imposed mathematical shell.

Note: if you want to read more about doing research in a more holistic and collaborative way, I highly recommend the work of Tineke Abma, for instance this paper.

If you enjoyed this article, you might also enjoy some of my other articles:
