Today’s AI has a difficult equation to solve.


According to Gartner in 2019, “85% of AI Implementations Will Fail By 2022”. Whether this dire prediction proved to be true is irrelevant. In this post, I propose a possible explanation for why AI implementations may fail.

When using Machine Learning models or computing statistics, our ultimate goal is to base our future decisions on the expected outcomes.

The primary issue with taking action based on Machine Learning is that the very actions undertaken as a consequence of the model’s predictions change the data used to build the model, making the model’s predictions inaccurate or biased.
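A minimal sketch of this feedback loop, using a hypothetical churn scenario (the variable names, the 0.9 offer effectiveness, and the usage-based rule standing in for a trained classifier are all illustrative assumptions, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: customers with low product usage tend to churn.
n = 10_000
usage = rng.uniform(0.0, 1.0, n)
churn = rng.uniform(0.0, 1.0, n) < (0.8 - 0.6 * usage)

# Stand-in for a model trained on this data: flag low-usage customers.
at_risk = usage < 0.5
rate_before = churn[at_risk].mean()

# Acting on the predictions: retention offers prevent most churn
# among the flagged customers (assumed 90% effective here).
saved = at_risk & (rng.uniform(0.0, 1.0, n) < 0.9)
churn_after = churn & ~saved
rate_after = churn_after[at_risk].mean()

# The intervention largely erased the usage/churn relationship the
# model relied on, so a model retrained on the new data is misled.
print(f"churn rate among flagged customers before acting: {rate_before:.2f}")
print(f"churn rate among flagged customers after acting:  {rate_after:.2f}")
```

Retraining on the post-intervention data would now suggest that low usage barely predicts churn, precisely because the model’s own predictions triggered the actions that changed the outcome.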

Another issue arises when we use Machine Learning to derive feature importances and guide actions: in doing so, we frequently mistake correlations for actual causes.
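The classic confounder illustrates why: two variables can be strongly correlated only because a third variable drives both. Below is a small sketch using the textbook ice-cream-and-drownings example (the variable names and coefficients are illustrative assumptions); regressing out the confounder makes the correlation vanish:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: temperature drives both quantities independently.
n = 5_000
temperature = rng.normal(25.0, 5.0, n)
ice_cream_sales = 2.0 * temperature + rng.normal(0.0, 1.0, n)
drownings = 0.5 * temperature + rng.normal(0.0, 1.0, n)

# The raw correlation looks strong: a naive feature-importance reading
# would suggest that banning ice cream reduces drownings.
raw_corr = np.corrcoef(ice_cream_sales, drownings)[0, 1]

def residualize(y, x):
    # Remove the linear effect of x from y via least squares.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Holding the confounder fixed, the association disappears.
adjusted_corr = np.corrcoef(
    residualize(ice_cream_sales, temperature),
    residualize(drownings, temperature),
)[0, 1]

print(f"raw correlation:      {raw_corr:.2f}")
print(f"adjusted correlation: {adjusted_corr:.2f}")
```

A model that ranks `ice_cream_sales` as an important feature for predicting `drownings` is not wrong as a predictor, but acting on that importance as if it were a cause would be.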

As we can see, these are two very different perspectives. Most Machine Learning models adopt a Passive Observer standpoint, yet we use them as if they had been learned from the Action-based Observer perspective.

This series of posts will explore this problem further.

As a Statistician, Data Scientist, or Data Product Owner, have you reflected on your approach to addressing this problem, and if so, how have you tackled it?


What are your thoughts on this topic?
Let us know in the comments below.

