What We Still Don’t Understand About Machine Learning


It’s surprising how many of the basic concepts in machine learning remain poorly understood by researchers: despite being fundamental and routinely used, they still appear mysterious. One of the fun things about machine learning is that we build things that work, and only afterwards figure out why they work at all!

Here, I aim to explore the unknown territory around some machine learning concepts, to show that while these ideas can seem basic, they are in fact built from layers upon layers of abstraction. This helps us practice questioning the depth of our own knowledge.

In this article, we explore several key phenomena in deep learning that challenge our traditional understanding of neural networks.

  • We start with Batch Normalization, whose underlying mechanisms are still not fully understood.
  • We examine the counterintuitive observation that overparameterized models often generalize better, contradicting classical machine learning theory.
  • We explore the implicit regularization effects of gradient descent, which appear to naturally bias neural networks towards simpler, more…
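As a concrete reference point for the first bullet, here is a minimal NumPy sketch of the Batch Normalization forward pass at training time. The function name, shapes, and parameters are illustrative, not taken from the original article; real frameworks also track running statistics for inference, which is omitted here.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension (axis 0),
    # then apply a learned scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # batch of 64, 10 features
out = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0))  # per-feature mean ≈ 0
print(out.std(axis=0))   # per-feature std  ≈ 1
```

The curious part, which the article alludes to, is that even though the arithmetic above is trivial, there is still no settled account of *why* inserting it between layers stabilizes and accelerates training so dramatically.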