Linear

Non-Linearity: Can Linear Regression Compete With Gradient Boosting?

A few weeks ago, I published a post on LinkedIn. The post was based on the figure below, comparing the predictions made by two models: Linear Regression and CatBoost. This sparked a discussion, and I discovered the...

Bayesian Linear Regression: A Complete Beginner’s guide

A workflow and code walkthrough for building a Bayesian regression model in Stan. Note: See my previous article for a practical discussion of why Bayesian modeling may be the right choice for your task. This...

Linear Programming Optimization: The Simplex Method

Part 3: The algorithm under the hood. Up until now, this series has covered the fundamentals of linear programming. In this article, we'll move from basic concepts into the details under the...

Linear Programming Optimization: Foundations

Part 1 - Basic Concepts and Examples. Linear programming is a powerful optimization technique used to improve decision making in many domains. This is the first part of a multi-part series that will...
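Though the series derives the simplex method by hand, a toy linear program can be sketched with `scipy.optimize.linprog` (an assumption for illustration; the articles may use different tooling). Here, a small production problem: maximize 3x + 2y subject to x + y ≤ 4 and x ≤ 2, with x, y ≥ 0:

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize 3x + 2y
c = [-3, -2]
A_ub = [[1, 1],   # x + y <= 4
        [1, 0]]   # x     <= 2
b_ub = [4, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal plan (x, y) and the maximized objective
```

The optimum sits at a vertex of the feasible region (x = 2, y = 2, objective 10), which is exactly the behavior the simplex method exploits.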

8 Plots for Explaining Linear Regression to a Layman

Explain regression to a non-technical audience with residual, weight, effect and SHAP plots. “And don’t use any math” was my manager’s instruction. How else am I supposed to explain how regression works!? Little did I know...

A bird’s eye view of linear algebra: the fundamentals

We think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury. Linear algebra is a fundamental discipline underlying anything one can do with...

An Accessible Derivation of Linear Regression

The math behind the model, from additive assumptions to pseudoinverse matrices. Technical disclaimer: It is possible to derive the model without normality assumptions. We’ll go down this route since it’s straightforward enough to grasp and...
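The pseudoinverse route the teaser refers to can be sketched numerically on synthetic data (variable names here are illustrative, not from the article):

```python
import numpy as np

# Synthetic regression problem: y = X w_true + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=100)

# Ordinary least squares via the Moore-Penrose pseudoinverse: w = X^+ y
w_hat = np.linalg.pinv(X) @ y
print(w_hat)  # close to w_true
```

The same solution is returned by `np.linalg.lstsq`, which is preferred in practice for numerical stability.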

Theoretical Deep Dive into Linear Regression. Sections: The Data Generation Process · What Are We Actually Minimizing? · Minimizing the Loss Function · Conclusion

You can use other prior distributions on your parameters to create more interesting regularizations. You could even say that your parameters w are normally distributed with some correlation matrix Σ. Let...
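As a sketch of that idea: under the assumption of Gaussian noise with variance σ² and a prior w ~ N(0, Σ), the MAP estimate is a generalized ridge solution, w = (XᵀX + σ²Σ⁻¹)⁻¹Xᵀy (the numbers below are synthetic, not from the article):

```python
import numpy as np

# Synthetic data with two correlated-prior weights
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
w_true = np.array([1.0, 2.0])
y = X @ w_true + rng.normal(scale=0.5, size=200)

sigma2 = 0.25                    # assumed noise variance
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])   # prior correlation between the two weights

# MAP estimate under w ~ N(0, Sigma): generalized ridge regression
w_map = np.linalg.solve(X.T @ X + sigma2 * np.linalg.inv(Sigma), X.T @ y)
print(w_map)
```

With Σ = τ²I this reduces to ordinary ridge regression, which is the standard special case of this construction.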
