No free lunch (NFL) theorem

This short post explains why there is no such thing as a free lunch in the ML world.

“All models are wrong but some are useful”

– George Box

In ML, we construct models for a problem and, by comparing them, try to find the best one. However, no model is universally best across all problems; this result is known as the No Free Lunch (NFL) theorem.
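Stated a little more precisely (this follows Wolpert's supervised-learning formulation, which the post itself does not spell out): for any two learning algorithms $a_1$ and $a_2$, if performance is averaged uniformly over all possible target functions $f$, their expected off-training-set errors are identical,

\[
\sum_{f} \mathbb{E}\big[\text{error} \mid f, d, a_1\big] \;=\; \sum_{f} \mathbb{E}\big[\text{error} \mid f, d, a_2\big],
\]

where $d$ denotes the training data. In other words, any advantage an algorithm enjoys on some problems is exactly balanced by a disadvantage on others.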

Intuitively, the assumptions that make a model the best choice for one problem may not hold for another problem, so the best model for our current task can be completely different from the best model for a different task.

Thus, ML needs many different methods and models, and we need to be able to compare them to find a good one for our purposes.
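To make the comparison step concrete, here is a minimal sketch (not part of the original post; it assumes scikit-learn is installed) that cross-validates two off-the-shelf classifiers on two built-in datasets. The specific models and datasets are illustrative choices only; the point is that which model comes out ahead depends on the problem.

    # A toy "model comparison": cross-validate two classifiers on two datasets.
    # Which one scores higher depends on the dataset, which is the practical
    # face of the NFL theorem.
    from sklearn.datasets import load_breast_cancer, load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    models = {
        "logistic regression": LogisticRegression(max_iter=5000),
        "5-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    }
    datasets = {
        "breast cancer": load_breast_cancer(return_X_y=True),
        "digits": load_digits(return_X_y=True),
    }

    for data_name, (X, y) in datasets.items():
        for model_name, model in models.items():
            # Mean 5-fold cross-validated accuracy as the comparison criterion.
            score = cross_val_score(model, X, y, cv=5).mean()
            print(f"{data_name:>13} | {model_name:<20} | accuracy = {score:.3f}")

Running this, one classifier typically does better on one dataset and worse on the other; neither dominates, and no comparison on these two datasets tells us which will win on a third.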

Thanks for reading!

References

Bishop, Christopher M. Pattern Recognition and Machine Learning. Springer, 2006.

 
