[MUSIC] In the last module, we talked about the potential for high complexity models to become overfit to the data, and we also discussed the idea of a bias-variance tradeoff, where high complexity models can have very low bias but high variance, whereas low complexity models have high bias but low variance. We said that we want to trade off between bias and variance to get to that sweet spot of good predictive performance. In this module, what we're going to do is talk about a way to automatically balance between bias and variance using something called ridge regression.

So let's recall this issue of overfitting in the context of polynomial regression. Remember, this is our polynomial regression model. If we fit some low order polynomial to our data, we might get a fit that looks like the following; this is just a quadratic fit to the data. But once we get to a much higher order polynomial, we can get these really wild fits to our training observations. Again, this is an instance of a high variance model, but we refer to this fit as being overfit, because it is very, very well tuned to our training observations yet doesn't generalize well to other observations we might see.

Previously, we discussed a very formal notion of what it means for a model to be overfit: a model is overfit if its training error is less than the training error of another model whose true error is actually smaller. Hopefully you remember that from the last module. But a question we have now is: is there some type of quantitative measure that's indicative of when a model is overfit? To see this, let's look at the following demo, where what we're going to show is that when models become overfit, the estimated coefficients of those models tend to become really, really large in magnitude. [MUSIC]
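As a rough illustration of that last point (this is a minimal NumPy sketch of my own, not the course's demo, and the synthetic sine data and degrees chosen are assumptions): fitting a quadratic and a much higher order polynomial to the same noisy training points shows the higher order fit driving training error down while its coefficients blow up in magnitude.

```python
import numpy as np

# Synthetic training data: a noisy sine curve (purely illustrative).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 20))
y = np.sin(4 * x) + rng.normal(scale=0.2, size=x.size)

for degree in (2, 16):
    # Least-squares polynomial fit of the given degree.
    coefs = np.polyfit(x, y, deg=degree)
    residuals = y - np.polyval(coefs, x)
    train_rss = np.sum(residuals ** 2)
    print(f"degree {degree:2d}: training RSS = {train_rss:.4f}, "
          f"max |coefficient| = {np.max(np.abs(coefs)):.2e}")

# Typical behavior: the degree-16 fit achieves a lower training RSS,
# but its largest coefficient is orders of magnitude bigger than the
# quadratic's -- the quantitative symptom of overfitting that ridge
# regression will penalize.
```

The exact numbers depend on the random data, but the pattern of very large coefficient magnitudes accompanying the overfit model is the quantitative signal the upcoming demo is meant to highlight.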