[MUSIC] So in summary, we've presented this concept of ridge regression, which is a regularized form of standard linear regression. It allows us to account for having lots and lots of features in a very straightforward way, both intuitively and algorithmically, as we've explored in this module. And what ridge regression allows us to do is automatically perform this bias-variance tradeoff. So we thought about how to perform ridge regression for a specific value of lambda, and then we talked about this method of cross validation in order to select the actual lambda we're gonna use in the model we make predictions with. So in summary, we've described why ridge regression might be a reasonable thing to do, motivating the magnitude term that ridge regression introduces: penalizing the magnitude of the coefficients makes sense because overfit models tend to have very large magnitude coefficients. Then we talked about the actual ridge regression objective and how it balances fit with the magnitude of these coefficients. We talked about how to fit the model, both with a closed-form solution and with gradient descent. Then we talked about how to choose our value of lambda using cross validation, a method that generalizes well beyond ridge regression, and beyond regression altogether. And finally, we talked about how to deal with the intercept term, if you wanna handle that specifically. [MUSIC]
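As a minimal sketch of the ideas recapped above (not the course's own implementation), the snippet below fits ridge regression with the closed-form solution and chooses lambda by simple k-fold cross validation; the function names, the synthetic data, and the lambda grid are all illustrative assumptions.

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y.
    Note: in practice the intercept column is often left unpenalized."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, k=5):
    """Average validation mean squared error for this lambda over k folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for i in range(k):
        val = folds[i]
        train = np.hstack([folds[j] for j in range(k) if j != i])
        w = fit_ridge(X[train], y[train], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return np.mean(errs)

# Synthetic data standing in for a real feature matrix X and target y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=100)

# Pick lambda on a grid by cross validation, then refit on all the data.
lambdas = np.logspace(-4, 2, 13)
best_lam = min(lambdas, key=lambda lam: cv_error(X, y, lam))
w_final = fit_ridge(X, y, best_lam)
```

The same cross-validation loop works for selecting tuning parameters of other models, which is the sense in which the method generalizes beyond ridge regression; and if you want the intercept handled specially, one common choice is to exclude its coefficient from the penalty, as discussed in the module.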