Great, we've now seen the notion of overfitting in classification, especially in logistic regression. We've seen how decision boundaries get really complicated as we start overfitting, how the parameter values tend to increase, and how we become overconfident about our predictions. We introduced the notion of regularization to try to mitigate the blow-up of those parameters. By doing so, we've seen how you can take the ideas of logistic regression, which so far were more on the theoretical side, and make them ready for real practical work. You should be able to take those ideas and implement them in practice using L2 regularization, and you've also seen how L1 regularization can make sense. We haven't described how to implement L1, but it's similar to what you did for the previous problem.
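As a rough illustration of that "ready for practical work" point, here is a minimal sketch of gradient ascent on the L2-regularized log likelihood for logistic regression. This is not the course's own implementation: the function name fit_logistic_l2, the default step size and iteration count, and the choice not to penalize the intercept are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(scores):
    # P(y = +1 | x, w) under the logistic model
    return 1.0 / (1.0 + np.exp(-scores))

def fit_logistic_l2(X, y, l2_penalty, step_size=1e-4, n_iters=500):
    """Gradient ascent on the L2-regularized log likelihood.

    X          : (N, D) feature matrix; column 0 is assumed to be the intercept
    y          : (N,) labels in {+1, -1}
    l2_penalty : the regularization strength lambda
    """
    w = np.zeros(X.shape[1])
    indicator = (y == +1).astype(float)           # 1 if y_i = +1, else 0
    for _ in range(n_iters):
        errors = indicator - sigmoid(X.dot(w))    # (1[y_i = +1] - P(y = +1 | x_i, w))
        gradient = X.T.dot(errors)                # gradient of the log likelihood
        gradient -= 2 * l2_penalty * w            # L2 term pulls weights toward 0
        gradient[0] += 2 * l2_penalty * w[0]      # assumption: leave the intercept unpenalized
        w += step_size * gradient
    return w
```

The only change from plain (unregularized) logistic regression is the extra -2*lambda*w term in the gradient, which is exactly how the L2 penalty keeps the parameter values from blowing up as the model starts to overfit.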