[MUSIC] Wow, exciting. We got through our first module. We covered a representation for linear classifiers in much more depth than we did in the first course. In fact, we had a chance to dive in and really understand how these models are built, the parameters and the link function associated with them, as well as how to extend them to other settings. So now we're able to take our learning loop that goes from training data, to creating features h(x), to this logistic regression model, and define a learning algorithm, which we've only talked about at a high level so far but will cover in more detail in the next module, to come up with w hat and make predictions from it. So now you can take everything you've learned and talk about what a linear classifier looks like, what the logistic regression model is, how it relates to the general score function, like we have with regression, but squeezes it into the interval 0, 1 to predict probabilities, how it's affected by the parameters, how to extend it to the multiclass setting, and how to deal with categorical data. So we've done a lot. We're really ready to think deeply about classifiers. [MUSIC]
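As a quick companion to that recap, here is a minimal sketch, in Python with NumPy and made-up feature values and coefficients (none of these numbers come from the course), of how the linear score w^T h(x) gets squeezed through the sigmoid into the interval 0, 1 so it can be read as a probability:

```python
import numpy as np

def sigmoid(score):
    # Squash any real-valued score into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-score))

def predict_probability(features, coefficients):
    # Same linear score w^T h(x) as in regression...
    score = np.dot(features, coefficients)
    # ...but passed through the sigmoid so it can be read as P(y = +1 | x, w).
    return sigmoid(score)

# Hypothetical example: two data points, each with three features h(x).
h = np.array([[1.0, 2.5, -1.0],
              [1.0, 0.3,  4.2]])
w_hat = np.array([0.5, 1.0, -0.3])  # illustrative learned coefficients

print(predict_probability(h, w_hat))  # two values, each strictly between 0 and 1
```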