[MUSIC] In this module, we discussed a very important fundamental concept: evaluating classifiers. In particular, we talked about precision and recall, concepts that are used way beyond the classifiers we covered here, in basically any classification problem you're going to see in industry. We saw that straight accuracy or error metrics may not be the right thing for your application, and that you need to look at something else; precision and recall are among the first things you might want to look at. Precision captures, of your positive predictions, the fraction that are actually positive, and recall captures, of all the positive sentences out there, which ones you found, which ones you labeled as positive. We then talked about the trade-off between precision and recall, and how you can navigate that trade-off with the threshold parameter t on the probability, and really get these beautiful precision-recall trade-off curves. And finally, we talked about comparing models with the precision at k metric, which is one that I particularly like for a lot of applications. I want to take a moment here to thank my colleague, Krishna Sridhar, who was instrumental in helping these slides come together and in creating the ideas behind this module. [MUSIC]
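The lecture itself doesn't show code for these metrics, but as a rough sketch (not the course's own implementation), here is one way precision, recall, and precision at k might be computed in Python from predicted probabilities and a threshold t; the function names, the toy data, and the specific thresholds are illustrative assumptions, not part of the module.

```python
import numpy as np

def precision_recall(y_true, scores, t=0.5):
    """Precision and recall when we predict positive for scores >= t.

    Precision: of the examples we predicted positive, the fraction that
    are actually positive.
    Recall: of all the truly positive examples, the fraction we labeled
    positive.
    """
    y_pred = (scores >= t).astype(int)
    true_pos = np.sum((y_pred == 1) & (y_true == 1))
    pred_pos = np.sum(y_pred == 1)
    actual_pos = np.sum(y_true == 1)
    precision = true_pos / pred_pos if pred_pos else 1.0
    recall = true_pos / actual_pos if actual_pos else 1.0
    return precision, recall

def precision_at_k(y_true, scores, k=10):
    """Precision among the k examples the model is most confident are positive."""
    top_k = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
    return np.mean(y_true[top_k])

# Example: sweeping the threshold t traces out the precision-recall trade-off.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])
for t in [0.3, 0.5, 0.7]:
    p, r = precision_recall(y_true, scores, t)
    print(f"t={t:.1f}  precision={p:.2f}  recall={r:.2f}")
print("precision@5 =", precision_at_k(y_true, scores, k=5))
```

Raising t makes the classifier more conservative, which tends to increase precision and decrease recall; lowering t does the opposite, which is exactly the trade-off the curves in this module visualize.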