So in summary, we've talked about nearest neighbor and kernel regression. These are really simple approaches: simple to think about intuitively, and simple to implement in practice. But they have surprisingly good performance across a very wide range of applications. In particular, in this module we covered how to perform one nearest neighbor and k-NN regression. We also talked about weighting our nearest neighbors, which led us to the idea of kernel regression. For kernel regression, there was the choice of a bandwidth parameter, which plays a role akin to the k in k-NN, and we saw that we can choose it using cross validation; a small sketch of both methods follows below. We then discussed some theoretical and practical aspects of k-NN and kernel regression: some really nice properties of k-NN as you get lots and lots of data, but also computational challenges, as well as challenges that arise if you don't have much data or if you're working in very high-dimensional input spaces. Finally, we talked about how one can use k-NN for classification. We're going to talk a lot more about classification in the next course, which is all about classification, so stay tuned for that.
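To make this recap concrete, here is a minimal sketch, not from the lecture itself, of 1-D k-NN regression and Gaussian kernel regression in plain NumPy, with the bandwidth picked by error on a held-out validation set (a single-split stand-in for the full cross validation discussed in the module). All function names and the toy data are illustrative assumptions, not course code.

```python
import numpy as np

def knn_regression(x_train, y_train, x_query, k):
    """Predict y at each query point as the average of its k nearest neighbors."""
    preds = []
    for xq in np.atleast_1d(x_query):
        nearest = np.argsort(np.abs(x_train - xq))[:k]  # indices of k closest points
        preds.append(y_train[nearest].mean())
    return np.array(preds)

def gaussian_kernel(distances, bandwidth):
    """Downweight training points smoothly with distance from the query."""
    return np.exp(-(distances ** 2) / (2 * bandwidth ** 2))

def kernel_regression(x_train, y_train, x_query, bandwidth):
    """Predict y at each query point as a kernel-weighted average of all y_train."""
    preds = []
    for xq in np.atleast_1d(x_query):
        w = gaussian_kernel(np.abs(x_train - xq), bandwidth)
        preds.append(np.sum(w * y_train) / (np.sum(w) + 1e-12))  # guard against all-zero weights
    return np.array(preds)

# Toy data (illustrative): a noisy sine curve.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.3 * rng.normal(size=x.size)

# Hold out a validation set to choose the bandwidth; full cross validation
# would average this error over several such splits.
val_idx = rng.choice(x.size, size=50, replace=False)
train_mask = np.ones(x.size, dtype=bool)
train_mask[val_idx] = False

best = None
for bw in [0.05, 0.1, 0.25, 0.5, 1.0, 2.0]:
    y_hat = kernel_regression(x[train_mask], y[train_mask], x[~train_mask], bw)
    rss = np.sum((y[~train_mask] - y_hat) ** 2)  # residual sum of squares on validation set
    if best is None or rss < best[1]:
        best = (bw, rss)
print("chosen bandwidth:", best[0])

# k-NN regression is the same idea with hard weights: weight 1 on the k
# nearest neighbors and 0 elsewhere, with k playing the role of the bandwidth.
print("k-NN prediction at x=5:", knn_regression(x, y, 5.0, k=10))
```

Note the design parallel the lecture draws: a small bandwidth, like a small k, fits the training data closely but is noisy, while a large bandwidth, like a large k, oversmooths, which is exactly why both are tuned by cross validation rather than on the training error.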