[MUSIC] In this module we've covered a lot of ground. To start with, we motivated taking a probabilistic model-based approach to clustering, and showed mixtures of Gaussians as a special example of such an approach. Then we presented the EM algorithm, which is a very generally useful algorithm, and we specified it specifically for mixtures of Gaussians. And throughout the module we've compared and contrasted these model-based approaches with the K-means algorithm that we described in the last module. What we saw in this module is that you can actually view K-means as a special case of EM for mixtures of Gaussians. So using mixtures of Gaussians, and EM to infer our cluster parameters and our soft assignments, really is a generalization of the K-means algorithm that we showed before. But there is a cost to it, because there are a lot more parameters that we have to learn from data. And in addition, there's a computational cost to running EM instead of K-means, both when we're computing our responsibilities and when we're estimating our model parameters: each of these steps is more intensive than the corresponding step in K-means. So there is a trade-off in terms of flexibility and the descriptive output you get. You get soft assignments capturing uncertainty, and you can do different things with that, but there is a cost to it. And in summary, I'll just leave you with a list of things that you should be able to do having watched this module. [MUSIC]
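To make the two steps mentioned above concrete, here is a minimal sketch of EM for a one-dimensional mixture of Gaussians, with an E-step that computes the responsibilities (soft assignments) and an M-step that re-estimates the mixture weights, means, and variances. This is an illustrative implementation, not code from the course; the function name `em_gmm_1d`, the quantile-based initialization, and the fixed iteration count are my own choices for the sketch.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """EM for a 1-D Gaussian mixture (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Initialise: spread the means over quantiles of the data,
    # use the overall variance for every component, uniform weights.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(cluster j | x_i),
        # computed in log space for numerical stability.
        log_p = (np.log(pi)
                 - 0.5 * np.log(2 * np.pi * var)
                 - (x[:, None] - mu) ** 2 / (2 * var))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft counts.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)  # guard against collapse
    return pi, mu, var, r
```

Both steps cost more than their K-means counterparts: the E-step evaluates a full Gaussian density per point and component instead of a single nearest-center check, and the M-step fits weights and variances in addition to the means. Shrinking all variances toward zero and sharing them across components recovers hard assignments, which is the sense in which K-means is a special case.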