[MUSIC] So congratulations, you guys have made it to the end of the last module of this course. We've covered lots and lots of material, and in particular the last couple of modules have been quite advanced and quite intense.

In this module specifically, we covered Latent Dirichlet Allocation, or LDA, and Gibbs sampling. LDA is a really widely used tool for mixed membership modeling in text corpora, but variants of the LDA model can be used for mixed membership modeling in a huge range of different applications. We also covered Gibbs sampling, which is the most widely used algorithm for Bayesian inference. We specifically examined it in the context of LDA, but it tends to be the most straightforward algorithm for which to derive the updates in any Bayesian model. That said, it's not always the most scalable algorithm, especially the first implementation you might think of writing down, but there's a lot of work in the community on scaling up Gibbs sampling, or Gibbs-sampling-type algorithms, to really large data sets and really big models.

So in conclusion, I want to leave you with a list of things that you should be able to do now that you've completed this module. Take a little bit of time to reflect on this list and think about all the advances that you've made. And finally, I want to give a big thanks to David Mimno for providing an outline of the example that we walked through when we talked about collapsed Gibbs sampling in LDA. [MUSIC]
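To make the collapsed Gibbs update for LDA concrete, here is a minimal sketch in Python on a toy corpus. The variable names (docs, ndk, nkw, nk), the hyperparameter values, and the tiny data set are illustrative assumptions, not from the course; the update itself is the standard collapsed Gibbs conditional, where each token's topic is resampled in proportion to (doc-topic count + alpha) * (topic-word count + beta) / (topic total + V*beta).

```python
import numpy as np

# Toy corpus: each document is a list of word ids. All names and values
# here are illustrative assumptions, not the course's own example.
docs = [[0, 1, 2, 1], [2, 3, 3, 0], [1, 1, 2, 3]]
V = 4                    # vocabulary size
K = 2                    # number of topics
alpha, beta = 0.1, 0.01  # symmetric Dirichlet hyperparameters

rng = np.random.default_rng(0)
D = len(docs)

# Count tables maintained by the sampler:
#   ndk[d, k] = tokens in doc d currently assigned topic k
#   nkw[k, w] = times word w is currently assigned topic k
#   nk[k]     = total tokens currently assigned topic k
ndk = np.zeros((D, K), dtype=int)
nkw = np.zeros((K, V), dtype=int)
nk = np.zeros(K, dtype=int)

# Random initial topic assignment for every token.
z = []
for d, doc in enumerate(docs):
    zd = []
    for w in doc:
        k = int(rng.integers(K))
        zd.append(k)
        ndk[d, k] += 1
        nkw[k, w] += 1
        nk[k] += 1
    z.append(zd)

# Collapsed Gibbs sweeps: resample each token's topic from its
# conditional given all other assignments.
for sweep in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove this token's current assignment from the counts.
            ndk[d, k] -= 1
            nkw[k, w] -= 1
            nk[k] -= 1
            # p(z_i = k | everything else) is proportional to
            # (ndk + alpha) * (nkw + beta) / (nk + V * beta).
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            # Reinsert the token under its newly sampled topic.
            z[d][i] = k
            ndk[d, k] += 1
            nkw[k, w] += 1
            nk[k] += 1

print("topic-word counts after sampling:\n", nkw)
```

In practice you would run many sweeps, discard an initial burn-in period, and then estimate the topic distributions from the count tables; this sketch just shows the shape of the per-token update that the module derived.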