As usual, let's recap what we've done this week and preview next week's lecture. We focused on learning, or extracting information from data: figuring out classes from data in an unsupervised manner using clustering, and figuring out rules from data using unsupervised rule mining. We also took a look at big data, and how simple counting techniques work well if you actually have lots of data, and our unified f(x) formulation helped us understand clustering, rule mining, as well as classification in the same single formulation. We then turned to long, high-dimensional data, and figured out how to learn classes and features in an unsupervised manner using hidden, or latent, techniques.

Next week, we'll continue our discussion of learning a little bit, because the techniques will be very similar to the others that we'll talk about next week as well. We'll discuss learning facts from collections of text via Bayesian networks and hidden Markov models, returning once more to supervised learning in a different form, not just classification. And then we'll ask what use such rules and facts are if, in fact, one doesn't believe John Sterling, and one believes instead that one can reason using rules and facts to connect the dots and make sense of the world, just the way we put two and two together. We'll talk about logical as well as probabilistic reasoning, that is, reasoning under uncertainty, and, most importantly, the semantic web, where attempts have been made to put all these reasoning techniques together in the context of data on the web. Extracting facts and reasoning about them would be a higher-order capability that the web doesn't have today, but might well have in the very near future. So, see you next week.