Later in this course, we will show how natural language processing techniques can be used to extract deeper information from human language. Let's now recap what we've learned this week. We started with Shannon's information theory, which was invented for a totally different context, looked at the concept of mutual information and applied it to the statistics of language, and showed how we could summarize documents and pick out the best keywords using TF-IDF, which turned out to be essentially the same thing as mutual information. We figured out the relationship between communication and machine learning, again in terms of mutual information, and learned the very important naive Bayes classifier, which is a foundation for many machine learning techniques. Then we wrapped up with the limits of machine learning from the information-theoretic perspective, which also told us which features to use and which not to use. And lastly, we ended with some suspicions about whether the bag-of-words approach we'd used, which considers words without their grammatical syntax or semantics, was actually enough to discern meaning.

In future classes we will ask questions such as where features themselves come from. For the moment, we have chosen features like words, and we have labeled past data manually or by experience, such as buyers and browsers. In our lives, however, the labels and the features need to be derived by us automatically as we learn about the world, with no supervision and nobody telling us what is a feature and what is not.

Before we come to those very interesting ideas in the world of learning, we'll first take an excursion into big data technology next week, as promised. We'll describe how the new technologies developed in the web world differ significantly from traditional technologies. Then we'll do some experiments and assignments on how they can be used for indexing, PageRank, computing TF-IDF, implementing naive Bayes classifiers, computing mutual information, and all the nice stuff we have learned so far, including the locality-sensitive hashing we did last week.

We've learned a lot of theory and done some calculations. Now get ready to do some implementation and programming; a couple of small warm-up sketches follow below. So see you next week, and of course, don't forget to submit your homework by Monday.
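As a first warm-up for next week's assignments, here is a minimal sketch of the TF-IDF weighting from this week, written in plain Python with no libraries. The three documents are hypothetical toy examples, not course data; the comment on the idf factor points at the connection to mutual information discussed in lecture.

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",            # hypothetical toy documents
    "the dog chased the cat",
    "information theory and language",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency: how many documents contain each term.
df = Counter()
for toks in tokenized:
    for term in set(toks):
        df[term] += 1

def tf_idf(term, doc_tokens):
    """TF-IDF: term frequency times log of inverse document frequency."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    idf = math.log(N / df[term])
    return tf * idf

# The idf factor log(N / df) echoes the pointwise-mutual-information idea:
# it rewards terms that occur in this document far more often than they
# would if they were spread evenly across all documents.
for term in ("cat", "information"):
    print(term, [round(tf_idf(term, toks), 3) for toks in tokenized])
```

Terms that appear in every document get an idf near zero, so common function words like "the" are automatically discounted, which is exactly why TF-IDF picks good keywords.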
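And here is a second sketch, a minimal multinomial naive Bayes classifier with add-one (Laplace) smoothing, using the buyers-versus-browsers labeling mentioned above. The training sessions below are hypothetical, just enough to make the sketch runnable; next week's assignment will implement this at scale.

```python
import math
from collections import Counter, defaultdict

train = [
    ("add to cart checkout payment", "buyer"),       # hypothetical sessions
    ("checkout shipping payment confirm", "buyer"),
    ("browse compare browse exit", "browser"),
    ("search browse compare exit", "browser"),
]

class_counts = Counter()                 # P(c) estimates come from these
word_counts = defaultdict(Counter)       # P(w|c) estimates come from these
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    """Return argmax_c [ log P(c) + sum_w log P(w|c) ], add-one smoothed."""
    best_label, best_score = None, float("-inf")
    total_docs = sum(class_counts.values())
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.split():
            # Add-one smoothing keeps unseen words from zeroing out a class.
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("browse compare exit"))        # expected: browser
print(classify("checkout payment confirm"))   # expected: buyer
```

Working in log space is the standard trick here: multiplying many small word probabilities underflows, while summing their logs is stable and picks the same winner.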