Let's recap what we've learned this week. First, we spoke about search, indexing, and ranking. You will notice that we left out an important part, while alluding to the difficulty of assembling very large result sets and, at the same time, ranking them efficiently. The homework problems are related to this issue, and I also expect that you'll have some animated discussion on the forum on this particular point. At the same time, please don't give away the exact answers to the homework questions, though the discussions will certainly help those having some difficulty with the homework.

Next, we discussed enterprise search and searching structured data. We found that it's not a solved problem, and that it's a little more difficult than searching web data for a number of reasons. Finally, we turned to object search and covered a very important technique called locality sensitive hashing. We also briefly touched upon associative memory, which is a related technique, but one that doesn't actually store the objects.

Now, think about the problem of searching for objects: instead of wanting to find the objects that are near your query, suppose you only want to find which set of objects the query is likely to belong to. In some sense, you want to find the class of objects, without worrying about the exact objects that are similar to your query, a bit like associative memory. Well, this is exactly the machine learning problem, and that's what we're going to get into next week. You will find that the approach we're going to take to machine learning, via information theory, is slightly unique and not covered in many standard machine learning textbooks. So, see you then, and don't forget to submit your homework by Monday.
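As an aside for those reviewing, the locality sensitive hashing idea mentioned above can be sketched in a few lines. This is a minimal illustration of one common variant, random-hyperplane hashing for angular (cosine) similarity, not necessarily the exact scheme from lecture; the function names here are my own:

```python
import random

def random_hyperplanes(dim, n_planes, seed=0):
    # Each hyperplane is a random Gaussian vector; the side of the plane
    # a point falls on contributes one bit to that point's hash.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def lsh_signature(vec, planes):
    # One bit per hyperplane: the sign of the dot product with the vector.
    return tuple(
        1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
        for plane in planes
    )

# Vectors separated by a small angle tend to land on the same side of most
# hyperplanes, so they tend to share signature bits. Bucketing objects by
# signature therefore yields candidate near-neighbours without comparing
# the query against every stored object.
planes = random_hyperplanes(dim=3, n_planes=8)
sig_a = lsh_signature([1.0, 0.9, 1.1], planes)
sig_b = lsh_signature([1.0, 1.0, 1.0], planes)  # small angle from the first
```

Note how this connects to the classification framing above: the signature identifies a bucket, a coarse "class" of similar objects, rather than any one exact neighbour.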