So, perhaps we can now return to the question that we began this course with: what does data have to do with intelligence? As the saying goes, any fool can know; the point is to understand, and the goal of understanding is to predict. The brain, as we have seen, is largely a prediction machine. It also controls our bodies, and arguably it does that through prediction as well. And we've seen how business systems and web applications can use the techniques for predictive intelligence that we've learned in this course: looking, listening, learning, connecting, predicting and, of course, correcting, which we haven't covered.

So it's worthwhile recapping what we've learned in each of these elements, just so that one realizes we've actually covered a lot of ground, and to highlight the most important points along the way.

In Look, we talked about search and a very important technique called locality sensitive hashing, about the relationship of search, PageRank and so on to memory, and we touched upon associative memories, which can recall from partial or jumbled inputs and which, as we've just seen, are crucial to things like hierarchical temporal memory, the latest in predictive intelligence.

Moving on to Listen, we learned about the naive Bayes classifier and the role of mutual information in figuring out which features are better than others. We then looked at a unified framework for classification, clustering and rule mining, and talked about how latent, or hidden, models can be used to learn features and classes together.

In Connect, we covered reasoning, the semantic web vision, how rules can be learned from large volumes of text, and Bayesian networks for reasoning under uncertainty.

Finally, in Predict, we talked about linear regression and linear prediction, neural networks, hierarchical temporal memory and a blackboard architecture. Of course, in the end an intelligent system also has to translate these predictions into actions through corrections, which involves optimization and planning, which we haven't had time to cover this time; maybe next time.

Along the way we also learned about the Load element: how large volumes of data and processing can be handled on the modern computing systems that have emerged from the web. We learned about MapReduce and the evolution of databases.
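As a small refresher on that Load element, here is a minimal sketch of the MapReduce pattern, simulated in plain Python on a single machine rather than on Hadoop or a real cluster. The word-count task, the toy documents and all the names below are made up purely for illustration.

```python
from collections import defaultdict

# Toy "documents" standing in for a large corpus split across many machines.
documents = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog barks",
]

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in one document."""
    for word in doc.split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: combine all the counts for one word."""
    return key, sum(values)

# Run the three phases sequentially; a real MapReduce framework would run
# many map and reduce tasks in parallel across a cluster.
mapped = [pair for doc in documents for pair in map_phase(doc)]
grouped = shuffle(mapped)
counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)  # e.g. {'the': 3, 'quick': 2, 'dog': 2, ...}
```

The point of the pattern is that map and reduce are stateless per key, so the framework is free to distribute and parallelize them over very large data.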
So, congratulations! We've really covered a lot and learned a lot in this course, even if at a high level. I hope you all see how things fit together. My goal in this course has been to try to convey the big picture: how many different techniques are really different ways of looking at the same thing, and which techniques work well in which situations. Hopefully you'll now be ready for some interesting challenges, which I'll point to in a minute.

Before that, let me point out some deep research problems. The first one is looking. Searching and indexing seem very simple, but think about what's involved in looking at data. When I look at a piece of data, I need to figure out a lot of things: what features am I going to extract from this data? What techniques am I going to use? What insights do I think I can get out of it? These are all reasoning elements; looking at data involves reasoning. Today, all of that is done by people, and there are very few assistive decision-support systems to help people look at data.

The second, on a more abstract level, I've already mentioned: how does symbolic reasoning arise from bottom-up, data-driven techniques, be they neural, predictive, classification or clustering? Where do the symbols emerge, where do the rules and reasoning emerge, where does logic emerge? These are mysteries which we simply haven't understood adequately.

And lastly, any real intelligent system, be it a human being, a web intelligence system or a business intelligence system, requires a purpose. I'm certainly not suggesting that a machine would acquire its own purpose, and I'm not suggesting for a moment that free will is anywhere near our grasp in terms of coding it. But the level at which we code in purpose today is very low. We're actually handing the system all the pieces of the puzzle: we're telling it how to look, how to listen, how to learn, how all the pieces fit together; we're giving it the architecture. We're nowhere near systems which learn how to put these pieces together by themselves, given a goal. For example, if you were to design a system that controls all the traffic in a city, along with self-driving cars, how would that system evolve? How would it make use of better techniques to achieve its stated goals? The goals themselves are very clear, and we give the system its goals, but the system is hard-coded in terms of how it puts different techniques together to achieve them; it's not able to reason about how it's reasoning. Which brings us back to the first problem, reasoning about looking: can we give systems a higher-level purpose, and then let them figure out the sub-goals and how different techniques should be put together?

I think these three problems are deep and extremely difficult to tackle. They can be looked at as pointers to very specific research problems in thousands of different ways, and for graduate students looking at deep problems in big data analytics or artificial intelligence, I think these are good pointers.

For those of you looking for more practical challenges, I'd like to point you to Kaggle, a site where organizations, both governments and companies, post data and problems about that data, and invite people to compete for prizes by doing great data analytics on that data. There are lots of competitions up there. For example, there's one about online product sales, predicting the online sales of a consumer product based on its features; that data is used in the latest programming assignment. There are others on different topics, such as a very large-scale data mining challenge posted by Best Buy, dealing with mobile web data and figuring out which products a user will be most interested in.

So I think that, with all the techniques that you've learned, or at least been exposed to, in this course, you're well placed to tackle many of these data mining challenges, probably with a little extra reading and a little extra work; you should be able to approach these problems knowing which algorithms to use, which packages to look for and which kinds of techniques to apply.
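For instance, a very simple starting point for something like the online product sales problem is plain linear regression, which we covered under Predict. The sketch below only illustrates the mechanics: the feature values, sales numbers and shapes are invented, and a real competition entry would of course need proper feature extraction, regularization and validation.

```python
import numpy as np

# Hypothetical feature matrix: each row is one product, columns are numeric
# features (say, price and a category flag); y holds the sales figures we
# want to predict. All numbers are made up for illustration.
X = np.array([
    [1.0, 10.0, 0.0],
    [1.0, 20.0, 1.0],
    [1.0, 15.0, 0.0],
    [1.0, 30.0, 1.0],
])                      # the leading column of ones acts as the intercept term
y = np.array([120.0, 260.0, 180.0, 390.0])

# Ordinary least squares: find w minimizing ||X w - y||^2.
w, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("weights:", w)

# Predict sales for a new (hypothetical) product.
x_new = np.array([1.0, 25.0, 1.0])
print("predicted sales:", x_new @ w)
```

From a baseline like this, one would then try richer features and the other predictive techniques from the course to see what actually improves the score.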
So finally, do remember that all remaining quizzes, homework and programming assignments are due on the ninth of November, when the course ends, at 11:59:59 PST. The final exam is also on Friday, the ninth of November; the time will be announced on the site, and it will stay open until 23:59:59 that day, though it may be closed for a short period after that so that I can extract the specific grades for the IIT and IIIT students.

Finally, thanks for being such a great class. I hope you enjoyed the course. I know there have been some lapses, in terms of errors in homework assignments, and the area we've covered is also rather vast, so some of you may have lost track along the way. Do write about this course, on, say, Coursedoc for example, so that the next version of the course can be made much, much better. And do contact me if you want to research any of the deep problems, or enjoy yourself doing great data challenges on Kaggle. Best wishes, and goodbye.