A few more important points about belief networks, which are essentially Bayesian networks and their generalizations. So far, we have talked about using Bayesian networks to model simple situations, like the sprinkler, rain, and wet-grass example, as well as to learn facts from text. But we haven't asked where these networks come from. So far, we have used networks that we imagined from intuition or judgement. It turns out that not only can the conditional probabilities in the networks be learned from data, but the network structure itself can be learned from data (a toy sketch of the idea appears at the end of this lecture).

One of the greatest examples of learning network structure from large volumes of data has been in genomic medicine, and in medicine in general. Medical diagnosis, for example, determining which treatments are best for which symptoms, has used Bayesian networks very successfully for many years to build assistive systems for medical diagnostics, especially in regions where there aren't many qualified doctors. In genomic medicine, the relationships between different genes expressing themselves in an organism have been learned from large volumes of experimental data using techniques for learning the structure of Bayesian networks. Similarly, how phenotypes, that is, the traits exhibited in an organism, arise from that organism's genes has also been captured by Bayesian networks inferred from large volumes of data.

When it comes to logic and uncertainty, there is a growing realization that belief networks are really bridging the gap between the fundamental limits of logic and the pitfalls of uncertainty. One indication of this is the fact that Judea Pearl, who invented Bayesian networks, received the 2011 Turing Award, the highest award in computer science. His initial work on Bayesian networks was in the 1980s, and in fact he has written a more recent book on causality, which is still a deep subject, not completely covered or even fully explored using Bayesian networks. Other kinds of networks that merge logic and probability are Markov logic networks, conditional random fields, and many others. We won't have time to even touch on these in this course, but they are all forms of belief networks that bridge the gap between logic and uncertainty.

Coming to big data: we have seen that inference in such networks can be done using SQL. We've shown this for Bayesian networks, but the key fact is that it's all counting, so map-reduce works as well. Because inference reduces to counting in SQL, we can actually do inference from large volumes of data using big-data technologies (a small sketch of this counting-based inference also appears at the end of this lecture).

As regards our hidden agenda about AI: deep belief networks, which we will study a little in the last week of lectures, are one direction in which connectionist models of the brain are being explored and extended. This brings everything back together: all the things we are studying, probability, statistics, Bayesian networks, and learning, are eventually teaching us more and more about how the brain works.

This has been a long lecture, and we've covered a lot of ground, so it's worth recapping what we've actually learned. First, we began by saying that search is not enough for general question answering on the web, which led us to reasoning. Logic and the semantic web vision are one way of trying to address the problem of how a web intelligence system could computationally answer a general-purpose question.
We learned, of course, that there are fundamental limits to logic, as well as practical ones arising from uncertainty. We then studied how reasoning under uncertainty can be handled using Bayesian networks and, more generally, probabilistic graphical models; though we didn't cover the latter, we indicated the direction in which this field is going.

As for the next few weeks: next week we will have a programming assignment on Bayesian inference using SQL. We will have a short lecture video next week to explain the assignment, but do start preparing by studying the SQL-based inference that we've done this week, as well as by experimenting with a SQL engine of your choice. I would suggest SQLite3, which comes bundled with Python and for which one can use an in-memory database; that will pretty much suffice, since the tables we'll be using will be quite small (the sketches at the end of this lecture are a good starting point). The following week will have the Predict lecture, where we'll put everything together, as well as our final programming assignment. Then comes the last week, which will have the final exam and the submission dates for all the programming assignments from here onwards. So, please prepare for next week's programming assignment, and do remember to complete the quiz and homework for this week, which are due only next Friday, since we don't have a full lecture next week.
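To make the counting idea concrete, here is a minimal sketch of inference by counting, using Python's bundled sqlite3 with an in-memory database. The table name, columns, and sample data are assumptions made up for this illustration, not the assignment's actual schema.

```python
import sqlite3

# In-memory database: nothing is written to disk.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# A table of observed samples: one row per observation,
# with 0/1 values for rain, sprinkler, and wet grass.
cur.execute("CREATE TABLE samples (rain INTEGER, sprinkler INTEGER, wet INTEGER)")
data = [
    (1, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
    (0, 1, 1), (0, 0, 0), (0, 0, 0), (1, 0, 0),
]
cur.executemany("INSERT INTO samples VALUES (?, ?, ?)", data)

# P(rain=1 | wet=1) is just a ratio of two counts: the number of
# wet-and-rainy samples over the number of wet samples.
cur.execute("""
    SELECT CAST(SUM(rain) AS REAL) / COUNT(*)
    FROM samples
    WHERE wet = 1
""")
print("P(rain=1 | wet=1) =", cur.fetchone()[0])
con.close()
```

Every conditional probability here is a ratio of two counts, and counts are exactly what SQL aggregation, and map-reduce, compute at scale; the same query pattern applies unchanged to much larger tables.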
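In a Bayesian network we usually start from conditional probability tables rather than raw samples. As a second sketch, again with hypothetical table names and made-up probability values, here is how CPTs can be stored as tables and combined with a join to marginalize a variable out, computing P(wet=1) for a tiny rain-wet fragment of the network.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# The prior P(rain) and the CPT P(wet | rain), each stored as a table.
# The probability values below are assumptions made up for this sketch.
cur.execute("CREATE TABLE p_rain (rain INTEGER, p REAL)")
cur.executemany("INSERT INTO p_rain VALUES (?, ?)", [(0, 0.8), (1, 0.2)])

cur.execute("CREATE TABLE p_wet_given_rain (rain INTEGER, wet INTEGER, p REAL)")
cur.executemany(
    "INSERT INTO p_wet_given_rain VALUES (?, ?, ?)",
    [(0, 0, 0.9), (0, 1, 0.1), (1, 0, 0.2), (1, 1, 0.8)],
)

# P(wet=1) = sum over rain of P(rain) * P(wet=1 | rain):
# a join on the shared variable multiplies the factors,
# and SUM marginalizes rain out.
cur.execute("""
    SELECT SUM(r.p * w.p)
    FROM p_rain AS r
    JOIN p_wet_given_rain AS w ON r.rain = w.rain
    WHERE w.wet = 1
""")
print("P(wet=1) =", cur.fetchone()[0])
con.close()
```

Joins multiply factors on their shared variables and SUM marginalizes those variables out; chaining this pattern across all the CPTs of a network is Bayesian inference expressed in SQL.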
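Finally, to make concrete the earlier remark that network structure itself can be learned from data, here is a toy sketch of score-based structure learning on made-up data: it compares the log-likelihood of two candidate structures over rain and wet, the edge rain -> wet versus no edge at all, using nothing but counts. Real structure learners search over many candidate structures and penalize complexity, for example with a BIC score; this sketch only illustrates the counting at its core.

```python
import math
from collections import Counter

# Observed (rain, wet) samples; made up for this sketch.
data = [(1, 1), (1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 0), (0, 1)]
n = len(data)

rain_counts = Counter(r for r, _ in data)   # marginal counts of rain
wet_counts = Counter(w for _, w in data)    # marginal counts of wet
joint_counts = Counter(data)                # joint counts of (rain, wet)

def loglik_with_edge():
    # Structure rain -> wet: P(rain) * P(wet | rain), estimated by counting.
    return sum(
        math.log(rain_counts[r] / n)
        + math.log(joint_counts[(r, w)] / rain_counts[r])
        for r, w in data
    )

def loglik_no_edge():
    # Structure with no edge: P(rain) * P(wet), the two treated as independent.
    return sum(
        math.log(rain_counts[r] / n) + math.log(wet_counts[w] / n)
        for r, w in data
    )

print("log-likelihood with edge rain -> wet:", loglik_with_edge())
print("log-likelihood with no edge:         ", loglik_no_edge())
# The higher-scoring structure is the one the data supports better;
# here the edge wins, because rain and wet co-occur in these samples.
```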