We have the same tables as before, but now we have not only the evidence that the grass is wet, but also that the sprinkler was on. Let's see how the probabilities work out. The probability of R given W and S, times the probability of W and S, is the joint. We take the probability of W and S to the other side, calling its inverse sigma as before, and we are left with the other expansion of the joint: the probability of W given R and S, times the probability of S, times the probability of R. In other words, P(R | W, S) = sigma · P(W | R, S) · P(S) · P(R), where sigma = 1 / P(W, S). Notice that the probability of S appears inside sigma, because P(W, S) sums P(W | R, S) · P(S) · P(R) over R, and it also appears explicitly in the numerator; in practice the two cancel out, but for our purposes it is instructive to leave them as they are. The equation is perfectly valid as it is.

Executing this in SQL is simply a matter of computing the joint with the restriction that W and S are both yes (a concrete sketch of the query appears toward the end of this section): SELECT R and the SUM of the products of all the pieces from all the tables, WHERE W is yes and S is yes. Now the only common variable is R, since S is being restricted to yes, and we GROUP BY R. As for the results: since we are taking only the cases where S and W are both yes, we get 0.9 times 0.3 times 0.2 as the only entry for R equal to yes; there is nothing to sum up, because the other pieces, where S equals no, were left out. That gives 0.054. Similarly, 0.7 times 0.3 (again 0.3 because S equals yes) times 0.8 gives 0.168 for R equal to no. Notice that we could just as well have omitted the 0.3, but we leave it there so that the SQL does not change. Normalizing so that the sum is one, we divide 0.054 by the total, which gives about 0.24. That is, there is now a 24% chance that it is raining, which is less than the earlier 42%.

So what has happened is that our belief in whether or not it had rained first changed because we added a new possible cause, and then changed further when we observed that new possible cause to actually be true. Our belief gets revised because the evidence propagates through the network, changing our belief about R once again.

A final point, which is related to our hidden agenda again: the fact that we have multiple possible causes for a feature or an observation, and that observing one of them changes our belief in the other possible cause, is an example of what we do every day. When we observe an event, we can explain away other possible causes; doctors do this all the time in diagnosing patients, and that is the example we are going to study in our next programming assignment. Explaining away is a feature of human reasoning that is also exhibited by Bayesian networks, and so probabilistic reasoning is, in some sense, mirroring how we deal with uncertainty in the world.

Studying Bayesian networks using SQL, as we have done in this class, is one of the easiest ways to understand what is happening in these networks. At the same time, it is not the most efficient way to do inference in Bayesian networks; there are many other, more efficient algorithms. Having said that, it is also true that the linkages between SQL and inference in Bayesian networks have probably not been studied as much as they should have been. There are very few papers on this, and probably no textbooks which talk about this way of explaining Bayesian networks, so you will have to rely on the lecture notes alone for this topic.
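To make the query concrete, here is a minimal sketch of the computation in SQL. The table and column names (pr, ps, pw, and so on) are assumptions for illustration only, not the schema used in the assignment, and only the probability rows needed for this particular query are filled in; the numbers are the ones from the example above.

    -- Conditional probability tables; only the rows needed for W = yes, S = yes.
    CREATE TABLE pr (r TEXT, p REAL);                   -- P(R)
    CREATE TABLE ps (s TEXT, p REAL);                   -- P(S)
    CREATE TABLE pw (r TEXT, s TEXT, w TEXT, p REAL);   -- P(W | R, S)

    INSERT INTO pr VALUES ('yes', 0.2), ('no', 0.8);
    INSERT INTO ps VALUES ('yes', 0.3), ('no', 0.7);
    INSERT INTO pw VALUES ('yes', 'yes', 'yes', 0.9),
                          ('no',  'yes', 'yes', 0.7);

    -- Unnormalized P(R, W = yes, S = yes): join all the tables, restrict W and S
    -- to yes, multiply the pieces, and group by the remaining variable R.
    SELECT pr.r, SUM(pw.p * ps.p * pr.p) AS joint
    FROM pw, ps, pr
    WHERE pw.w = 'yes' AND pw.s = 'yes'
      AND ps.s = pw.s
      AND pr.r = pw.r
    GROUP BY pr.r;

This returns 0.054 for R = yes and 0.168 for R = no; dividing each by their total, 0.222, gives roughly 0.24 and 0.76, which is the 24% figure mentioned above.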
There are other algorithms for performing inference in Bayesian networks; the junction tree algorithm, for example, is very well known. In some sense, what that kind of algorithm does and what a SQL engine does internally to plan out the joins are very similar, and there is a deep relationship between how SQL query optimization takes place and the junction tree algorithm. For those of you who want to explore further, this is probably an interesting area for some research. Anyway, we shall now turn to some other applications of Bayesian networks, and graphical models in general: learning about facts from text.