We have our same tables as before, but now we have not only the evidence that the grass is wet, but also that the sprinkler was on. Now, let's see how the probabilities work out.

We have the probability of R given W and S, times the probability of W and S, as the joint. We take the probability of W and S to the other side by calling its inverse sigma, as before, and then we have the other expansion for the joint: the probability of W given R and S, times the probability of S, times the probability of R. In other words, P(R | W, S) = sigma * P(W | R, S) * P(S) * P(R), with sigma = 1 / P(W, S). Notice that the probability of W and S is inverted inside the sigma, and we also have the probability of S over here; in practice they cancel out, but for our purposes it is instructive to leave them as they are. The equation is perfectly valid as it is.

Executing it in SQL, simply by computing the joint with the restriction that W and S are both yes, gives us: select R, and the sum of the products of all the probabilities from all the tables, where W is yes and S is yes, and then group by R. Now the only common variable left is R, since S is restricted to yes. (A sketch of this query follows this walkthrough.)

As for the results: since we are taking only the cases where S and W are both yes, we get 0.9 times 0.3 times 0.2 as the only entry for R equal to yes. There is nothing to sum up, because the other pieces, where S equals no, were left out. That comes to 0.054. Similarly, 0.7 times 0.3 (again 0.3, because S equals yes) times 0.8 gives 0.168. Notice that we could just as well have omitted the 0.3, but we leave it there so that the SQL does not change.

Normalizing so that the sum is one, we divide 0.054 by the sum, which gives 0.24. That is, we now have a 24% chance that it is raining, which is less than the earlier 42%. So what has happened is that our belief in whether or not it had rained first changed, because we added a new possible cause, and then changed further when we observed that new possible cause to actually be true. Our belief gets revised because this evidence propagates through the network, changing our belief about R once again.
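To make the query concrete, here is a minimal sketch in SQL. It assumes the three conditional probability tables are stored as pR(R, p), pS(S, p), and pW(W, R, S, p), where the column p holds the probability for each row; these table and column names are illustrative assumptions, not necessarily the exact ones used in the course materials.

    select pW.R, sum(pW.p * pS.p * pR.p) as unnormalized
    from pW, pS, pR
    where pW.W = 'yes'
      and pW.S = 'yes'
      and pS.S = pW.S
      and pR.R = pW.R
    group by pW.R;

With the numbers quoted above, this returns 0.054 for R = yes and 0.168 for R = no; dividing each row by their sum, 0.222, gives the normalized result of roughly 0.24 for R = yes.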
A final point, which is related to our hidden agenda again: the fact that we have multiple possible causes for a feature, or an observation, and that observing one cause changes our belief in the other possible causes, is an example of what we do every day. When we observe an event, we can explain away other possible causes. In fact, doctors do this all the time in diagnosing patients, and that is the example we are going to study in our next programming assignment. Explaining away is a feature of human reasoning that is also exhibited by Bayesian networks, and so probabilistic reasoning is, in some sense, mirroring how we deal with uncertainty in the world.

Studying Bayesian networks using SQL, as we have done in this class, is one of the easiest ways to understand what is happening in these networks. At the same time, it is not the most efficient way to do inference in Bayesian networks; there are many other, more efficient algorithms. Having said that, it is also true that the linkage between SQL and inference in Bayesian networks has probably not been studied as much as it should have been. There are very few papers on this, and probably no textbooks that talk about this way of explaining Bayesian networks, so you will have to rely on the lecture notes alone for this topic.

There are other algorithms for performing inference in Bayesian networks; the junction tree algorithm, for example, is very well known. In some sense, what that kind of algorithm does and what a SQL engine does internally to plan out the joins are very similar: there is a deep relationship between how SQL query optimization takes place and the junction tree algorithm. For those of you who want to explore this further, it is probably an interesting area for some research.

Anyway, we shall now turn to some other applications of Bayesian networks, and graphical models in general, to learn about facts from text.