The rule for the probability of conjunctions is a little more complicated. One reason is that there are really two cases we have to keep separate. We're going to look first at the case of independent events, and we have to begin by saying what it means to call them independent. To call them independent is simply to say that the probability of one does not affect the probability of the other. So, if you flip a coin or roll a die and get a certain result, and then you pick up the coin or the die and you flip it or roll it again, then the second result is not affected by the first result. So those are independent events. And with cards, if you take a deck of cards and you pick out a card, say the two of spades, then you take the two of spades and you put it back in the deck and shuffle, whether you get it again is independent of the first pick, because what you picked up the first time doesn't affect what you pick up the second time. But take a deck of cards and pull out the king of spades, and you take that card and throw it away, and don't put it back in the deck; then that affects all the probabilities of the remaining cards, because now you have fewer cards than you had before. So that's the difference between independent events and dependent events. We're going to look at independent events first, and look at the rule for the conjunction of independent events. The rule for this case is pretty simple. If two events are independent, then the probability that both events will occur in that order (because we're talking about this one and then that one; we'll say a little more about that later, but we're assuming it here) is the product of the probability of the first event times the probability of the second event. And notice that "both occur" is tied to the product, that is, to multiplying the two probabilities together.
And the reason for that is that it's less likely that they will both occur than that either one of them will occur separately. When you multiply two probabilities between zero and one, you're going to get a lower probability that they both occur than for either one of them occurring all by itself. We can restate this rule symbolically, using the symbols that we used before for the probability of negation: the probability of hypothesis 1 and hypothesis 2 both being true is the probability of hypothesis 1 times the probability of hypothesis 2. Now let's apply it to some cases. Suppose you flip a coin and you get heads, and you flip another coin and you get heads again. What's the probability of getting heads twice on two flips of a coin? Well, the probability of getting heads is .5 for the first flip. It's .5 for the second flip, and then the probability of getting both is going to be .5 times .5, or .25. So the probability of getting two heads in a row is .25 on two flips of a fair coin. A little more complicated example uses cards. Here we have a standard deck of cards: ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, jack, queen, king. So there are 13 different types of cards, and there are 4 different suits: spades, hearts, diamonds, and clubs. So we can remember that. The probability of having one of these types of cards chosen, assuming that they're shuffled properly, is one out of thirteen, and the probability of getting a particular suit, say a club, is one out of four, because there are 13 clubs out of 52 cards. Clubs are one suit out of four, so the probability of picking a club is one in four, and the probability of picking a particular type of card, say a 3, is one in thirteen. Again, take a deck of cards. Suppose I pull out a 4, and then I pull out a club. Now, in a standard deck of cards there are thirteen types of cards (we're assuming no jokers), so the odds of getting a four are one in thirteen.
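The coin-flip arithmetic can be checked in a few lines of Python (this code is not part of the lecture, just an illustrative sketch): the rule for independent events multiplies the two probabilities, and we can confirm the answer by listing every equally likely outcome of two flips.

```python
from itertools import product

# Probability of heads on one flip of a fair coin
p_heads = 0.5

# Rule for independent events: P(H1 and H2) = P(H1) * P(H2)
p_two_heads = p_heads * p_heads

# Sanity check by enumerating the four equally likely outcomes: HH, HT, TH, TT
outcomes = list(product(["H", "T"], repeat=2))
favorable = [o for o in outcomes if o == ("H", "H")]

print(p_two_heads)                     # .25, by the rule
print(len(favorable) / len(outcomes))  # .25, by counting outcomes
```

Both calculations agree: one outcome out of four is two heads, which is .5 times .5.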
There are four suits: spades, hearts, diamonds, and clubs. So the odds of picking a card that's a club are one in four. What's the probability of picking a four and then picking a club? Well, it's going to be 1 in 13 times 1 in 4, and that means it's going to be 1 in 52. So the odds of that combination occurring, in that order, are 1 over 13 times 1 over 4, which equals 1 in 52. That example shows you how to calculate the probability of a conjunction with independent events, but what if the events are not independent? That is, what if the probability of one affects the probability of the other? Here's a question. Joe can run a mile in less than five minutes about half the time. When he runs and he's timed, half of the time he's under five minutes, and half of the time he's over five minutes. So what's the probability that he can run a mile in less than five minutes on a given occasion? .5. But what's the probability that he can run one mile in under five minutes, and then run another mile in under five minutes right after that? Well, if you did this calculation, you would have .5 times .5, and that means .25. But you know that's not right, because he's going to be dead tired after the first mile. There's no way that the probability of running the second mile in under five minutes is just as high whether or not he has just finished running a mile. So those events are not independent, and that example shows you why you need a new formula to calculate the probability of a conjunction when events are not independent, but instead their probabilities depend on each other. The same distinction between independence and dependence applies to our old friend, cards. Just to simplify matters, let's start with a very limited deck. It's just got four aces: the ace of spades, the ace of hearts, the ace of diamonds, and the ace of clubs. And notice that the ace of hearts and the ace of diamonds are red, and the ace of spades and the ace of clubs are black.
So, knowing that, out of these cards, if we shuffle them and don't look at them, what's the probability of picking a red ace? Well, 1 in 2, right, because 2 of them are red out of 4. Say I pick one: a red one, I've picked a red one. Now let's put it back in the deck and shuffle them together. What's the probability now of picking another red ace? Again, 1 in 2. So, if you look at the possibilities all laid out, the first pick is in the columns: the leftmost column is if you pick the ace of spades on your first pick, the second column is the ace of hearts on your first pick, the third column is the ace of diamonds on your first pick, and the fourth column is the ace of clubs on your first pick. And then your second pick is the rows, and again it's either spades, hearts, diamonds, or clubs. So there are 16 possible combinations of picks here, in order, and how many of them are picks where you get two red aces? Well, the 4 possibilities in the middle are the possibilities where you've picked an ace that's red, both on your first pick and also on your second pick. So the odds of doing that are 4 out of 16, or one out of four, or .25, just like we calculated using the formula. But what if there's no independence? Let's start with the same four aces. What's the probability of picking a red ace? Still one in two. So let's suppose I pick the ace of hearts, throw it away, and do not put it back in the deck. Now I've only got three aces left. What's the probability of picking a second red ace without looking? Well, it's going to be one in three, because one of them is red and the other two are not. There are three cards left, and one of them is a red ace. We can also look at it this way. There are sixteen possibilities. How many of these include a red ace on the first pick and also a red ace on the second pick?
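The 16-cell table of first and second picks (with the first card put back, so picks can repeat) can be reproduced by enumeration; this Python sketch is an illustration added here, not part of the lecture:

```python
from itertools import product

aces = ["spades", "hearts", "diamonds", "clubs"]
red = {"hearts", "diamonds"}

# With replacement, every ordered pair of picks is possible,
# including picking the same ace twice: 4 * 4 = 16 combinations.
pairs = list(product(aces, repeat=2))
red_red = [p for p in pairs if p[0] in red and p[1] in red]

print(len(pairs))                  # 16 possible ordered picks
print(len(red_red))                # 4 of them are red then red
print(len(red_red) / len(pairs))   # 4/16 = .25
```

Counting the grid directly gives 4 out of 16, which is .25, just as the formula said.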
Well, those are the four in the middle. But if we got rid of the ace of hearts by throwing it out and not putting it back in the deck, then we've left out that whole row. And we know that we're in the ace-of-hearts column, because we picked the ace of hearts the first time. And only one of the three remaining possibilities in that column is picking a red ace the second time, so we know the probability is one out of three. So when it's dependent like this, because you don't put the card back, you need a different formula. The probability is one out of two times one out of three, which equals one out of six, instead of the one out of four that we had when the picks were independent and you put the card back. So we need a different formula to generalize to this case where there is dependence, and this formula is going to use the conditional probability, which is just a fancy way of saying what I've just mentioned to you. The conditional probability of picking a red ace on the second pick, given that I picked a red ace on the first pick and threw it away, is one in three. It's how many times the event occurs (picking a red ace on the second pick) out of the cases where I picked a red ace on the first pick and threw it away, and that's going to be your one in three. So that's the conditional probability, and now the formula: the probability of both of two events occurring is the product of the probability of the first event occurring times the conditional probability of the second event occurring, given that the first event occurred. That's where you use the conditional probability. Now notice, when the events are independent, the conditional probability of the second event occurring, given that the first event occurred, is just going to be the same as the probability of the second event occurring, because the first event doesn't affect the second event.
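The dependent case can also be checked by enumeration: without replacement, the two picks must be different cards, and the answer matches the conditional-probability rule. This Python sketch is an added illustration, not part of the lecture:

```python
from fractions import Fraction
from itertools import permutations

aces = ["spades", "hearts", "diamonds", "clubs"]
red = {"hearts", "diamonds"}

# Without replacement, an ordered pair must use two *different* cards:
# 4 * 3 = 12 equally likely pairs.
pairs = list(permutations(aces, 2))
red_red = [p for p in pairs if p[0] in red and p[1] in red]

print(Fraction(len(red_red), len(pairs)))  # 2/12 = 1/6 by counting

# Same answer from the rule P(H1 and H2) = P(H1) * P(H2 | H1):
p_first_red = Fraction(2, 4)               # two red aces out of four
p_second_red_given_first_red = Fraction(1, 3)  # one red left out of three
print(p_first_red * p_second_red_given_first_red)  # 1/6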
So that first formula for independence is just an instance of this formula, where the conditional probability of the second, given the first, is identical to the probability of the second. That's why this is the generalized formula, and it actually includes the first formula; but when the events are independent, it's just simpler to think of the first formula alone. We can also symbolize this rule. We can say that the probability of hypothesis 1 and hypothesis 2 is equal to the probability of hypothesis 1 times the probability of hypothesis 2 given hypothesis 1, and the straight up-and-down line in this formula is what indicates conditional probability: the little formula Pr(H2 | H1) means the probability of hypothesis 2 given hypothesis 1. So you can use this rule to calculate the probability of any conjunction, whether the events are independent or not; but when they are independent, you might want to use that simpler rule, which is the first version of the rule for the probability of a conjunction.
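The general rule and the way it reduces to the simple rule can be summed up in one small function; this Python sketch (an added illustration, with a hypothetical helper name `p_conjunction`) applies it to both card cases from the lecture:

```python
from fractions import Fraction

def p_conjunction(p_h1, p_h2_given_h1):
    """General rule: P(H1 and H2) = P(H1) * P(H2 | H1)."""
    return p_h1 * p_h2_given_h1

# Dependent case: red ace, then red ace, first card thrown away.
# P(H2 | H1) = 1/3 because only one red ace is left among three cards.
print(p_conjunction(Fraction(1, 2), Fraction(1, 3)))  # 1/6

# Independent case: the card is put back, so P(H2 | H1) = P(H2) = 1/2,
# and the general rule reduces to the simple rule P(H1) * P(H2).
print(p_conjunction(Fraction(1, 2), Fraction(1, 2)))  # 1/4
```

One function covers both cases; independence is just the special case where the conditional probability equals the plain probability.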