Hi. So far in this course we've discussed a wide variety of inductive arguments. First we looked at statistical generalization from a sample. Then we looked at applying a generalization down to a particular case. Then we looked at inference to the best explanation and argument from analogy. And last week we turned to causal reasoning, which involved the positive necessary and sufficient condition tests. In addition, at the very end, we looked at concomitant variation, which involved a kind of manipulation in drawing conclusions about what causes what.

So we've seen a lot of kinds of induction, and that's got to raise the question: what do all these different types of induction have in common that makes them induction? What we saw is that inductive arguments, unlike deductive arguments, are defeasible, and they don't even aim at being valid. Instead, they just try to be strong. But strength comes in different degrees, and we haven't really learned anything yet about what exactly strength is. Well, it turns out that one way of understanding inductive strength is that an argument is strong in proportion to the probability that its conclusion is true, given its premises. So inductive strength can be understood in terms of probability. And so for this week, we're going to turn to the notion of probability, which will help us understand inductive arguments.

But probability is important in a lot of other ways too. Just think about going on a picnic tomorrow. You want to know what the probability of rain is, because if the probability of rain is very high, you might not want to go on your picnic; instead, you'll stay inside. So probability affects our beliefs, and it affects our decisions, and that makes it important to understand probability and try to figure it out. But that's a problem, because if you just rest on your intuitions about probability, you're going to get messed up; you're going to make mistakes. Almost everybody messes up probability until they start thinking about it a little more seriously. Let me give you a few examples where people get confused about probability.

[SOUND] Seven, alright. [SOUND] Seven, alright. [SOUND] Another seven. Unbelievable. Cool. Three sevens in a row, and seven's a winning number. So that's great, but, you know, the odds of getting four sevens in a row are really slim, so I bet the next one's not going to be a seven. Right? No. Just because the first three were sevens doesn't tell you the next one's not going to be. Sure, it's unlikely to get four in a row, but once you've got the first three, the likelihood of getting the fourth seven is just the same as the likelihood of getting a seven on any roll whatsoever. Some people make that mistake: they think that just because they got three in a row, it's not going to come up that way again the fourth time. And other people make the opposite mistake. They say, hey, the dice are hot tonight. I got three sevens in a row, so the next one's going to be a seven too. I'm going to put all my money on seven. Don't do it. You'll lose your money, because even if the first three are sevens, dice don't get hot. The odds of getting a seven on the fourth roll are exactly the same whether or not you got three sevens before it.

Both of these, "three sevens, so the next one's not going to be a seven," and "three sevens, so the next one is going to be a seven," are mistakes, because the odds on the fourth roll are not affected by the first three rolls. People who think otherwise are just going to get fooled by gamblers. And it's not just in gambling; it also shows up in basketball. People think, oh, he's hot, he's made a lot of shots in a row, he's going to make the next one. And then they'll tell that player, you take the shot, because you're hot tonight. Well, that's a little more controversial in basketball, but some statistics suggest that players don't really have hot hands; they just make the percentage of shots that they make, and you're going to get strings, but that doesn't mean they're hot in any way. Like I said, that's controversial, but with dice, at least, we know that dice do not get hot, so don't waste your money making that mistake. That's called the gambler's fallacy, and don't commit it.
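To put some numbers on that point, here's a minimal sketch in Python. It assumes a standard pair of fair six-sided dice, which the lecture doesn't spell out, but the arithmetic makes the independence point concrete: four sevens in a row is unlikely judged in advance, yet once three sevens are on the table, the chance of a seven on the next roll is the same as ever.

```python
from fractions import Fraction
from itertools import product

# Assuming a standard pair of fair six-sided dice (a hypothetical setup;
# the lecture never says exactly what is being rolled).
outcomes = list(product(range(1, 7), repeat=2))   # all 36 equally likely rolls
p_seven = Fraction(sum(a + b == 7 for a, b in outcomes), len(outcomes))
print(p_seven)        # 1/6: the chance of a seven on any single roll

# The chance of four sevens in a row, judged before any dice are thrown,
# really is slim:
print(p_seven ** 4)   # 1/1296

# But the rolls are independent, so after three sevens have already come up,
# the chance that the *next* roll is a seven is still just:
print(p_seven)        # 1/6, exactly the same as always
```

The gap between those two numbers is the whole point: 1/1296 answers "how likely are four sevens in a row, starting from scratch?", while 1/6 answers the only question that matters once three sevens have already happened.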
Okay, a deck of cards, and here I am. I'm going to shuffle the cards. Okay, there we go. There we go, I'm going to shuffle the cards again. We've got these cards all shuffled, and now I'm going to deal out two hands. Okay: a nine, a queen; an eight, a queen; a five, a queen; a jack, a queen; a four, an ace. Wow, look at those hands. That's amazing. Do you think they're about equally probable? What's the likelihood of that hand coming up? Most people would say that's pretty slim; it's unlikely that you're going to get a hand with four queens and an ace. What about this hand? That's the kind of hand I get all the time when I play poker. So most people are going to think that kind of hand is a lot more likely. But actually, they're equally probable: getting exactly this hand and getting exactly that hand have the same probability. The reason people think this one is less likely than that one is that you get this kind of hand a lot in poker, namely a hand that is junk, that's useless, and you get a winning hand like this one not very often. So people think this one is less likely, because that one is more representative of the type of hand that I get when I'm playing, and that most people get when they're playing. That's what's called the representativeness heuristic: you think something is more probable because it's more representative of the kind of thing that you encounter and experience.

Another famous example of the representativeness heuristic comes from Kahneman and Tversky. It's about Linda the bank teller. They tested a large number of subjects. What they did was simply describe Linda: Linda is 31 years old, single, outspoken, and very bright. As a student, she majored in philosophy. She was deeply concerned with issues of discrimination and social justice, and she also participated in anti-nuclear demonstrations. Then they asked their subjects to rank the following statements with respect to probability, that is, which one is more probable than which of the others: Linda is a teacher in an elementary school. Linda works in a bookstore and takes yoga classes. Linda is active in the feminist movement. Linda is a psychiatric social worker. Linda is a bank teller. Linda is an insurance salesperson. And: Linda is a bank teller and is active in the feminist movement. When people ranked all of these, they tended to rank "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller." That can't be. I mean, just think about it.

It has to be less likely that she's a bank teller and active in the feminist movement than simply that she's a bank teller, because every possible state of affairs where she's a bank teller and active in the feminist movement is also one where she's a bank teller, and there can also be possibilities where she's a bank teller but not active in the feminist movement. So it has to be at least as likely that she's a bank teller as that she's both a bank teller and active in the feminist movement. Why do people make this mistake so often? Largely because, when they trust their intuitions about probability, those intuitions are based on what they take to be representative. Someone with a background like Linda's, deeply concerned with issues of discrimination and social justice, is likely to be active in the feminist movement, whereas bank tellers are not typically known for being active in the feminist movement. So it seems more typical for her to be both than for her to just be a bank teller. People base their probability judgments on the representativeness heuristic, and that's what leads to the common mistake.
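Both of those points can be checked with a little arithmetic. Here's a minimal sketch in Python: the first part counts five-card hands to show that any one exact hand, four queens and an ace or a pile of junk, is equally improbable, and the second part uses purely made-up numbers (not from the lecture or from Kahneman and Tversky's study) to illustrate why a conjunction can never be more probable than one of its conjuncts.

```python
from math import comb
from fractions import Fraction

# Any *exact* five-card hand from a well-shuffled deck -- four queens and
# an ace, or a nine, an eight, a five, a jack, and a four -- is one of
# comb(52, 5) equally likely hands.
total_hands = comb(52, 5)
print(total_hands)            # 2598960
print(1 / total_hands)        # ~3.8e-07, the same for every exact hand

# The Linda point is the conjunction rule: whatever A and B are,
# P(A and B) = P(A) * P(B given A), and P(B given A) is at most 1,
# so P(A and B) can never exceed P(A).
# Hypothetical numbers, chosen only for illustration:
p_teller = Fraction(1, 20)              # P(Linda is a bank teller)
p_feminist_given_teller = Fraction(3, 5)  # P(active feminist, given bank teller)
p_both = p_teller * p_feminist_given_teller
print(p_both, p_teller, p_both <= p_teller)   # 3/100 1/20 True
```

The inequality at the end holds no matter what numbers you plug in, which is exactly why ranking the conjunction above the plain bank-teller statement has to be a mistake.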
One final common mistake about probability is illustrated by what has come to be known as the Monty Hall problem, or sometimes the three-door problem. Here's the setup, which comes from an old TV show called Let's Make a Deal. There are three doors on the stage; let's call them door A, door B, and door C. Behind one of the doors is a car, and if you pick the right door, you get to keep the car. But behind each of the other two doors is a goat, and if you pick one of those doors, you go home with a goat. Now, we're assuming that you'd rather have a car than a goat. You might be one of those weird people who wants a goat, but let's assume you want a car and not a goat. Imagine that you pick door A, and then the host, Monty Hall, opens door C and reveals a goat behind it. Now you know that door A and door B are left: one of them has the car, and one of them has a goat. And Monty Hall turns to you and offers you the chance to switch from door A, the one you picked, to door B. The question is, should you switch? This has actually caused a lot of controversy, because some people think you should switch and some people think you shouldn't. I'm not going to tell you, because that's going to be one of our exercises.

The point of this lecture is just to show you that you can't trust your intuitions. Because so many people get the Monty Hall case wrong, fall for the representativeness heuristic, and commit the gambler's fallacy, you need to take probability seriously. In the next lecture we'll start looking at some definitions of what probability is, and in the lectures after that we'll look at rules for probability, because you need that kind of thing if you can't trust your intuitions.