Now that we've seen which features distinguish inductive arguments from deductive arguments, we want to look at the different kinds of inductive arguments one by one. The first two we're going to look at are moving up from a sample to a generalization, and then moving back down from the generalization to a prediction in a particular case.

Generalizations are all around us. Almost all movies have credits. Most popular bands have drummers. Many restaurants are closed on Mondays. Three quarters of police officers are very nice people. Two thirds of books have chapters in them. Half of the people I know like to play sports. And my favorite of all: 87.2% of statistics are made up on the spot. It should come as no surprise that so many generalizations fill our lives, because generalizations can be extremely useful, especially when you need to make a decision. For example, if you're feeling a little nauseous, that is, a little sick to your stomach, then you need to know whether most people with symptoms of your sort have something serious enough that they need to go to a doctor. You also need to know whether doctors can usually help people who have symptoms of that sort. All of those generalizations are relevant to whether or not you want to go to the doctor to see if you can get some help with your sick stomach.

But then the problem is: how can we decide which generalizations to believe? Since these generalizations apply to all instances of a certain sort, you can't check them all out. You have to take a sample of some sort. Imagine that you're running a bakery. You're making jelly doughnuts, and you want to know whether the doughnuts are filled with the right amount of jelly. You can make hundreds of jelly doughnuts, but you're not going to tear them all apart and check every one for how much jelly it has in the middle, because then you'd have no doughnuts left to sell to your customers. Or imagine that you want to buy a car. This time you're the customer. You want to buy a car, and you need to know how often cars of that kind have problems in the first year, because if they have problems a lot during the first year, you don't want to buy that kind of car. But you don't want every car of that sort to have been tested for the first year, because then you couldn't buy a new car; you could only get a used car, since they would all have been used in the test. Or imagine that you want to know what types of trees grew during a certain period of history. So you look at the soil, you dig down, and you count how many pollen grains there are at a certain level that indicates a certain age. You can't check every spot in the field, because if you did, you'd be destroying the entire field. So in these cases, and many other cases, you just have to take samples. You can't test the whole class, and then you have to generalize from the sample to the larger class.

All of these generalizations from samples share a certain form. First, we look at one instance in the sample. We say that the first F is G; that is, the thing fits in the class F and has the property G. Then we check another: the second F is also G, and the third F is also G, and all the rest of the Fs in the sample are G. So we conclude that all Fs are G. Now notice that "all" in the conclusion means everything in the class of Fs. It doesn't mean only the ones in our sample.
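If it helps to see that form outside of prose, here is a minimal sketch in Python built around the lecture's bakery example. Everything in it is an assumption for illustration: the doughnut data, the jelly threshold, and the check function are all made up, not anything from the lecture.

```python
# A toy illustration of generalizing from a sample, using the bakery example.
# The "population" of doughnuts and the jelly check are invented for illustration.

def has_right_amount_of_jelly(doughnut):
    # Stand-in for actually cutting a doughnut open and inspecting it.
    return doughnut["jelly_grams"] >= 10

# Pretend these are the only doughnuts we tore apart (the sample),
# not the hundreds we baked (the whole class of Fs).
sample = [
    {"id": 1, "jelly_grams": 12},
    {"id": 2, "jelly_grams": 11},
    {"id": 3, "jelly_grams": 13},
]

# Premises: the first F is G, the second F is G, ... for every F in the sample.
if all(has_right_amount_of_jelly(d) for d in sample):
    # Conclusion: ALL Fs are G -- a claim about every doughnut we baked,
    # not just the ones we checked. That leap beyond the sample is what
    # makes the argument inductive rather than deductive.
    print("Generalization: all of today's doughnuts have enough jelly.")
```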
So we've started from premises that are only about the sample, which is only a part of the general class, and we've reached a conclusion about the whole class when we say that all Fs are G.

Now, other arguments of the same general type are a little bit different, because you don't always get complete uniformity. You don't always get "all Fs are G." Sometimes the first F is G, and the second F is G, but the third F is not G; the fourth F is G, and the fifth F is G, but the sixth F is not G. So two thirds of the Fs in the sample are G, and then you reach the conclusion that two thirds of all Fs are G. Again, though, you've only looked at the Fs in the sample. You've observed only a small part of the total class of Fs, and you draw a conclusion that two thirds of the overall class, the whole class of Fs, has that property G. That's why it's called a generalization: you start from a smaller sample and reach a conclusion about a much larger class. You generalize to the whole class.

The fact that the argument moves from a small part of the class to the whole class shows that it's inductive. And what does that mean? Well, first, is the argument valid? Think back to one of the examples. Almost all bands that I know of have drummers; therefore, almost all bands have drummers. Is it possible for the premises to be true and the conclusion false? Of course it is, because maybe I only listen to bands that have drummers, but there are a lot of bands out there that don't have drummers, and I just didn't happen to listen to them. That is, they weren't part of my sample.

And is the argument defeasible? That means you can add further information to the premises that will make the argument much less strong; it might even undermine the argument totally. Of course it could, because you could give me all kinds of examples of bands that don't have drummers, and then I would have to change my conclusion that almost all bands have drummers. So this argument is defeasible. Does that mean it can't be strong? No, it could still be strong, and it can come in different degrees of strength. How would you get a stronger argument or a weaker argument in this case? Well, you can have a larger or smaller sample. If I take a very large sample, the argument's going to be stronger. If I take a very small sample, the argument's going to be weaker. So it's not like validity, which is on or off, either valid or not. Instead, the strength of the argument comes in degrees, depending on how big the sample is.

Now, does that mean that since it can't be valid, and can only be strong to a certain degree, it's no good? No, because it's an inductive argument. Inductive arguments don't even try to be valid; they don't even pretend to be valid. That's not what they're supposed to be. So you can't criticize this argument by saying it's not valid. It's doing everything it's supposed to do if it provides you with a strong reason, a strong enough reason, for the conclusion. That's what an inductive argument is supposed to do, so if this argument does that, it's a good argument. Still, even if inductive arguments, including generalizations from samples, don't have to be valid, they do need to be strong. And so we need to figure out how to tell when a generalization from a sample is a strong argument, when it provides a strong reason for the conclusion. To really answer that question, you have to turn to statistics, an area of mathematics, and learn how to analyze the data much more carefully than we'll be able to do here.
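As a small preview of why sample size matters, here is a made-up simulation sketch in Python. It is not from the lecture: the "true" proportion of 0.5, the sample sizes, and the number of repetitions are all assumptions chosen just to show that estimates from bigger samples stray less from the truth, which is one way of seeing why larger samples make the generalization stronger.

```python
# A toy simulation (not from the lecture): why bigger samples support
# stronger generalizations. We invent a population where exactly half
# the Fs are G, then see how far sample-based estimates stray from 0.5.
import random

random.seed(0)
TRUE_PROPORTION = 0.5  # assumed "real" share of Fs that are G

def sample_estimate(sample_size):
    """Draw one random sample and return the share of sampled Fs that are G."""
    draws = [random.random() < TRUE_PROPORTION for _ in range(sample_size)]
    return sum(draws) / sample_size

for n in (10, 100, 1_000):
    estimates = [sample_estimate(n) for _ in range(1_000)]
    worst_error = max(abs(e - TRUE_PROPORTION) for e in estimates)
    print(f"sample size {n:>5}: worst error over 1,000 samples = {worst_error:.3f}")
```

With tiny samples the estimate can be badly off; with large samples it stays close to the true proportion, which is the intuition behind saying the larger-sample argument is stronger.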
But even mentioning the name statistics raises fear in some people. Mark Twain is famous for having said that there are three kinds of lies: there are lies, there are damned lies, and there are statistics. He actually gives credit to Disraeli for having said it first. But the point is that statistics are even worse than damned lies; you can't trust them at all, or so Mark Twain says. On the other hand, some people say: statistics, that's math, it's all about numbers, you can't question that. Here's one example of somebody who suggests such a position: there's still one thing that's irrefutable, and that is the numbers. Numbers don't lie. You can't argue with numbers. They're infallible. Okay, fine. He knows that statistics aren't infallible; he's just being sarcastic. But there are many people who put a lot of faith in the numbers and in statistics. They seem to think that you always have to trust it when a statistician comes up with an answer. We're going to learn that neither of these views is correct. It's not true that statistics are always worse than damned lies, and it's also not true that the numbers are irrefutable. The truth lies somewhere in the middle.