Welcome to this first course in the "Design and Analysis of Algorithms". I imagine many of you are already clear on your reasons for taking this course, but let me begin by justifying the course's existence and giving you several different motivations for learning about algorithms. So what is an algorithm? Well, we're not going to need a precise definition in this course, but essentially, an algorithm is a well-defined set of rules -- a recipe, in effect -- for solving some kind of computational problem. So for example, maybe you're given a bunch of numbers and you want to rearrange them into sorted order. Maybe you're given a road network with an origin and a destination, and you want to compute the shortest path from point A to point B. Maybe you're given a bunch of tasks with deadlines, and you want to know whether or not it's feasible to accomplish all of those tasks by their respective deadlines. So, why study algorithms? Well, first of all, understanding the field of algorithms, and also the related field of data structures, is crucial for doing serious work in pretty much any other branch of computer science. That's the precise reason why, here at Stanford University, this is the course that's required for all of the degrees that the computer science department grants. Be it a bachelor's, a master's, or a Ph.D. degree, we insist that you have mastery of the field of algorithms.
So what are some examples of uses in the rest of computer science? Well, if you're doing routing in a communication network, that piggybacks on classical shortest-path algorithms. The effectiveness of public-key cryptography really rests on that of number-theoretic algorithms. In, say, computer graphics, you need the computational primitives supplied by the study of geometric algorithms. Database indices rely on balanced search-tree data structures, as covered in this course. Computational biology uses dynamic programming algorithms to measure similarity among genomes. And the list goes on and on. A second reason to study algorithms is that they play a key role in modern technological innovation. Obviously, I could give any number of examples here; let me just state one super-obvious one, which is that search engines use a tapestry of algorithms to efficiently compute the relevance of various webpages. The most famous such algorithm, which you may have heard of, is the PageRank algorithm, in use by Google. Indeed, in a December 2010 report to the United States White House, the President's Council of Advisors on Science and Technology argued that, "in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed," as you'd be familiar with in the form of Moore's Law.
Third, although this is getting significantly outside the scope of this course, algorithms are increasingly being used to provide a novel lens on processes outside of computer science and technology. For example, the study of quantum computation has provided a new, computational viewpoint on quantum mechanics. Price fluctuations in economic markets can be fruitfully viewed as an algorithmic process. And even evolution can be usefully thought of as a surprisingly effective search algorithm. The last two reasons I'm going to give you might sound a little flippant, but I think there's more than a grain of truth to both of them. Now, I don't know about you, but back when I was a student, my favorite classes were always the challenging ones. But after I struggled through them, I somehow felt like I had a few more IQ points than when I started. So I hope this course provides a similar experience for many of you: that, on the one hand, it's a bit of a struggle -- you find the concepts challenging -- but perhaps you feel just a tinge smarter after we're done. Finally, I hope that by the end of the course, a constant fraction of you will agree with me that designing and analyzing algorithms is simply fun. It's an endeavor that requires a rare blend of creativity and precision. And it can certainly be frustrating at times. But even more than that, it is addictive.
So let's now descend from these lofty generalities and get much more concrete. And also, let's remember that we've all been learning and using algorithms since we were little kids. So once upon a time, in roughly 3rd grade or so, you learned how to multiply two numbers. Now, you probably weren't thinking in these terms at the time, but multiplying two numbers is certainly a well-defined computational problem, and that procedure you learned back in 3rd grade or so is indeed an algorithm. So let's just make that a little bit more precise. In this computational problem, we're given as input two numbers, let's say each with n digits. And to make things interesting, why don't you think of n as being really quite large -- say, in the thousands. Maybe we're implementing an algorithm that's going to be used in a cryptography application, where you need to keep track of really quite large numbers. So if we call the two input numbers X and Y, the problem is simply to compute their product X times Y. So, a quick digression: I'll certainly be the first to admit that my handwriting is not the greatest. I got a C in penmanship back in elementary school, and I think the teacher was being a little generous. But, you know, it's an acquired taste; trust me, you will get used to it. Okay, back to integer multiplication.
Now, when we talk about procedures for multiplying two numbers, we're going to be interested in counting how many steps are required to execute the multiplication. So how do we count a step? We'll talk more about this later, but for multiplying two numbers, let's just call a step the addition or multiplication of two single-digit numbers. So let's review the integer multiplication algorithm that we learned back in grade school, just by working through a concrete example. Specifically, let's take n = 4 and look at two 4-digit numbers: say 5678 and 1234. Now, as you'll recall, the procedure we learned way back when was just to take each digit of the bottom number and multiply it by each digit of the top number, and then to take each of those n partial products and add them up. So, for example, you start with the 4: you multiply it by 8, you get 32, so you write down the 2 and carry the 3. 4 times 7 is 28; add the 3 to get 31, write down the 1, carry the 3. And so on. So that gives you the first partial product, 22712. Then you do a shift -- you effectively put a 0 in the final digit -- and you repeat that procedure using the next digit, the 3. So again, 3 times 8 is 24, write down the 4, carry the 2, and so on. And you can compute the final two partial products using the 2 and the 1. Having computed all of the partial products, you just add them up to get the final product.
Now, the question I'm interested in is: how much work -- how many primitive operations -- did we do to multiply these two numbers? And, more generally, how many does it require to multiply two n-digit numbers, as a function of n? Well, just to get a ballpark view of what's going on: we started with two n-digit numbers, and at the end of the day we basically filled out a grid of size roughly n by n, give or take a little bit. So just in ballpark terms, it seems that multiplying two n-digit numbers requires essentially a quadratic number of operations -- a small number of operations to fill in each entry of this grid, and the grid is roughly n by n, so that's roughly n^2 operations. In a little more detail, we can look at each of the partial products separately. To compute, say, the first partial product, what did we do? We took the 4 and multiplied it by each of the n digits on top. We also did some carries, which affects things a little bit. But in the end, we did somewhere between n and 2n primitive operations to compute that first partial product. And that's true, of course, for each of the n partial products. So that's roughly n operations for each of the n partial products, and therefore roughly n^2 operations to compute all of the partial products.
Then we have to add them all up at the end, but that takes just an extra constant-times-n primitive operations to do all of those additions. So, summarizing: overall, the number of primitive operations required to multiply two n-digit numbers using this procedure grows quadratically with the length n. So, for example, if you double the length of the two numbers, you expect to do roughly four times as many primitive operations, if you use this iterative algorithm for multiplying two numbers. Now, depending on what type of 3rd grader you were, you might well have accepted this procedure as being the unique, or at least the optimal, way of multiplying two n-digit numbers together. And if you want to be an expert algorithm designer, that kind of obedient attitude is something you're going to have to learn to discard. Here's a favorite quote of mine. It's from a quite early textbook in the field, The Design and Analysis of Computer Algorithms by Aho, Hopcroft, and Ullman. After highlighting a number of algorithm design techniques, they conclude by saying, "perhaps the most important principle for the good algorithm designer is to refuse to be content." So that's a beautifully accurate quote. I might rephrase it more succinctly by just saying: if you want to be a good algorithm designer, you should adopt the following mantra -- you should always be asking, "Can we do better?"
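The grade-school procedure and its operation count can be sketched in code. This is my own illustration, not something from the lecture: it builds each partial product digit by digit and counts single-digit multiplications, which for two n-digit inputs comes out to exactly n^2 (carries and the final additions only add a constant factor on top).

```python
def gradeschool_multiply(x, y):
    """Grade-school multiplication of non-negative integers x and y.
    Returns (product, number of single-digit multiplications performed)."""
    x_digits = [int(d) for d in str(x)][::-1]  # least-significant digit first
    y_digits = [int(d) for d in str(y)][::-1]
    ops = 0
    total = 0
    for i, dy in enumerate(y_digits):          # one partial product per bottom digit
        partial = 0
        for j, dx in enumerate(x_digits):
            partial += dy * dx * 10 ** j       # one single-digit multiplication
            ops += 1
        total += partial * 10 ** i             # the "shift" before adding it in
    return total, ops

product, ops = gradeschool_multiply(5678, 1234)
# product == 7006652, and ops == 16 == 4 * 4, i.e. n^2 for n = 4
```

Doubling the number of digits quadruples the count of single-digit multiplications, matching the quadratic growth described above.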
This is a particularly appropriate question to ask when you're faced with some kind of naive or obvious algorithm for solving a problem, like the 3rd-grade algorithm for integer multiplication. This will come up over and over again: we'll see a problem, there will be an obvious solution, but, with some extra algorithmic ingenuity, by detecting subtle structure in the problem, we'll be able to do significantly, qualitatively better than the naive or obvious solution. So let's apply this cocky attitude to the problem of multiplying two integers. Let's just suppose, as a working hypothesis, that there is some procedure which is better than what we learned back in elementary school. What could it possibly look like? Well, as someone with some programming experience, you know that there are not only iterative algorithms -- iterative programs, like the one we just outlined for integer multiplication -- but also recursive programs. These are programs that call themselves, typically on an input of a similar form but with smaller size. So, as a working hypothesis, let's imagine that there's some interesting recursive approach to multiplying two integers. What must such an algorithm look like? Well, it must somehow fundamentally involve calling itself on smaller problems -- that is, on smaller numbers, numbers with fewer digits.
So what kind of decomposition of numbers could be used to enable this kind of recursive approach? Well, if you think about it, there's a pretty natural way to break a number with a bunch of digits into a couple of numbers with fewer digits: just take the first half of the digits -- the first n/2 digits -- and regard that as a number in its own right, and likewise regard the second half of the digits as a number in its own right. So perhaps this decomposition will have some use in enabling a recursive attack on how to multiply two integers. We're going to investigate that in more detail on this next slide. Given X and Y, each with n digits, we're going to represent X in terms of the first half of its digits, A, and the second half of its digits, B. Similarly, we will write Y, the second input, in terms of its first and second halves of digits, C and D. And in this expansion, A, B, C, and D are all n/2-digit numbers. I'm just assuming for convenience here that n is even; this would extend to the case where n is odd in the natural way, where you break it into n/2 rounded up and n/2 rounded down. So in our previous example, where X was 5678 and Y was 1234, we would have A equal to 56 -- the first two of the four digits of X -- and B would be the other two digits, 78. And similarly for C and D in the decomposition of Y: C is 12 and D is 34.
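The split of X into its halves A and B can be written as one arithmetic identity: X = 10^(n/2) * A + B. As a quick illustration of mine (not from the lecture), Python's built-in `divmod` performs exactly this decomposition when we divide by 10^(n/2):

```python
def split_halves(x, n):
    """Split an n-digit number x into its high and low halves,
    so that x == 10**(n//2) * high + low.  Assumes n is even."""
    half = 10 ** (n // 2)
    return divmod(x, half)   # (high half A, low half B)

a, b = split_halves(5678, 4)   # a == 56, b == 78
c, d = split_halves(1234, 4)   # c == 12, d == 34
```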
So now, in a purely mechanical way, we can expand the product X times Y in terms of this representation -- in terms of these smaller numbers A, B, C, and D. So X times Y, by this representation, is just the same thing as (10^(n/2) A + B) times (10^(n/2) C + D). And if we expand this and group like terms, we get 10^n times AC, plus 10^(n/2) times (AD + BC) -- that's combining two terms from the expansion -- plus BD. I'm going to call this expression, circled in green, star (*). Now, what I want you to think about, and make sure you understand, is that this expansion that I circled and called star immediately suggests a recursive algorithm for multiplying two integers. Now, it's not clear whether that algorithm is going to be fast or slow, whether this is a good idea or a crackpot idea, but certainly it's a legitimate recursive approach to multiplying two integers. Specifically, the relevant products in star -- namely AC, AD, BC, and BD -- all involve smaller numbers than what we started with. Those are all n/2-digit numbers, whereas our original inputs had n digits. So we can legitimately solve those subproblems recursively: we can invoke the same algorithm to compute those products in a recursive way. Now, suppose we've invoked our multiplication algorithm recursively four times to compute these four relevant products.
Then we can just compute the expression in star in the obvious way. We take AD and BC and add them together, using the standard grade-school iterative addition algorithm. Then we pad both AC and the result of that addition with a bunch of zeroes -- n zeroes or n/2 zeroes, as appropriate. Now we have these three constituent terms, and we just add them up, again using the grade-school algorithm, to compute the overall expression. To keep this intro lecture brisk, I haven't discussed the base case of this recursive algorithm. As hopefully you all know, recursive algorithms need base cases: at some point, the recursion has got to stop, otherwise your algorithm will never terminate. So in multiplication, all you do is this: if you are passed as input two single-digit numbers, you multiply them in one primitive operation and return the result; if the numbers have two or more digits, you recurse in the way we described here. So who knows whether this recursive algorithm is a good idea or not? It's totally not obvious. You shouldn't necessarily even have any intuition about how it compares to the iterative algorithm you learned back in elementary school.
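The four-recursive-call algorithm just described can be sketched as follows. This is my own Python rendering of the idea, not code from the lecture: it uses the expansion X*Y = 10^n AC + 10^(n/2)(AD + BC) + BD, with single-digit inputs as the base case. (It leans on Python arithmetic for the splits, shifts, and additions; a faithful implementation would do those digit by digit.)

```python
def recursive_multiply(x, y):
    """Multiply non-negative integers x and y using the four-call
    recursive scheme: x*y = half^2 * AC + half * (AD + BC) + BD."""
    if x < 10 or y < 10:                 # base case: a single-digit operand
        return x * y
    n = max(len(str(x)), len(str(y)))    # digit length of the larger input
    half = 10 ** (n // 2)
    a, b = divmod(x, half)               # x == half * a + b
    c, d = divmod(y, half)               # y == half * c + d
    ac = recursive_multiply(a, c)
    ad = recursive_multiply(a, d)
    bc = recursive_multiply(b, c)
    bd = recursive_multiply(b, d)
    return ac * half * half + (ad + bc) * half + bd
```

For example, `recursive_multiply(5678, 1234)` splits into A = 56, B = 78, C = 12, D = 34 and recurses on those four products.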
So, in lieu of answering that question -- although we will answer it, several lectures hence -- I want to show you a second, still more clever, recursive algorithm for integer multiplication. This goes by the name of Karatsuba multiplication, after its inventor, although really the key point is a trick pointed out by the mathematician Gauss in the early 19th century. To explain this algorithm, let me write once again our expansion in terms of the n/2-digit numbers. So we have the product X times Y, which we wrote as 10^n AC + 10^(n/2) (AD + BC) + BD. Now, the previous approach was, in recognition of the four products that we see in this expression, to make four recursive calls. So here's the trick, and this is really what Gauss figured out over 200 years ago: it seems there are four products, but fundamentally, in this expression, there are really only three quantities that we care about. There's AC, the coefficient of the first term; there's AD + BC, the coefficient of the middle term; and there's BD. So we care about AD and BC as individual quantities only inasmuch as we care about their sum -- we really don't care about them individually. And that motivates the question: perhaps we can somehow uncover these three quantities using only three recursive calls, rather than four. In fact we can, and that's Karatsuba multiplication.
So, to be more specific, our first two recursive calls are shared with the previous recursive algorithm: we go ahead and compute AC and BD recursively. The third step is the clever one, where we recursively compute the product of the quantity (A + B) with the quantity (C + D). Now, expanding, what are we going to get when we recursively compute this product? We're going to get AC + AD + BC + BD. And here is Gauss's trick, or observation: take the result of the third recursive call and subtract each of the first two -- what happens? Well, the AC is going to cancel the AC, the BD is going to cancel the BD, and we'll be left with AD + BC, which is exactly the middle coefficient that we cared about. So we can indeed compute the sum of AD and BC without separately computing each of them, and that's exactly what we wanted. So what does this buy us? This gives us a second recursive algorithm for multiplying two integers that makes use of fewer recursive calls -- only three, rather than four. And as before, there's a little bit of work to do beyond the recursive calls, but not much: you just have to do some padding by zeroes and add up the results. So there are two points I want you to appreciate about all these integer multiplication algorithms. The first point is that the space of algorithm designs is incredibly rich -- much richer than you might initially have had intuition for.
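Gauss's trick turns the previous sketch into Karatsuba multiplication with a one-line change in the recursion. Again, this is my own illustrative Python, not the lecture's code: the third recursive call computes (A + B)(C + D), and subtracting AC and BD from it recovers AD + BC.

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three recursive calls instead of four,
    using Gauss's identity (A+B)(C+D) - AC - BD == AD + BC."""
    if x < 10 or y < 10:                 # base case: a single-digit operand
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = 10 ** (n // 2)
    a, b = divmod(x, half)               # x == half * a + b
    c, d = divmod(y, half)               # y == half * c + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    cross = karatsuba(a + b, c + d)      # == AC + AD + BC + BD
    return ac * half * half + (cross - ac - bd) * half + bd
```

The saving of one recursive call per level is what ultimately makes this asymptotically faster than the grade-school method, as the analysis in later lectures will show.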
Multiplying two integers -- what else could you possibly do other than the elementary school algorithm? Well, what have we seen? You can do a lot of other things. In particular, the second recursive algorithm looks nothing like what we learned back in grade school. And you'll see this over and over again: with sufficient ingenuity, sufficient cleverness, you can solve problems in ways much different from, and sometimes much better than, the naive, straightforward algorithm. The second takeaway point from this discussion of integer multiplication algorithms is that sometimes you have interesting algorithmic ideas for which it's totally not obvious what kinds of properties, or what kinds of performance guarantees, those algorithms enjoy. So in particular, confronted with these three different integer multiplication algorithms, how can we not wonder which of the three is the best? Which of them requires the fewest primitive operations to multiply two integers? In particular, does one of these novel recursive approaches do better than the algorithm we learned back in elementary school? The answer to that question is totally non-obvious, and it requires non-trivial mathematical analysis to answer.
In the upcoming lectures, I'll provide you with the tools to not only precisely understand and answer this question, but, more generally, to predict the running times of an entire genre of divide-and-conquer algorithms, like Karatsuba multiplication. So, stay tuned.