Having completed our first analysis of an algorithm, namely an upper bound on the running time of the merge sort algorithm, what I want to do next is take a step back and be explicit about three assumptions, three biases, that we made when we did this analysis of merge sort and interpreted the results. These three assumptions we will adopt as guiding principles for how to reason about algorithms, and for how to define a so-called fast algorithm, for the rest of the course.

The first guiding principle is that we used what's often called worst-case analysis. By worst-case analysis I simply mean that our upper bound of 6N log N + 6N applies to the number of lines executed for every single input array of length N. We made absolutely no assumptions about the input, where it comes from, or what it looks like, beyond what the input length N was. Put differently, if hypothetically we had some adversary whose sole purpose in life was to concoct an input designed to make our algorithm run as slowly as possible, the worst this adversary could do is upper-bounded by this same number, 6N log N + 6N.

Now, this sort of worst-case guarantee popped out so naturally from our analysis of MergeSort that you might well be wondering what else you could do. Two other methods of analysis, which do have their place although we won't really discuss them in this course, are average-case analysis and the use of a set of prespecified benchmarks. By average-case analysis I mean that you analyze the average running time of an algorithm under some assumption about the relative frequencies of different inputs. For example, in the sorting problem, one thing you could do (although it is not what we did here) is assume that every possible input array is equally likely, and then analyze the average running time of the algorithm. By benchmarks I just mean that one agrees up front on a set of, say, ten or twenty benchmark inputs which are thought to represent practical or typical inputs for the algorithm.

Now, both average-case analysis and benchmarks are useful in certain settings, but for them to make sense you really have to have domain knowledge about your problem: you need some understanding of which inputs are more common than others, and which inputs better represent typical inputs than others.
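Just to make those three options concrete, here is a minimal sketch, not from the lecture, that measures all three for one algorithm. It uses insertion sort purely because its cost (the number of comparisons) is easy to count, restricts attention to length-7 inputs so that enumerating every input stays cheap, and the two "benchmark" inputs are arbitrary choices of mine.

```python
from itertools import permutations
from statistics import mean

def insertion_sort_cost(a):
    """Return the number of comparisons insertion sort performs on input a."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1                    # compare a[j-1] with a[j]
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 7
all_inputs = list(permutations(range(n)))                  # every input of length 7
worst_case   = max(insertion_sort_cost(p) for p in all_inputs)
average_case = mean(insertion_sort_cost(p) for p in all_inputs)   # uniform distribution
benchmarks   = [tuple(range(n)), tuple(reversed(range(n)))]       # hand-picked inputs
benchmark_costs = [insertion_sort_cost(b) for b in benchmarks]

print(worst_case, round(average_case, 2), benchmark_costs)
```

With n = 7 the worst case comes out to 21 comparisons (the reverse-sorted input), while the average over all 5,040 permutations is noticeably smaller; which of these numbers you care about is exactly the modeling choice being discussed here.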
By contrast, in worst-case analysis you are, by definition, making absolutely no assumptions about where the input comes from. As a result, worst-case analysis is particularly appropriate for general-purpose subroutines, subroutines that you design without any knowledge of how they will be used or what kinds of inputs they will be used on. And happily, another bonus of doing worst-case analysis, as we will in this course, is that it's usually much more tractable mathematically than trying to analyze the average performance of an algorithm under some distribution over inputs, or to understand the detailed behavior of an algorithm on a particular set of benchmark inputs. This mathematical tractability was reflected in our merge sort analysis, where we had no a priori goal of analyzing the worst case per se; it's simply what popped out naturally from our reasoning about the algorithm's running time.

The second and third guiding principles are closely related. The second is that in this course, when we analyze algorithms, we won't worry unduly about small constant factors or lower-order terms. We saw this philosophy at work very early on, in our analysis of merge sort, when we discussed the number of lines of code that the Merge subroutine requires. We first upper-bounded it by 4M + 2 for an array of length M, and then we said, eh, let's just think of it as 6M instead; let's have a simpler, sloppier upper bound and work with that. That was already an example of not worrying about small changes in the constant factor.

Now, the question you should be wondering about is: why do we do this, and can we really get away with it? Let me tell you about the justifications for this kind of principle. The first motivation is clear, and we used it already in our MergeSort analysis: it is simply way easier mathematically if we don't have to precisely pin down the leading constant factors and lower-order terms. The second justification is a little less obvious, but it's extremely important. I claim that, given the level at which we're describing algorithms in this course, it would be totally inappropriate to obsess unduly over what the constant factors are. Recall our discussion of the Merge subroutine: we wrote that subroutine down in pseudocode, and we gave an upper bound of 4M + 2 on the number of lines of code executed given an input of length M.
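Here's a rough Python sketch of that kind of bookkeeping, with a counting convention I chose myself; a different but equally defensible convention would land on a different total, which is really the whole point.

```python
def merge_with_count(left, right):
    """Merge two sorted lists; return (merged_list, operation_count)."""
    ops = 2                        # initializing the two pointers i and j
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # one comparison, one copy to the output, one output-position bump
        # (hidden inside append), one input-pointer increment
        ops += 4
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    # leftover elements are copied without a comparison: 2 ops each here
    leftover = left[i:] + right[j:]
    out.extend(leftover)
    ops += 2 * len(leftover)
    return out, ops

merged, ops = merge_with_count([1, 4, 7], [2, 3, 9])
print(merged, ops)   # 24 under this convention, vs. the 4m + 2 = 26 bound for m = 6
```

Count the loop test or the final copy differently and you get a slightly different number; none of those choices changes anything we actually care about.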
We also noted that it was somewhat ambiguous exactly how many lines of code we should count it as, depending on how you count loop increments and so on. So even there, small constant factors could creep in, given the under-specification of the pseudocode. Depending on how that pseudocode gets translated into an actual programming language, like C or Java, you'll see the number of lines of code deviate even further, not by a lot, but again by small constant factors. And when such a program is then compiled down into machine code, you'll see even greater variance, depending on the exact processor, the compiler, the compiler optimizations, the programmer's implementation, and so on. So, to summarize: because we're going to describe algorithms at a level that transcends any particular programming language, it would be inappropriate to specify precise constants. The precise constants will ultimately be determined by more machine-dependent aspects, like who the programmer is, what the compiler is, what the processor is, and so on.

The third justification is, frankly, that we're just going to be able to get away with it. One might be concerned that ignoring things like small constant factors leads us astray, that we wind up deriving results which suggest an algorithm is fast when it's really slow in practice, or vice versa. But for the problems we discuss in this course, we'll get extremely accurate predictive power even though we won't be keeping track of lower-order terms and constant factors. When the mathematical analysis we do suggests that an algorithm is fast, indeed it will be; when it suggests that it's not fast, indeed that will be the case. So we lose a little bit of granularity of information, but we don't lose what we really care about, which is accurate guidance about which algorithms are going to be faster than others.

The first two justifications, I think, are pretty self-evident. This third justification is more of an assertion, but it's one we'll be backing up over and over again as we proceed through this course. Now, don't get me wrong, I'm not saying constant factors aren't important in practice. Obviously, for crucial programs the constant factors are hugely important. If you're running the sort of crucial loop that your startup's survival depends on, by all means optimize the constants like crazy.
The point is just that understanding tiny constant factors in the analysis is an inappropriate level of granularity for the kind of algorithm analysis we're going to be doing in this course.

Okay, let's move on to the third and final guiding principle. The third principle is that we're going to use what's called asymptotic analysis, by which I mean we will focus on the case of large input sizes: the performance of an algorithm as the size N of the input grows large, that is, tends to infinity. Now, this focus on large input sizes was already evident when we interpreted our bound on MergeSort. How did we describe that bound? We said, oh well, it needs a number of operations proportional to N log N, a constant factor times N log N. And we very cavalierly declared that this was better than any algorithm whose running time depends quadratically on the input size. So, for example, we argued that MergeSort is a better, faster algorithm than something like insertion sort, without actually discussing the constant factors at all.

Mathematically, we were saying that the running time of merge sort, which we can represent as the function 6N log₂ N + 6N, is better than any function with a quadratic dependence on N, even one with a small constant like, say, ½N², which might be roughly the running time of insertion sort. And this is a mathematical statement that is true if and only if N is sufficiently large. Once N grows large, it's certainly true that the expression on the left is smaller than the expression on the right; but for small N, the expression on the right is actually going to be smaller, because of its smaller leading constant. So, in saying that MergeSort is superior to insertion sort, the bias is that we're focusing on problems with large N.

So the question you should be asking is: is that reasonable? Is it a justified assumption to focus on large input sizes? And the answer is certainly yes. The reason we focus on large input sizes is that, frankly, those are the only problems which are at all interesting. If all you need to do is sort 100 numbers, use whatever method you want and it's going to happen instantaneously on modern computers. You don't need to know, say, the divide-and-conquer paradigm if all you need to do is sort 100 numbers.
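Written out symbolically (keeping in mind that ½N² is only the lecture's rough stand-in for insertion sort's running time, not an exact count), the claim from a moment ago is:

```latex
6\,n\log_2 n + 6n \;\le\; \tfrac{1}{2}\,n^2 \qquad \text{for all sufficiently large } n
```

while the reverse inequality holds for small n; for whole-number inputs the crossover sits right around n = 90, which is exactly what the plots coming up will show.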
Now, one thing you might be wondering is whether, with computers getting faster all the time according to Moore's Law, it even matters to think about algorithmic analysis, whether eventually all problem sizes will just be trivially solvable on super-fast computers. In fact, the opposite is true. Moore's Law, with computers getting faster, actually means that our computational ambitions will naturally grow: we naturally focus on ever-larger problem sizes, and the gulf between an N² algorithm and an N log N algorithm will become ever wider.

A different way to think about it is in terms of how much bigger a problem size you can solve as computers get faster. If you're using an algorithm whose running time is proportional to the input size, then when computers get faster by a factor of four, you can solve problems that are a factor of four larger. Whereas if you're using an algorithm whose running time is proportional to the square of the input size, then when computers get faster by a factor of four, you can only solve double the problem size. And there are even starker examples of this gulf between different algorithmic approaches as time goes on.

To drive this point home, let me show you a couple of graphs. What we're looking at here is a graph of two functions. The solid curve is the upper bound that we proved on MergeSort, namely 6N log₂ N + 6N. The dotted curve is an estimate, a rather generous estimate, of the running time of insertion sort, namely ½N². And we see in the graph exactly the behavior we discussed earlier: for small N, down here, ½N² is actually the smaller function, because it has the smaller leading constant. That is true up to a crossing point of maybe 90 or so. Beyond N = 90, the quadratic growth of the N² term overwhelms the fact that it had a smaller constant, and it starts getting bigger than the other function, 6N log N + 6N. So in the regime below 90 the graph predicts that insertion sort will be faster, and in the regime above 90 it predicts that MergeSort will be faster.
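If you want to reproduce that crossover rather than eyeball it from the plot, a few lines of Python will do; this is my own sketch, not course code, and it compares exactly the two functions just described.

```python
import math

def merge_sort_bound(n):
    """The upper bound from the lecture: 6 n log2(n) + 6 n."""
    return 6 * n * math.log2(n) + 6 * n

def insertion_sort_estimate(n):
    """The lecture's rough estimate for insertion sort: (1/2) n^2."""
    return 0.5 * n * n

# First whole number at which the merge sort bound dips below the quadratic.
crossover = next(n for n in range(2, 10_000)
                 if merge_sort_bound(n) <= insertion_sort_estimate(n))
print(crossover)                      # 90 with these particular constants

for n in (30, crossover, 1_500):      # below, at, and well past the crossover
    print(n, round(merge_sort_bound(n)), round(insertion_sort_estimate(n)))
```

With these constants the two curves cross at n = 90, and by n = 1,500 the quadratic estimate is already more than ten times the MergeSort bound, which is the widening gulf the second plot shows.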
Now here's what's interesting: let's rescale the x-axis and look well beyond this crossing point of 90, increasing it by an order of magnitude, up to arrays of size 1500. And I want to emphasize that these are still very small problem sizes; if all you need to do is sort arrays of size 1500, you really don't need to know divide-and-conquer or anything else we'll talk about. That's a pretty trivial problem on modern computers. What we're seeing is that even for very modest problem sizes, here an array of size 1500, the quadratic dependence in the insertion sort bound more than dwarfs the fact that it had a lower constant factor. So in this larger regime, the gulf between the two algorithms keeps growing. And of course, if I increased the scale another 10x, or 100x, or 1000x, to get to genuinely interesting problem sizes, the gap between these two algorithms would be even bigger; it would be huge.

That said, I'm not saying you should be completely ignorant of constant factors when you implement algorithms. It's still good to have a general sense of what these constant factors are. For example, in the highly tuned versions of merge sort that you'll find in many programming libraries, the algorithm will actually switch from merge sort over to insertion sort, because of the difference in constant factors, once the problem size drops below some particular threshold, say seven elements or something like that. So for small problem sizes you use the algorithm with the smaller constant factors, namely insertion sort, and for larger problem sizes you use the algorithm with the better rate of growth, namely merge sort. (I'll show a quick sketch of such a hybrid right after this recap.)

So, to review: our first guiding principle is that we're going to pursue worst-case analysis. We're going to look at bounds on the performance, on the running time, of an algorithm which make no domain assumptions, no assumptions about which input of a given length the algorithm is provided. The second guiding principle is that we're not going to focus on constant factors or lower-order terms; that would be inappropriate given the level of granularity at which we're describing algorithms. And the third is that we're going to focus on the rate of growth of algorithms for large problem sizes. Putting these three principles together, we get a mathematical definition of a fast algorithm.
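Before unpacking that definition, here is the promised sketch of the kind of library hybrid mentioned a moment ago. The cutoff of seven is the lecture's ballpark figure; real libraries tune this constant, and many other details, empirically, so treat this purely as an illustration of the idea.

```python
CUTOFF = 7    # illustrative threshold, not a claim about any particular library

def insertion_sort(a):
    """Sort the list a in place and return it; fast for tiny inputs."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_merge_sort(a):
    # Small inputs: the smaller constant factor of insertion sort wins.
    if len(a) <= CUTOFF:
        return insertion_sort(list(a))
    # Larger inputs: the better growth rate of merge sort wins.
    mid = len(a) // 2
    left, right = hybrid_merge_sort(a[:mid]), hybrid_merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(hybrid_merge_sort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))
```

The only role of the cutoff is to hand small subarrays to the algorithm with the smaller constant factor; asymptotically the hybrid is still an N log N algorithm.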
Back to the definition: namely, we're going to pursue algorithms whose worst-case running time grows slowly as a function of the input size. Let me tell you how you should interpret what I just wrote down in this box. On the left-hand side is clearly what we want: we want algorithms which run quickly if we implement them. And on the right-hand side is a proposed mathematical surrogate for a fast algorithm. The left-hand side is not a mathematical definition; the right-hand side is, as will become clearer in the next set of lectures. So we're identifying fast algorithms with those that have good asymptotic running time, running time which grows slowly with the input size.

Now, what would we want from a mathematical definition? We'd want a sweet spot. On the one hand, we want something we can actually reason about; this is why we zoom out and squint and ignore things like constant factors and lower-order terms. We can't keep track of everything, otherwise we'd never be able to analyze anything. On the other hand, we don't want to throw out the baby with the bathwater: we want to retain predictive power. And for the problems we're going to talk about in this course, this definition, worst-case analysis of the asymptotic running time, turns out to be the sweet spot for reasoning about algorithms. We'll be able to prove lots of theorems and establish a lot of performance guarantees for fundamental algorithms, but at the same time we'll have good predictive power: what the theory advocates will, in fact, be algorithms that are well known to be fast in practice.

So the final explanation I owe you is: what do I mean by the running time growing slowly with respect to the input size? Well, the answer depends a little on the context, but for almost all of the problems we're going to discuss, the holy grail will be what's called a linear-time algorithm, an algorithm whose number of instructions grows proportionally to the input size. We won't always be able to achieve linear time, but that's, in some sense, the best-case scenario. Notice that linear time is even better than what we achieved with our merge sort algorithm: MergeSort runs a little bit superlinearly, it's N log N, where N is the input size.
If possible, we'd love linear time. It's not always going to be achievable, but that is what we will aspire to for most of the problems we'll discuss in this course.

Looking ahead, the next series of videos has two goals. First, on the analysis side, I'll describe formally what I mean by asymptotic running time: I'll introduce big-Oh notation and its variants, explain their mathematical definitions, and give a number of examples. On the design side, we'll get more experience applying the divide-and-conquer paradigm to further problems. See you then.