[MUSIC] >> Finally, we need to address a technical issue that we hinted at when we initialized the weights of the data points to 1 over n, when we had uniform weights. Which is that we should be normalizing the weights of the data points throughout the iterations. So, for example, if you take a data point xi that is often a mistake, we're multiplying its weight by a number greater than one again, and again, and again. Let's say 2 times 2 times 2 times 2, and that weight can get extremely large. On the other hand, if you take a data point xi that's often correct, you multiply its weight by some number less than one, say a half. So you keep going by a half, a half, a half, and that weight can get really, really small. This problem can lead to numerical instabilities in the approach, and so your computer might not behave very well with these extreme weights. So what we do is, after each iteration, we go ahead and normalize the weights of all data points so they add up to one. Basically, we divide each alpha i by the sum over all data points of alpha i. This approach keeps the weights in a reasonable range and avoids numerical instability.

So let's summarize the AdaBoost algorithm. We start with even weights, uniform weights, equal weights for all data points. We learn a classifier f_t. We find its coefficient depending on how good it is in terms of weighted error. Then we update the weights to weigh mistakes more than the points we got correct. And finally, we normalize the weights by dividing each value by the total sum of the weights; this normalization is of practical importance. So this is the whole AdaBoost algorithm. It's beautiful, works extremely well, and is really easy to use. >> [MUSIC]
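To make that summary concrete, here is a minimal sketch of the loop in Python, assuming scikit-learn decision stumps as the weak learners; the names adaboost, n_rounds, stumps, and coefficients are illustrative, not from the lecture, and the coefficient formula is the standard AdaBoost choice based on the weighted error.

```python
# Sketch of AdaBoost with the per-iteration weight normalization discussed above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=10):
    """Train AdaBoost on labels y in {-1, +1}; returns (stumps, coefficients)."""
    n = len(y)
    alpha = np.full(n, 1.0 / n)          # start with uniform weights 1/n
    stumps, coefficients = [], []

    for t in range(n_rounds):
        # Learn a weak classifier f_t on the weighted data.
        f_t = DecisionTreeClassifier(max_depth=1)
        f_t.fit(X, y, sample_weight=alpha)
        pred = f_t.predict(X)

        # Weighted error of f_t, then its coefficient w_t (standard AdaBoost formula).
        mistakes = (pred != y)
        weighted_error = np.sum(alpha[mistakes]) / np.sum(alpha)
        weighted_error = np.clip(weighted_error, 1e-10, 1 - 1e-10)  # guard the log
        w_t = 0.5 * np.log((1.0 - weighted_error) / weighted_error)

        # Update weights: multiply up on mistakes, down on correct points.
        alpha *= np.exp(np.where(mistakes, w_t, -w_t))

        # Normalize so the weights sum to one -- the step that keeps them in a
        # reasonable range and avoids numerical instability.
        alpha /= np.sum(alpha)

        stumps.append(f_t)
        coefficients.append(w_t)

    return stumps, coefficients
```

The final prediction would combine the stumps weighted by their coefficients, e.g. the sign of the coefficient-weighted sum of their predictions.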