[MUSIC] Now that we've defined a notation, let's go back to our decision boundary example and look at the impact of the coefficients we've learned on the decision boundary we obtain.

In the example that we had, the score was defined by 1.0 times #awesome - 1.5 times #awful. That means that W1 was 1.0, W2 was -1.5, and W0 was 0. I didn't show W0 there; leaving it out was implicitly saying that it's 0. And that's how we got a decision boundary where the score below the line was greater than 0 and the score above the line was less than 0, and that's what made the predictions positive on one side and negative on the other.

Now, let's say that instead I had learned that the coefficient W0 was 1.0, instead of 0. What does that mean? It means that our score function now has an extra constant term: 1.0 + 1.0 times #awesome - 1.5 times #awful. So what happens to the decision boundary? Well, that line gets shifted up. And so if you look at that point on the lower left, which is close to the origin (0, 0), which before we predicted to be a negative review, after we make that change we now predict it to be a positive review, so it turns from orange to blue.

On the other hand, if you take the coefficient on #awful, which is now -1.5, and make it more negative, say -3.0, so awfuls are just really awful, what happens to our equation? Well, the -1.5 gets replaced by -3.0, so it becomes 1.0 + 1.0 times #awesome - 3.0 times #awful. And the decision boundary tilts down a little bit. So if you look at that point that was on the positive prediction side, it gets shifted to the other side of the decision boundary and it turns from blue to orange, and that's because it has two awfuls in it, and awfuls are just really awful. Before, they were counterbalanced by the four awesomes in that data point, but now the four awesomes can't counterbalance the two awfuls. [MUSIC]
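To make the arithmetic behind those two flips concrete, here is a minimal Python sketch of the scoring rule from the lecture, Score = W0 + W1 * #awesome + W2 * #awful. The point with 4 awesomes and 2 awfuls comes straight from the example; the point near the origin is assumed here to be (1 awesome, 1 awful) as a stand-in for the lower-left point on the plot.

```python
def score(w0, w1, w2, n_awesome, n_awful):
    """Linear score; we predict positive when the score is greater than 0."""
    return w0 + w1 * n_awesome + w2 * n_awful

def predict(s):
    return "positive" if s > 0 else "negative"

# Original coefficients: W0 = 0, W1 = 1.0, W2 = -1.5.
# The point near the origin (assumed to be 1 awesome, 1 awful) is negative.
print(predict(score(0.0, 1.0, -1.5, 1, 1)))   # negative (score = -0.5)

# Raise the intercept to W0 = 1.0: the boundary shifts up, and the same
# point flips to a positive prediction (orange to blue).
print(predict(score(1.0, 1.0, -1.5, 1, 1)))   # positive (score = 0.5)

# Make W2 more negative (-3.0): the review with 4 awesomes and 2 awfuls
# flips from positive to negative (blue to orange).
print(predict(score(1.0, 1.0, -1.5, 4, 2)))   # positive (score = 2.0)
print(predict(score(1.0, 1.0, -3.0, 4, 2)))   # negative (score = -1.0)
```

Notice the two different effects: changing W0 adds the same constant to every point's score, so the boundary shifts without changing its slope, while changing W2 rescales the contribution of #awful, which changes the slope and tilts the line.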