[MUSIC] Let's train the sentiment classifier. We're going to do this in two steps.

First, we'll do a train/test split of the data, just like we talked about in the regression class and just like we did in the regression notebook. We'll take the products table and call random_split on it, putting 80% of the data into the training set and 20% into the test set. And just so you can reproduce this at home, I'm going to set seed=0. Normally you wouldn't do this, you'd pick some other random seed, but I want the seed to be fixed so that when you run it, you get exactly the same results I did. So that's our first step, a train/test split of this data set, and now we're ready to build that famous sentiment model.

Here we're going to use GraphLab Create, and in particular a classifier called the logistic classifier. In the course on classification, we'll learn a lot about different kinds of classifiers: logistic regression (which is this one), support vector machines, decision trees, and others. But let's start with just the logistic classifier. You can type .create after its name, and it'll create the classifier for you. As input, it takes a few parameters: the training data; the target, which is the thing we're trying to predict, here the sentiment column; and the features to use, which for us is just the word_count column, the new column we created above. Finally, I'm going to give it a validation set, which will be my test data, so validation_set=test_data.

Now we execute the cell, and we'll be building a sentiment classifier model. It only takes a few seconds, and here we go, it's done. You'll see it iterating over the data, and the validation accuracy seems to get better and better as it goes along. But let's actually do a proper evaluation. [MUSIC]
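A minimal sketch of the two steps described above, assuming GraphLab Create is installed and that products is the SFrame with the word_count column built earlier in the notebook:

```python
import graphlab

# Step 1: split products into 80% training data and 20% test data.
# seed=0 makes the split reproducible, matching the lecture.
train_data, test_data = products.random_split(0.8, seed=0)

# Step 2: train a logistic classifier that predicts the sentiment
# column from the word_count features, reporting validation
# accuracy on test_data as training progresses.
sentiment_model = graphlab.logistic_classifier.create(
    train_data,
    target='sentiment',
    features=['word_count'],
    validation_set=test_data)
```

Passing validation_set=test_data is what makes the validation accuracy appear in the training progress output; by default (validation_set='auto'), create would carve a small validation split out of the training data on its own.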