So we're decreasing the font, and we're going to train a basic model on this data set. The first thing I'm going to do is not use the deep features but just the raw pixels of the images, train a classifier on those pixels, and see how well it does. So let's just go ahead and do it: #Train a classifier. I misspelled it here, so: #Train a classifier on the raw image pixels. These are going to be my features, the raw image pixels, not those deep features, and I'm going to call this the raw_pixel_model. For the classifier, I could use any classifier and the performance would be about the same; I'm going to use logistic regression, which is also what we used in the sentiment analysis notebook. In logistic regression, we give it a data set, which is going to be the training data, image_train. I tell it what the target is: the target is a column in this data set called label, and that column has the label for each image. And I tell it what features to use: in this case, the features are a column called image_array. So I'm going to run this.

Wait, there's an error. Oh yes, I forgot one thing, and you should have reminded me: in GraphLab Create, the verb you use to create a model is create. So graphlab.logistic_classifier.create creates that classifier. Now it should execute, and we're using the raw image pixels to build a classifier. Just as a quick review, I don't expect this to be very good, but let's go ahead and do it. And it's done, right now.

So let's see how it performed. The first thing I'm going to do is just make a prediction with the model: #Make a prediction with the simple model based on raw pixels. I'm going to take some images from the test data set and see what the classifier says they are. So I take image_test and look at the first three images: indices 0:3 select images 0, 1, and 2. I'm going to call .show() on them just to show you what those images are. The images, as I said, are a little bit small in this data set, but I'm going to make the view bigger so you can see them: the images here are a cat, a car, and another cat. So two cats and a car: cat, car, cat. And I'm going to leave the font big so we can actually see the results.

Now let's look at the labels for these images. This is the test data, elements 0 through 2, so the first three. I visually said they were cat, car, cat, but let's see what the actual labels are. Looking at the label column, you'll see it is cat, automobile, cat. So it says automobile, not car, but same thing.

Okay, so let's see what this raw_pixel_model predicts for this data set of three images, image_test[0:3]. The truth is cat, automobile, cat, and it predicts bird, cat, bird. So it gets all three wrong: it thinks the first cat is a bird, it thinks the car is a cat, and it thinks the other cat is also a bird. It got them all wrong, but this is only a quick qualitative check; it could be that it just happened to get the first few images wrong. So let's take a look at what the actual predictions are: let's evaluate the model on the test data and see what the actual classification accuracy is. So, #Evaluating raw pixel model on test data.
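Before we run the evaluation, here is a minimal sketch of the training call described above, assuming graphlab is installed and that the training images live in an SFrame directory named image_train_data/ (the path is an assumption; the label and image_array column names come from the lecture):

    import graphlab

    # Load the training images (the path is an assumption;
    # point this at your own copy of the data set)
    image_train = graphlab.SFrame('image_train_data/')

    # Train a classifier on the raw image pixels: the target is the
    # 'label' column, and the features are the raw pixels stored in
    # the 'image_array' column.
    raw_pixel_model = graphlab.logistic_classifier.create(image_train,
                                                          target='label',
                                                          features=['image_array'])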
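And a sketch of the inspection and prediction steps from above, assuming the test set is loaded the same way (again, the image_test_data/ path is an assumption):

    # Load the test images
    image_test = graphlab.SFrame('image_test_data/')

    # Look at the first three test images (indices 0, 1, and 2)
    image_test[0:3]['image'].show()

    # The true labels for those images: cat, automobile, cat
    print(image_test[0:3]['label'])

    # What the raw pixel model predicts for the same three images
    print(raw_pixel_model.predict(image_test[0:3]))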
So I'm going to take this raw_pixel_model and call evaluate on it. What evaluate does is go and compute some of those error metrics, and I'm going to use the test set. And here you see the accuracy is only 46.8%, which is terrible: we have four classes and they're balanced, so a random guess gets 25%, and here it gets 46%, which is not that exciting. There are more details there, like the confusion matrix that gets plotted; I'm not going to go through that, but at home you can explore it a little more. Just remember: 46%.
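As a sketch, the evaluation step looks like this; evaluate() returns a dictionary of metrics that includes the accuracy and the confusion matrix:

    # Evaluate the raw pixel model on the full test set
    results = raw_pixel_model.evaluate(image_test)

    # Overall classification accuracy (about 0.468 in this run)
    print(results['accuracy'])

    # The confusion matrix shows which classes get mixed up
    print(results['confusion_matrix'])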