Hi. I hope you enjoyed digging into the details of TensorFlow and experimenting with the MNIST dataset, but, as you might have noticed, it can be tedious: it requires duplicate work, duplicate lines of code. Of course, there are frameworks that make your life easier and let you implement common neural networks with much less effort. We'll be using one of them, called Keras. The choice here is a matter of taste and of the particular problem in front of you; we just picked one.

Now, let us begin. We will be using the MNIST dataset of handwritten digits, loaded with the loader that comes with Keras. We also one-hot encode the class labels, turning each label into a vector of zeros and ones, and with Matplotlib we plot an example image; here is a five. Under the hood, Keras uses TensorFlow, which is why we import it as well.

Now we'll build a simple multilayer perceptron. First, we create a container which will store our layers. We define an input layer which accepts images of 28 by 28 pixels. Then we flatten them, transforming each two-dimensional matrix into a one-dimensional vector. Then we add two dense layers; if you remember the beginning of this week, a dense layer is just a linear model. Then we add an output layer with one neuron for each class, and apply the softmax function to transform the outputs into probabilities. The last touch is compiling the model: we add an optimization algorithm and define the loss function. categorical_crossentropy is just the same cross-entropy you are used to, but applied to one-hot-encoded vectors, and we define accuracy as the metric.

Good. Now I have a question for you: how many parameters will such a network have? Let's answer it. Keras has a nice summary facility, so here is our network: we begin with the input, enter into Flatten, go through two linear layers, and at the end we add the softmax. The basic fit interface is very simple; it resembles the one from scikit-learn. We train for just five epochs, which should be rather fast even on a CPU, and the interface for prediction is just as simple: here we predict class probabilities for the first few elements. Models can be saved and loaded with model.save and load_model.

Now we can compute test accuracy. That's not very good; this is what we get from that model. What do you think is the problem? Well, of course, the problem is that we stacked two linear layers together, and as you already know, two linear layers stacked together collapse into a single linear model, which is by no means a good learner. So if we change the activations from linear to, say, ReLU, we should obtain a much better result. And indeed: a sudden jump in quality. Good. Now, one of your assignments will be to tune this network to improve its quality, so I invite you to add layers and to play with activations.

Before we get to actual hacking, there is one more thing. Keras is integrated with TensorBoard; that was, in part, a reason for choosing Keras, and the integration is very easy. You just add a callback option to the fit function. If we run the training, we can open TensorBoard and see the line graphs: train loss and train accuracy, and how validation accuracy and validation loss change. If you want to study how Keras works in more detail, the graph visualization can help you; here is the graph it creates. As you can see, it is a bit non-human-friendly. The sketches below show how all these steps look in code.
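To make the walkthrough concrete, here is a minimal sketch of the loading, model-definition, and compilation steps, assuming the standalone Keras API with a TensorFlow backend. The hidden-layer sizes (256 and 128) and the adam optimizer are illustrative assumptions; only the overall structure comes from the lecture.

```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.utils import to_categorical

# Load MNIST with the loader that ships with Keras.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

# One-hot encode the labels: class 5 becomes [0,0,0,0,0,1,0,0,0,0].
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A container that stores our layers in order.
model = Sequential([
    Flatten(input_shape=(28, 28)),      # 2-D image -> 784-long vector
    Dense(256, activation='linear'),    # first dense (linear) layer
    Dense(128, activation='linear'),    # second dense (linear) layer
    Dense(10, activation='softmax'),    # one neuron per class
])

# Compile: pick an optimizer, a loss for one-hot targets, and a metric.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Answer to the parameter-count question (for these assumed sizes):
#   784*256 + 256 = 200,960
#   256*128 + 128 =  32,896
#   128*10  +  10 =   1,290   -> 235,146 parameters in total
model.summary()
```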
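Continuing the sketch, training, prediction, saving, and evaluation look roughly like this; the filename and validation split are also assumptions. The second model shows the one-line change from linear to ReLU activations discussed above.

```python
from keras.models import load_model

# Training uses a scikit-learn-like interface: just call fit.
model.fit(x_train, y_train, epochs=5, validation_split=0.1)

# Predict class probabilities for the first few test images.
probabilities = model.predict(x_test[:5])

# Models can be saved to disk and loaded back.
model.save('mnist_mlp.h5')              # illustrative filename
restored = load_model('mnist_mlp.h5')

# Compute test loss and accuracy.
test_loss, test_acc = model.evaluate(x_test, y_test)

# The fix: two stacked linear layers collapse into one linear map,
# so we put a nonlinearity (ReLU) between the dense layers.
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(256, activation='relu'),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```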
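The TensorBoard integration mentioned above is a single callback passed to fit; the log directory is an assumed path.

```python
from keras.callbacks import TensorBoard

# Write training curves (train/validation loss and accuracy) to disk
# in a format that TensorBoard can read.
tensorboard = TensorBoard(log_dir='./logs')   # path is illustrative

model.fit(x_train, y_train,
          epochs=5,
          validation_split=0.1,
          callbacks=[tensorboard])
# Then launch `tensorboard --logdir ./logs` and open it in a browser.
```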
Going back to summarize: Keras is a high-level framework which makes the construction of neural networks easy. As you can see, we did almost no unnecessary operations, so each line of code adds something substantial to the model. And of course, as you learn more about deep learning, you will also be learning more about how to implement it in Keras.

Now, for your assignment. Your assignment will be to improve the quality of this model, and the suggestions here are fairly obvious. There are several ways. The first is to add more layers and more parameters. Of course, this increases the computational cost and creates more room for overfitting, but otherwise it is the classical way of improving a neural network's performance. Another principle you should consider is not running the whole thing every time: when you see that the quality has stopped improving, you should probably stop training. You should also experiment with different nonlinearities and probably with different optimization algorithms; some of them converge much faster than others. Then you could add regularization to your loss function, and of course Keras provides such functionality. The last thing, probably not very critical for the digits, is that you can always get more data for free by using Keras tools that zoom, rotate, and shift your images; but keep in mind that these transformations should still make sense, since a digit rotated too far may stop looking like itself. To get you started, two short sketches follow at the end of this transcript.

This is all for this week's video materials. I hope you'll find your assignments and exercises enjoyable. Thank you.
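As a starting point for the stopping and regularization suggestions, here is one possible sketch using Keras' built-in EarlyStopping callback and an L2 weight penalty; the patience value and penalty strength are assumptions, not values from the lecture.

```python
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.callbacks import EarlyStopping
from keras.regularizers import l2

# Stop once validation loss stops improving for a few epochs.
early_stopping = EarlyStopping(monitor='val_loss', patience=3)

# An L2 penalty on the weights adds a regularization term to the loss.
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(256, activation='relu', kernel_regularizer=l2(1e-4)),
    Dense(128, activation='relu', kernel_regularizer=l2(1e-4)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Set a generous epoch cap and let the callback decide when to stop.
model.fit(x_train, y_train,
          epochs=50,
          validation_split=0.1,
          callbacks=[early_stopping])
```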
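And for the "more data for free" suggestion, Keras ships an ImageDataGenerator that applies random zooms, rotations, and shifts on the fly; the transformation ranges below are assumptions. Note that the generator expects a channel axis, so the images are reshaped and the model's input shape adjusted to match.

```python
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

# ImageDataGenerator expects (samples, height, width, channels).
x_train_4d = x_train.reshape(-1, 28, 28, 1)

model = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(256, activation='relu'),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Small, label-preserving transformations; keep the ranges modest so
# the transformed digits still make sense as their original class.
datagen = ImageDataGenerator(rotation_range=10,
                             zoom_range=0.1,
                             width_shift_range=0.1,
                             height_shift_range=0.1)

# Train on batches of freshly transformed images.
# (In newer Keras versions, model.fit accepts the generator directly.)
model.fit_generator(datagen.flow(x_train_4d, y_train, batch_size=32),
                    steps_per_epoch=len(x_train_4d) // 32,
                    epochs=5)
```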