Question 1

If you have 10,000,000 examples, how would you split the train/dev/test set?
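With datasets this large, a common heuristic is to keep the dev and test sets just big enough to evaluate the model reliably and give the rest to training (e.g. 98% / 1% / 1%). A minimal sketch of such a split, assuming a hypothetical helper `split_dataset` and the 98/1/1 ratio:

```python
import numpy as np

def split_dataset(n_examples, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle example indices and carve out small dev/test sets.

    dev_frac/test_frac of 0.01 each give a 98/1/1 split, a common
    choice for very large datasets (an assumption, not the only option).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)          # random shuffle of all indices
    n_dev = int(n_examples * dev_frac)
    n_test = int(n_examples * test_frac)
    dev = idx[:n_dev]
    test = idx[n_dev:n_dev + n_test]
    train = idx[n_dev + n_test:]               # everything else trains
    return train, dev, test

train, dev, test = split_dataset(10_000_000)
```

With 10,000,000 examples this yields 9,800,000 training, 100,000 dev, and 100,000 test examples; 100,000 examples is typically plenty to measure differences between models.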


Question 2

The dev and test sets should:


Question 3

If your Neural Network model seems to have high bias, which of the following would be promising things to try? (Check all that apply.)


Question 4

You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)


Question 5

What is weight decay?
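Weight decay refers to the way L2 regularization shrinks the weights a little on every gradient descent step: the penalty $\frac{\lambda}{2m}\|W\|^2$ adds a $\frac{\lambda}{m}W$ term to the gradient, so the update multiplies $W$ by $(1 - \alpha\frac{\lambda}{m})$ before applying the usual gradient step. A minimal sketch (the helper name `l2_cost_and_update` is illustrative, not from any library):

```python
import numpy as np

def l2_cost_and_update(W, grad_W, lambd, m, learning_rate):
    """One L2-regularized gradient step on a weight matrix W.

    penalty: the extra term added to the cost, (lambd / (2m)) * ||W||^2.
    W_new:   W - lr * (grad_W + (lambd / m) * W), which is equivalent to
             W * (1 - lr * lambd / m) - lr * grad_W -- the weights "decay"
             by a constant factor each step, hence the name.
    """
    penalty = (lambd / (2 * m)) * np.sum(W ** 2)
    W_new = W - learning_rate * (grad_W + (lambd / m) * W)
    return penalty, W_new
```

Note that the decay factor $(1 - \alpha\frac{\lambda}{m})$ is slightly less than 1, so absent any gradient signal the weights shrink geometrically toward zero.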


Question 6

What happens when you increase the regularization hyperparameter lambda?


Question 7

With the inverted dropout technique, at test time:
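The key property of inverted dropout is that the division by `keep_prob` happens at training time, which keeps the expected value of the activations unchanged; consequently the test-time forward pass needs no dropout and no extra scaling. A minimal sketch of one layer's activations (the function name and signature are assumptions for illustration):

```python
import numpy as np

def forward_dropout(a, keep_prob, training, rng):
    """Inverted dropout on an activation array a.

    Training: zero out units with probability (1 - keep_prob), then divide
    by keep_prob so E[output] == a.
    Test time: return a unchanged -- no masking, no scaling.
    """
    if not training:
        return a
    mask = rng.random(a.shape) < keep_prob   # keep each unit w.p. keep_prob
    return a * mask / keep_prob              # rescale surviving units
```

Because the training-time rescaling already compensates for the dropped units, the network computes the same expected activations at test time with no dropout-specific code at all.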


Question 8

Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply)


Question 9

Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)


Question 10

Why do we normalize the inputs $x$?
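Normalizing the inputs to zero mean and unit variance makes the cost surface better conditioned, so gradient descent converges faster. One standard recipe (the helper name `normalize` is illustrative) computes the mean and standard deviation on the training set only and applies the same statistics to the test set:

```python
import numpy as np

def normalize(X_train, X_test):
    """Zero-mean, unit-variance normalization using training-set statistics.

    The same mu and sigma must be applied to the test set, so that train
    and test data pass through an identical transformation.
    """
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-8   # small epsilon avoids division by zero
    return (X_train - mu) / sigma, (X_test - mu) / sigma
```

Reusing the training-set `mu` and `sigma` on the test set matters: recomputing statistics on the test set would put the two sets on different scales.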