[MUSIC] In this module we addressed a very fundamental concept: missing data. Missing data can affect us both at training time and at prediction time. For both cases, we explored fundamental ideas that are useful for a wide range of algorithms, not just decision trees: the idea of simply skipping data points, which has its benefits and pitfalls; the idea of trying to impute, or guess, what those missing values are; and the idea of modifying the learning algorithm itself, in particular decision trees, in order to better deal with missing data. Now, in practice, you will often see missing data, and you should always be on the lookout for it. Sometimes data comes in a form where values are not explicitly marked as missing. For example, people sometimes put in a zero when the value is unknown, and you might think it really is zero, but it's actually unknown. So you should always be on the lookout for missing data, and you should always handle it carefully, because it can really impact the answers your algorithm gives. Today we've seen some basic approaches for dealing with it. Of course, there are more advanced ones that you can get into, but this is a fundamental issue you should always be watching for. And let me close, again, by thanking my colleague, Krishna Sridhar, who has really been instrumental in the creation of the slides and in helping with the overall vision of this module.
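To make the first two ideas concrete, here is a minimal sketch, not from the lecture itself, of skipping versus imputing missing values with pandas, along with the zero-as-unknown pitfall mentioned above. The column names, the toy data, and the choice of median/mode imputation are illustrative assumptions; the third idea, modifying the tree-learning algorithm, happens inside the learner and is not shown here.

```python
# Sketch only: skipping rows with missing values vs. imputing them.
# Column names, toy data, and the zero-means-unknown convention are made up.
import numpy as np
import pandas as pd

# Toy loan data; None marks explicitly missing values,
# and 0 in 'income' is a sentinel that actually means "unknown".
df = pd.DataFrame({
    "income": [50_000, 0, 72_000, None, 64_000],
    "credit": ["good", "bad", None, "good", "good"],
    "safe_loan": [1, -1, 1, -1, 1],
})

# Watch for values that are missing in disguise:
# here we decide that income == 0 really means "unknown".
df["income"] = df["income"].replace(0, np.nan)

# Idea 1: skip data points with any missing value (simple, but throws data away).
skipped = df.dropna()

# Idea 2: impute missing values, e.g. median for numeric, mode for categorical.
imputed = df.copy()
imputed["income"] = imputed["income"].fillna(imputed["income"].median())
imputed["credit"] = imputed["credit"].fillna(imputed["credit"].mode()[0])

print(skipped)
print(imputed)
```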