Let's return, for a moment, to our hidden agenda of trying to understand something about intelligence from all the stuff that we've learned so far. We figured out that classes can be learned from experience. Features can also be learned from experience. For example, genres (that is, classes) as well as roles (that is, features) can be learned merely from the experience of people buying books. What is the minimum capability needed to learn features and classes directly from data? This is a rather carefully thought out question, so let me think about it for a minute. First, one needs some low level of perception: in the case of humans, the ability to perceive pixels and frequencies; in the case of our systems, merely the ability to identify a person by a person ID and a book by a book ID, and that's about it. Second, one needs the ability to subitize, which is to say, to distinguish at a glance between one and two things. It turns out that very young babies are actually able to distinguish between one person and two people, or one object and two objects, and they get surprised when suddenly one object disappears from the scene. So this is essentially something innate. Similarly, there is the ability to break up temporal experience into episodes: something experienced in the past five minutes, and then another experience in the next ten minutes, because suddenly the scene has changed. Breaking up experience into episodes is another subitizing-like ability, this time in time rather than space, which babies acquire at a slightly later age. Given these two things, and our hidden (latent) model techniques, one can in principle learn classes and features together simply from the fact that they co-occur in experiences. Theoretically it works, but in practice, lots of research is currently underway to enable machines to learn both the classes and the features in an unsupervised manner.
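The co-occurrence idea above can be sketched in code. This is a minimal illustration, not any algorithm from the lecture: the person-by-book purchase matrix is made up, and the method is a simple alternating assignment, where each person is assigned to the class whose book-genre profile best matches their purchases, and each book is assigned to the genre whose buyer-class profile best matches its readers. Real co-clustering methods are considerably more sophisticated.

```python
import numpy as np

# Hypothetical purchase matrix: rows are people (known only by ID),
# columns are books (known only by ID); 1 means "bought".
# Two latent genres are baked in for illustration.
X = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1],
])

def cocluster(X, k=2, iters=10):
    """Learn person classes (rows) and book genres (cols) together,
    using only the fact that they co-occur in purchases."""
    # Deterministic start: spread books across k tentative genres.
    cols = np.arange(X.shape[1]) % k
    rows = np.zeros(X.shape[0], dtype=int)
    for _ in range(iters):
        # Each person goes to the class matching their genre profile:
        # count purchases per current genre, take the largest.
        R = np.stack([X[:, cols == c].sum(axis=1) for c in range(k)], axis=1)
        rows = R.argmax(axis=1)
        # Each book goes to the genre matching its buyer profile:
        # count buyers per current class, take the largest.
        C = np.stack([X[rows == r, :].sum(axis=0) for r in range(k)], axis=1)
        cols = C.argmax(axis=1)
    return rows, cols

people, books = cocluster(X)
print(people, books)  # → [0 0 0 1 1 1] [0 0 0 1 1 1]
```

On this toy matrix the alternation recovers the two person classes and the two book genres simultaneously, with neither side labelled in advance — exactly the sense in which classes and features can be learned together from co-occurrence.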
So you're clustering the classes and clustering the features side by side, using the fact that classes and features co-occur in different experiences or objects, to learn both together. This is really at the frontier of research today, both in web intelligence and, to a certain extent, in understanding human intelligence. So when you come across articles that talk about bottom-up learning or grounded techniques, essentially they're talking about things like this, where one is trying to learn a hidden or latent model directly, without supervision. Of course, one might well ask to what extent we have actually learned anything in the true or pure sense of the word. In fact, in a rather celebrated 1980 article, the philosopher John Searle argued against any suggestion that mechanical reasoning using all manner of learned facts or rules could be considered intelligent, and the argument he used is reminiscent of the Turing test. Searle imagined a room in which a person, armed with rules, facts, and reasoning techniques, could translate from Chinese to English using purely mechanical calculations. The question Searle asked was: does this translator know Chinese, in the sense that a native speaker of Chinese knows Chinese? He argued vehemently that this person could not in any sense be construed to know Chinese, and that therefore the prospect of machine intelligence divorced from any direct perception of, or ability in, say, the language Chinese was actually a fallacy. Interestingly, exactly such techniques as we have discussed here, as well as new ones that we'll talk about next time, like hidden Markov models, are actually used to parse and translate Chinese into English fairly well today in Google Translate.
The question posed by Searle has become popularly known as the Chinese room argument, and it is certainly worth pondering when one asks whether web intelligence has taught us anything about our own abilities.