You've now learned so much about deep learning and sequence models that we can actually describe a trigger word detection system quite simply, on just one slide, as you'll see in this video. With the rise of speech recognition, there have been more and more devices you can wake up with your voice, and those are sometimes called trigger word detection systems. So let's see how you can build one. Examples of trigger word systems include Amazon Echo, which is woken up with the word "Alexa"; the Baidu DuerOS powered devices, woken up with their own wake phrase; Apple Siri, woken up with "Hey Siri"; and Google Home, woken up with "Okay Google". So thanks to trigger word detection, if you have, say, an Amazon Echo in your living room, you can walk through the living room and just say, "Alexa, what time is it?", and have it wake up, or be triggered, by the word "Alexa" and answer your voice query. So if you can build a trigger word detection system, maybe you can make your computer do something by telling it, "Computer, activate." One of my friends also works on turning a particular lamp on and off using a trigger word, kind of as a fun project. But what I want to show you is how you can build a trigger word detection system.

Now, the literature on trigger word detection is still evolving, so there actually isn't a single, universally agreed-on algorithm yet, and there isn't wide consensus on what the best algorithm is. So I'm just going to show you one example of an algorithm you can use. You've seen RNNs like this, and what we really do is take an audio clip, maybe compute spectrogram features, and that generates audio features x<1>, x<2>, x<3>, ... that you pass through an RNN. So all that remains to be done is to define the target labels y. If this point in the audio clip is when someone just finished saying the trigger word, such as "Alexa", or saying "Hey Siri" or "Okay Google", then in the training set you can set the target labels to be zero for everything before that point, and right after that, set the target label to one. And then if a little bit later on the trigger word was said again, at this point, then you can again set the target label to be one right after that.

Now, this type of labeling scheme for an RNN could work; actually, it will work reasonably well. One slight disadvantage, though, is that it creates a very imbalanced training set, with a lot more zeros than ones. So one other thing you could do, and this is a little bit of a hack, but it makes the model a little bit easier to train, is instead of setting only a single time step to output one, you can actually make it output a few ones, for several time steps or for a fixed period of time, before reverting back to zero, and that somewhat evens out the ratio of ones to zeros. So if this is when, in the audio clip, the trigger word was said, then right after that you can set the target label to one for a short stretch, and if this is when the trigger word was said again, then right after that is again when you want the RNN to output one (the sketch below shows one way to set this up). You'll get to play with this more in the programming exercise. But I think you should feel quite proud of yourself: we've learned enough about deep learning that it just takes one picture, one slide, to describe something as complicated as trigger word detection, and based on this I hope you'll be able to implement something that works and allows you to detect trigger words; you'll see more of this in the programming exercise.
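To make the labeling scheme concrete, here is a minimal sketch in Python with Keras. The dimensions, layer sizes, and the insert_ones helper are illustrative assumptions rather than the exact setup of the programming exercise; the point is simply a strided 1-D convolution over the spectrogram followed by a GRU that emits a sigmoid prediction at every output time step, plus a label vector that is zero everywhere except for a short run of ones right after each time the trigger word ends.

```python
import numpy as np
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     Activation, Dropout, GRU,
                                     TimeDistributed, Dense)
from tensorflow.keras.models import Model

# Illustrative dimensions (assumptions): the real values depend on how the
# spectrogram is computed and how long the audio clips are.
Tx, n_freq = 5511, 101   # input time steps and frequency bins per step
Ty = 1375                # output time steps produced by the network

def insert_ones(y, end_step, num_ones=50):
    """Set the `num_ones` labels right after the step where the trigger word
    ends to 1 -- the 'hack' that evens out the ratio of ones to zeros."""
    y[0, end_step + 1 : end_step + 1 + num_ones, 0] = 1
    return y

def build_model(input_shape):
    """Spectrogram in, one sigmoid prediction per output time step out."""
    X_input = Input(shape=input_shape)
    # A strided 1-D convolution shortens the sequence from Tx to Ty steps.
    X = Conv1D(196, kernel_size=15, strides=4)(X_input)
    X = BatchNormalization()(X)
    X = Activation("relu")(X)
    X = GRU(128, return_sequences=True)(X)
    X = Dropout(0.5)(X)
    # y_hat<t> = P(the trigger word just finished at output step t)
    outputs = TimeDistributed(Dense(1, activation="sigmoid"))(X)
    return Model(inputs=X_input, outputs=outputs)

model = build_model(input_shape=(Tx, n_freq))
model.compile(optimizer="adam", loss="binary_crossentropy")

# Labeling one training clip: mostly zeros, with a short run of ones right
# after the trigger word ends (here, assumed to end at output step 700).
y = np.zeros((1, Ty, 1))
y = insert_ones(y, end_step=700)
```

At detection time, one reasonable approach is to compute the spectrogram of a new clip, run it through the model, and treat the trigger as fired whenever the output stays above some threshold for a few consecutive steps.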
So that's it for trigger words, and I hope you feel quite proud of yourself for how much you've learned about deep learning, that you can now describe trigger word detection in just one slide, in just a few minutes, and that you'll hopefully be able to implement it and get it to work, maybe even make it do something fun in your house, like turning something on or off, or having your computer do something, when you or someone else says the trigger word. This is the last technical video of this course, and to wrap up: in this course on sequence models, you learned about RNNs, including both GRUs and LSTMs; then in the second week, you learned a lot about word embeddings and how they learn representations of words; and in this week, you learned about the attention model, as well as how to use it to process audio data. I hope you have fun implementing all of these ideas in this week's programming exercise. Let's go on to the last video.