Consider using this encoder-decoder model for machine translation.
This model is a “conditional language model” in the sense that the encoder portion (shown in green) is modeling the probability of the input sentence $$x$$.
In beam search, if you increase the beam width $$B$$, which of the following would you expect to be true? Check all that apply.
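To make the trade-offs concrete, here is a minimal beam-search sketch (not from the quiz; the two-step model and its probabilities are hypothetical). It keeps the top $$B$$ partial hypotheses at each step, so a larger $$B$$ costs more memory and computation but can only return an equal or higher-probability sequence under the same model.

```python
import heapq
import math

# Toy beam search. next_logprobs(prefix) returns {token: log-prob}
# for the next step given the partial sequence so far.
def beam_search(next_logprobs, steps, B):
    beams = [(0.0, ())]  # (cumulative log-prob, partial sequence)
    for _ in range(steps):
        candidates = [(score + lp, seq + (tok,))
                      for score, seq in beams
                      for tok, lp in next_logprobs(seq).items()]
        beams = heapq.nlargest(B, candidates)  # keep the top B hypotheses
    return max(beams)

# Hypothetical two-step model where the greedy first choice is a trap:
# "a" looks best at step 1, but "b" leads to the better full sequence.
def model(prefix):
    if prefix == ():
        return {"a": math.log(0.6), "b": math.log(0.4)}
    if prefix[0] == "a":
        return {"x": math.log(0.5), "y": math.log(0.5)}
    return {"x": math.log(0.9), "y": math.log(0.1)}
```

With $$B=1$$ the search commits to "a" and ends at probability 0.6 × 0.5 = 0.30; with $$B=2$$ it also keeps "b" and finds ("b", "x") with probability 0.4 × 0.9 = 0.36, at the cost of tracking twice as many hypotheses per step.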
In machine translation, if we carry out beam search without using sentence normalization, the algorithm will tend to output overly short translations.
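A small numerical sketch (hypothetical per-token probabilities, not from the quiz) shows why: without normalization, every extra token multiplies in a probability below 1, so summed log-probabilities penalize length; dividing by a power of the length removes that bias.

```python
import math

# Hypothetical per-token probabilities for a short vs. a longer candidate.
short = [0.4, 0.5]             # 2 tokens
longer = [0.5, 0.6, 0.5, 0.6]  # 4 tokens

def log_score(probs):
    # Unnormalized beam-search objective: sum of log-probabilities.
    return sum(math.log(p) for p in probs)

def normalized_score(probs, alpha=0.7):
    # Length-normalized objective: divide by (length ** alpha).
    return log_score(probs) / (len(probs) ** alpha)
```

Here the unnormalized score prefers the shorter candidate even though its tokens are individually less likely, while the normalized score prefers the longer one.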
Suppose you are building a speech recognition system, which uses an RNN model to map from audio clip $$x$$ to a text transcript $$y$$. Your algorithm uses beam search to try to find the value of $$y$$ that maximizes $$P(y \mid x)$$.
On a dev set example, given an input audio clip, your algorithm outputs the transcript $$\hat{y}=$$ “I’m building an A Eye system in Silly con Valley.”, whereas a human gives a much superior transcript $$y^* =$$ “I’m building an AI system in Silicon Valley.”
According to your model,
$$P(\hat{y} \mid x) = 1.09 \times 10^{-7}$$
$$P(y^* \mid x) = 7.21 \times 10^{-8}$$
Would you expect increasing the beam width $$B$$ to help correct this example?
Continuing the example from Q4, suppose you work on your algorithm for a few more weeks, and now find that for the vast majority of examples on which your algorithm makes a mistake, $$P(y^* \mid x) > P(\hat{y} \mid x)$$. This suggests you should focus your attention on improving the search algorithm.
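The error-attribution rule used in these two questions can be sketched as a small helper (a sketch; `p_model` is a hypothetical function returning the model's $$P(y \mid x)$$):

```python
# Sketch of beam-search error analysis: decide whether a mistake is the
# search's fault or the model's fault by comparing model probabilities.
def attribute_error(p_model, y_hat, y_star, x):
    if p_model(y_star, x) > p_model(y_hat, x):
        # Beam search returned y_hat even though the model itself
        # prefers y*: the search failed to find the best y.
        return "search"
    # The model scores its (wrong) output at least as high as the
    # human transcript: the RNN model is at fault.
    return "model"
```

On the Q4 example, $$P(\hat{y} \mid x) = 1.09 \times 10^{-7} > P(y^* \mid x) = 7.21 \times 10^{-8}$$, so the rule attributes the mistake to the model, and increasing $$B$$ would not help.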
Consider the attention model for machine translation.
Further, here is the formula for $$\alpha^{<t,t'>}$$:

$$\alpha^{<t,t'>} = \frac{\exp(e^{<t,t'>})}{\sum_{t'=1}^{T_x} \exp(e^{<t,t'>})}$$
Which of the following statements about $$\alpha^{<t,t'>}$$ are true? Check all that apply.
The network learns where to “pay attention” by learning the values $$e^{<t,t'>}$$, which are computed using a small neural network:
We can't replace $$s^{<t-1>}$$ with $$s^{<t>}$$ as an input to this neural network. This is because $$s^{<t>}$$ depends on $$\alpha^{<t,t'>}$$, which in turn depends on $$e^{<t,t'>}$$; so at the time we need to compute $$e^{<t,t'>}$$, we haven't computed $$s^{<t>}$$ yet.
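The attention weights are just a softmax of the energies over the encoder positions, which is why they are non-negative and sum to 1 over $$t'$$. A minimal numeric sketch (the energy values are hypothetical):

```python
import math

# Hypothetical attention energies e^<t,t'> for one decoder step t
# over T_x = 4 encoder positions.
e = [2.0, 1.0, 0.5, -1.0]

# alpha^<t,t'> = exp(e^<t,t'>) / sum_{t'} exp(e^<t,t'>)  (softmax over t')
exp_e = [math.exp(v) for v in e]
alpha = [v / sum(exp_e) for v in exp_e]
```

The largest energy gets the largest weight, and the weights form a probability distribution over the input positions.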
Compared to the encoder-decoder model shown in Question 1 of this quiz (which does not use an attention mechanism), we expect the attention model to have the greatest advantage when:
Under the CTC model, identical repeated characters not separated by the “blank” character (_) are collapsed. Under the CTC model, what does the following string collapse to?
__c_oo_o_kk___b_ooooo__oo__kkk
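The CTC collapsing rule described above (merge identical repeated characters, then drop blanks) can be sketched in a few lines; applying it to the string in the question is a good sanity check.

```python
# Sketch of CTC output collapsing: repeated identical characters not
# separated by the blank symbol are merged, then blanks are removed.
def ctc_collapse(s, blank="_"):
    out = []
    prev = None
    for ch in s:
        # Keep a character only when it differs from its predecessor
        # (repeats without an intervening blank collapse to one copy).
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

ctc_collapse("__c_oo_o_kk___b_ooooo__oo__kkk")  # → "cookbook"
```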
In trigger word detection, $$x^{<t>}$$ is: