From 2a56eb08d9df013da6b16c41e24b3ae9e9fc3a55 Mon Sep 17 00:00:00 2001
From: Aaron Schumacher
Date: Tue, 5 Jul 2016 14:25:47 -0400
Subject: [PATCH] typo: "Lets" -> "Let's"

---
 _posts/2015-05-21-rnn-effectiveness.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2015-05-21-rnn-effectiveness.markdown b/_posts/2015-05-21-rnn-effectiveness.markdown
index 7b4190654..a81436ec4 100644
--- a/_posts/2015-05-21-rnn-effectiveness.markdown
+++ b/_posts/2015-05-21-rnn-effectiveness.markdown
@@ -93,7 +93,7 @@ A more technical explanation is that we use the standard Softmax classifier (als
 
 Notice also that the first time the character "l" is input, the target is "l", but the second time the target is "o". The RNN therefore cannot rely on the input alone and must use its recurrent connection to keep track of the context to achieve this task.
 
-At **test time**, we feed a character into the RNN and get a distribution over what characters are likely to come next. We sample from this distribution, and feed it right back in to get the next letter. Repeat this process and you're sampling text! Lets now train an RNN on different datasets and see what happens.
+At **test time**, we feed a character into the RNN and get a distribution over what characters are likely to come next. We sample from this distribution, and feed it right back in to get the next letter. Repeat this process and you're sampling text! Let's now train an RNN on different datasets and see what happens.
 
 To further clarify, for educational purposes I also wrote a [minimal character-level RNN language model in Python/numpy](https://gist.github.com/karpathy/d4dee566867f8291f086). It is only about 100 lines long and hopefully it gives a concise, concrete and useful summary of the above if you're better at reading code than text. We'll now dive into example results, produced with the much more efficient Lua/Torch codebase.