Truly understanding Recurrent Neural Networks was hard for me. Sure, I read Karpathy's oft-cited RNN article and looked at diagrams like:
But that didn't resonate in my brain. How do numbers "remember"? What details are lurking in the simplicity of that diagram?
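The "remembering" happens in the hidden state: each step mixes the current input with the previous hidden state, so past inputs keep influencing later outputs. Here is a minimal sketch of that vanilla RNN update, using NumPy with made-up toy weights (the shapes, names, and values are illustrative assumptions, not the spreadsheet's actual numbers):

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    # The new hidden state blends the current input (x @ W_xh) with the
    # previous hidden state (h @ W_hh), squashed through tanh.
    # Carrying h forward step to step is the "memory".
    return np.tanh(x @ W_xh + h @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 4))  # input-to-hidden weights (toy values)
W_hh = rng.normal(size=(4, 4))  # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

h = np.zeros(4)                      # memory starts empty
for x in rng.normal(size=(5, 3)):    # five time steps of 3-dim input
    h = rnn_step(x, h, W_xh, W_hh, b)

print(h.shape)  # the final hidden state summarizes all five inputs
```

The final `h` depends on every input in the sequence, which is exactly what the diagram's loop arrow is hiding.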
To understand them better, I built one in a spreadsheet. It was not as straightforward as my previous attempts to build neural networks this way, mostly because I had to discover novel ways to visualize what was going on:
Those visualizations really helped RNNs click for me, and I was then able to implement one and figure out the weights to make it work. I walk through that entire process in this YouTube video.
More of my work: