Autoencoders are a special kind of neural network used in unsupervised machine learning. These networks try to learn an efficient representation of a dataset; these representations are called codings, and they differ from the original input in that they usually have a much smaller dimensionality. Autoencoders are used in all kinds of tasks, such as music, text, and image generation. They can also be used for dimensionality reduction or collaborative filtering in recommendation systems. Autoencoders are essentially very powerful feature detectors. One of the most remarkable differences with this type of network is that the inputs and the outputs are both features: the network is trained to reconstruct its own input, so there are no separate labels. For this reason, it is very common to see symmetrical autoencoders with tied weights.
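To make this concrete, here is a minimal sketch of a plain autoencoder. Keras is my assumed framework choice, and the layer sizes and the use of MNIST are illustrative assumptions, not something prescribed by the technique itself:

```python
# A minimal plain autoencoder sketch (Keras is an assumed framework choice;
# layer sizes and MNIST as the dataset are illustrative).
import tensorflow as tf
from tensorflow import keras

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

coding_dim = 32  # the codings are much smaller than the 784-dimensional input

autoencoder = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=[784]),
    keras.layers.Dense(coding_dim, activation="relu"),  # encoder -> codings
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),      # decoder -> reconstruction
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The inputs and the targets are the same features: no labels involved.
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```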
These networks can sometimes be tricky to train and are very prone to overfitting; as a result, many different variations have been developed.
Denoising
One of the simplest alternatives is the denoising autoencoder. This kind of network is a plain autoencoder, except that some kind of noise is added to the input data during training. The network then has a harder task: it must filter out the noise and reconstruct the clean data, which pushes it to learn more robust features, although it does introduce some randomness into training.
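Continuing the sketch above, the only change a denoising autoencoder needs is to corrupt the inputs while keeping the clean data as the targets. The Gaussian noise and its 0.3 level here are my illustrative assumptions:

```python
import numpy as np

# Corrupt the inputs with Gaussian noise (0.3 is an illustrative noise level).
x_noisy = np.clip(x_train + 0.3 * np.random.randn(*x_train.shape), 0.0, 1.0)

# The targets stay clean, so the network must learn to filter the noise out.
autoencoder.fit(x_noisy.astype("float32"), x_train, epochs=10, batch_size=256)
```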
Variational
This category of neural network is similar to the denoising autoencoder, but variational autoencoders are probabilistic: they introduce randomness even after training, due to their internal structure. Instead of producing a coding directly, the encoder outputs the parameters of a Gaussian distribution (a mean and a standard deviation), and the actual coding is sampled from that distribution. This sampling step is what enables the autoencoder to generate new dataset samples; a network with this ability is called a generative neural network.
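The sampling step at the heart of a variational autoencoder can be sketched like this, reusing the imports from the first sketch. The sizes are illustrative, and the KL-divergence term that a full variational autoencoder adds to its loss is omitted for brevity:

```python
# The encoder outputs a mean and a (log) variance; the coding is then sampled
# from the Gaussian they define (the "reparameterization trick").
class Sampling(keras.layers.Layer):
    def call(self, inputs):
        mean, log_var = inputs
        epsilon = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * epsilon

inputs = keras.Input(shape=[784])
hidden = keras.layers.Dense(128, activation="relu")(inputs)
codings_mean = keras.layers.Dense(32)(hidden)     # mu
codings_log_var = keras.layers.Dense(32)(hidden)  # log(sigma^2)
codings = Sampling()([codings_mean, codings_log_var])
```

After training, feeding codings drawn from a standard Gaussian into the decoder is what produces new samples.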
Generative adversarial networks
Generative adversarial networks, or GANs for short, have been getting a lot of attention since their introduction in 2014. They are famously unstable to train but have been shown to generate very convincing data. This variation works by creating two networks: a generator and a discriminator. The generator tries to produce new data, and the discriminator works like a classification neural network that tries to distinguish images from the original dataset from fake images produced by the generator.
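Here is a compact sketch of that two-network setup, in the same assumed Keras style. The sizes and the single, heavily simplified training step are illustrative, not the recipe from the original paper:

```python
import numpy as np

coding_size = 32

generator = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=[coding_size]),
    keras.layers.Dense(784, activation="sigmoid"),
])
discriminator = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=[784]),
    keras.layers.Dense(1, activation="sigmoid"),  # real (1) vs fake (0)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Freeze the discriminator inside the combined model so that training the
# GAN end to end only updates the generator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

batch_size = 32
noise = np.random.normal(size=[batch_size, coding_size]).astype("float32")
x_batch = x_train[:batch_size]  # real images from the first sketch

# Phase 1: train the discriminator on real vs generated images.
X = np.concatenate([generator.predict(noise), x_batch])
y = np.concatenate([np.zeros(batch_size), np.ones(batch_size)])
discriminator.train_on_batch(X, y)

# Phase 2: train the generator to make the frozen discriminator say "real".
gan.train_on_batch(noise, np.ones(batch_size))
```

In practice the two phases are alternated for many batches, and it is this tug-of-war between the networks that makes GAN training famously unstable.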
Other
There are a lot of autoencoders out there, and there is surely one that fits your needs. Other autoencoder examples include:
- Sparse AE
- Contractive AE
- Stacked convolutional AE
- Winner-take-all AE
- Generative stochastic network
As a personal favorite, I really like this NVIDIA generative autoencoder.