Natural Language Processing, commonly known as NLP among machine learning practitioners, is a rapidly evolving field. With the advent of AI assistants like Siri, Cortana, Alexa, and Google Assistant, the use of NLP has increased manyfold. People are trying to build models that can better understand human languages like English, Spanish, Mandarin, Hindi, and Japanese, which are formally known as Natural Languages.
The most common uses of Natural Language Processing in our daily life are Search Engines, Machine translation, Chatbots, and Home assistants.
Let us define the two terms, Natural Language and Natural Language Processing, more formally.
Natural language is a language that has developed naturally in humans.
Natural Language Processing (NLP) is the ability of a computer program to understand human languages as they are spoken. The ultimate objective of NLP is to read, decipher, understand, and make sense of human languages in a manner that is valuable.
NLP has two major components:
1. Natural Language Understanding
2. Natural Language Generation
Natural Language Understanding (NLU) means that a machine learning or deep learning model is able to understand the language spoken by humans. In other words, the system can comprehend the sentences we speak or write. NLU is used to solve many real-world problems such as question answering, query resolution, sentiment analysis, similarity detection in texts, and chatbots. Only if a system understands natural language can it respond to our queries.
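As a toy illustration of one of these NLU tasks, similarity detection, here is a minimal bag-of-words cosine similarity in plain Python. This is only a sketch: real systems use learned embeddings rather than raw word counts, and the sentences are made up for illustration.

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Similarity between two texts via bag-of-words cosine similarity."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the words the two texts share
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("the cat sat on the mat", "the cat sat on the rug"))  # → 0.875
print(cosine_similarity("the cat sat on the mat", "stock prices fell sharply"))  # → 0.0
```

Similar sentences score close to 1, unrelated ones close to 0; neural models improve on this by also catching synonyms that share no surface words.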
Natural Language Generation (NLG) is the ability of a machine learning model to generate output, in the form of text or audio, that resembles human-comprehensible language. In this task, the model generates sentences from the text datasets it was trained on. NLG is used for text summarization, replying to queries or questions, machine translation (translating from one language to another), and generating answers.
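As a toy stand-in for NLG, here is a bigram (Markov-chain) text generator in plain Python. Modern systems use neural language models, but the core idea of producing text one word at a time from learned statistics is the same; the tiny corpus below is invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = model.get(out[-1])
        if not nexts:
            break
        out.append(random.choice(nexts))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every generated word pair was seen in the training text, which is exactly why such simple models drift and repeat; neural generators condition on much longer context.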
Over the past two to three years, many advances have been made in the field of NLP. This progress has been driven by larger text datasets, cloud platforms for training large models, and the need for humans to communicate with computers in a language both can understand. The most important factors, however, are the introduction of the Transformer architecture and the use of transfer learning in NLP.
Now, models are pre-trained on large datasets, and the pre-trained model, with its parameters or weights adjusted, is used to solve the task at hand. This process of reusing pre-trained models to solve new problems is known as transfer learning. The pre-trained model is fine-tuned for tasks such as text classification, part-of-speech tagging, named entity recognition, text summarization, and question answering. Some of these terms may be unfamiliar if you are new to machine learning or NLP; feel free to ask about them in the comments section, or look them up for a better understanding and a deeper dive into the field.
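The freeze-then-fine-tune idea can be sketched in plain Python. The "pretrained" word vectors below are hypothetical, hand-made stand-ins for weights learned on a large corpus; they stay frozen, and only a small task-specific head (here a perceptron classifier) is trained on a few labeled examples.

```python
# Hypothetical "pretrained" vectors, standing in for weights learned
# on a large corpus. In transfer learning these are kept frozen.
PRETRAINED = {
    "good": [1.0, 0.2], "great": [0.9, 0.1],
    "bad": [-1.0, 0.1], "awful": [-0.9, 0.3],
    "movie": [0.0, 1.0], "plot": [0.0, 0.9],
}

def embed(sentence):
    """Average the frozen pretrained vectors of known words."""
    vecs = [PRETRAINED[w] for w in sentence.split() if w in PRETRAINED]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def fine_tune(examples, epochs=20, lr=0.5):
    """Train only the task head (a perceptron) on top of frozen embeddings."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = embed(text)
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

train = [("good movie", 1), ("great plot", 1), ("bad movie", 0), ("awful plot", 0)]
w, b = fine_tune(train)

def predict(text):
    x = embed(text)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(predict("great movie"), predict("awful movie"))  # prints: 1 0
```

The classifier generalizes to word combinations it never saw ("great movie", "awful movie") because the pretrained vectors already encode sentiment, which is exactly the payoff of transfer learning.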
Some of the recent advances in the field of Natural Language Processing are given below.
"Attention is all you need" this was a research paper published by Google employees. Ashish Vaswani et. al. published this paper which revolutionized the NLP industry. It was the first time the concept of transformers was referenced. Before this paper, RNN and CNN were used in the field of NLP but they had two problems
- Dealing with long term dependencies
- No parallelization during training
RNNs were not able to deal with long-term dependencies even with improvements such as bidirectional RNNs, LSTMs, and GRUs. Transformers with self-attention solved both problems and made a breakthrough in NLP, becoming the state of the art for seq2seq models, which are used for language translation.
The other most important development was the use of transfer learning in NLP. ULMFiT introduced the concept of transfer learning to the NLP community. It is a single universal language model fine-tuned for multiple tasks: the same model can be fine-tuned to solve three different NLP tasks. AWD-LSTM forms the building block of this model; AWD stands for ASGD (Averaged Stochastic Gradient Descent) Weight-Dropped.
BERT combines both of the advancements mentioned above, i.e. Transformers and transfer learning. It performs fully bidirectional training of Transformers and is a SOTA (state-of-the-art) model for 11 NLP tasks. It is pre-trained on the whole English Wikipedia, consisting of almost 2.5 billion words.
Transformer-XL outperformed even BERT in language modeling. It also resolved the issue of context fragmentation faced by the original Transformer.
The official site defines StanfordNLP as:
StanfordNLP is a Python natural language analysis package. It contains tools, which can be used in a pipeline, to convert a string containing human language text into lists of sentences and words, to generate base forms of those words, their parts of speech and morphological features, and to give a syntactic structure dependency parse.
It contains pre-trained neural models for 53 human languages, extending the scope of NLP to a global level instead of being restricted to English alone.
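The first stages of such a pipeline, splitting raw text into sentences and sentences into word tokens, can be sketched with simple regular expressions. This is a toy rule-based version for illustration only; StanfordNLP itself uses trained neural models for these steps.

```python
import re

def sentences(text):
    """Split raw text into sentences (toy rule: end punctuation + space)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    """Split a sentence into word tokens, separating punctuation."""
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "StanfordNLP is a Python package. It supports many languages!"
for s in sentences(text):
    print(words(s))
# → ['StanfordNLP', 'is', 'a', 'Python', 'package', '.']
# → ['It', 'supports', 'many', 'languages', '!']
```

Rules like these break quickly on abbreviations ("Dr. Smith") and on languages without spaces, which is why modern pipelines learn tokenization from data.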
GPT-2 stands for "Generative Pretrained Transformer 2". As the name suggests, it is used for tasks on the natural language generation side of NLP, and it is the SOTA model for text generation. GPT-2 can generate a whole article from a few input sentences. It is also based on Transformers and achieves state-of-the-art scores on a variety of domain-specific language modeling tasks. It is not trained on data specific to any of these tasks and is only evaluated on them as a final test; this is known as the "zero-shot" setting.
XLNet uses auto-regressive methods for language modeling instead of the auto-encoding used in BERT, and it combines the best features of both BERT and Transformer-XL.
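The difference between the two pre-training styles can be illustrated with attention masks: an auto-regressive model (GPT-2 style) lets each token attend only to earlier positions, while an auto-encoding model (BERT style) sees the whole sequence at once. A minimal sketch:

```python
def attention_mask(n, causal):
    """Build an n x n visibility mask: entry [i][j] is 1 if position j
    is visible to position i. Auto-regressive models use a causal mask;
    auto-encoding models see everything."""
    return [[1 if (not causal or j <= i) else 0 for j in range(n)] for i in range(n)]

for row in attention_mask(4, causal=True):
    print(row)   # lower-triangular: each token sees only the past
for row in attention_mask(4, causal=False):
    print(row)   # full matrix: each token sees the whole sentence
```

The causal mask is what lets auto-regressive models generate text left to right, while the full mask gives auto-encoding models bidirectional context for understanding tasks.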
The folks at Hugging Face have worked wonders with PyTorch-Transformers, a library through which we can use SOTA models such as BERT, XLNet, and Transformer-XL with just a few lines of Python code.
The Chinese search giant Baidu built this model, which features continual pre-training. It is a pre-trained language understanding model that achieved state-of-the-art (SOTA) results, outperforming BERT and the more recent XLNet on 16 NLP tasks in both Chinese and English.
It is FacebookAI's improvement over BERT. The development team at FacebookAI optimized BERT's training process and hyperparameters to produce this model.
It brings PyTorch Transformer models into spaCy for language processing and is also used for the deployment of Transformers; spaCy is used along with PyTorch to build the pipelines.
This is a multilingual language model covering almost 100 languages. It is the SOTA for cross-lingual classification and machine translation.
It is the successor to StanfordNLP and supports 66 languages. Stanza features a language-agnostic, fully neural pipeline for text analysis, including tokenization, multi-word token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition.
Further reading for a better understanding of the topics mentioned above: