
José Thomaz


How does NLP work under the hood?

Natural Language Processing (NLP) is a subfield of artificial intelligence that deals with the interaction between computers and human language. It involves enabling computers to read, understand, and generate human language in a way that is both meaningful and useful.

1. Tokenization

Tokenization is the process of breaking down text into individual words or subwords, called tokens. In some NLP models, these tokens can be even smaller pieces, called "subword units", such as:

  • character n-grams
  • byte pair encodings

The goal of tokenization is to create a standardized representation of text that can be processed by the model. For example, if two words share a common syllable or fragment, they can reuse the same token.
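As a rough illustration, here is a minimal sketch of subword tokenization using greedy longest-match against a toy, hand-written vocabulary (the vocabulary and words are made up for this example; real tokenizers like BPE learn their vocabularies from large corpora):

```python
# A minimal sketch of subword tokenization: greedy longest-match against a
# toy vocabulary. Real tokenizers (e.g. BPE) learn their vocabularies from data.
TOY_VOCAB = {"un", "happi", "ness", "token", "ization"}

def tokenize(word: str, vocab: set[str]) -> list[str]:
    """Split a word into the longest vocabulary pieces, left to right."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:          # no piece matched: fall back to a single character
            tokens.append(word[start])
            start += 1
        else:
            tokens.append(word[start:end])
            start = end
    return tokens

print(tokenize("unhappiness", TOY_VOCAB))   # ['un', 'happi', 'ness']
print(tokenize("tokenization", TOY_VOCAB))  # ['token', 'ization']
```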

2. Embedding

Once the text is tokenized, the tokens are transformed into numeric vectors called embeddings. This transformation is carried out with linear algebra operations, such as matrix multiplication and addition. These embeddings represent the meaning of the words and subwords.
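Conceptually, an embedding layer is just a lookup into a learned matrix: each token id selects one row. Here is a minimal NumPy sketch with made-up dimensions and a random matrix standing in for the values a real model would learn during training:

```python
import numpy as np

# Toy embedding lookup: one learned row per token in the vocabulary.
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_dim = 4
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))  # one row per token

token_ids = [vocab[t] for t in ["the", "cat", "sat"]]
embeddings = embedding_matrix[token_ids]  # shape (3, 4): one vector per token
print(embeddings.shape)
```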

3. Contextualization

In many NLP tasks, it is essential to understand the context in which words are used. Modern NLP models like GPT-4 employ Transformer architectures to do this. Transformers use self-attention mechanisms to model the relationships between words in a sentence, allowing the model to capture context-dependent meanings and long-range dependencies.
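The core of that self-attention mechanism can be sketched in a few lines. The example below uses the identity as the query/key/value projection for simplicity; real Transformers learn separate projection matrices and use multiple attention heads:

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence of token vectors (seq_len, d)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # similarity of every pair of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ x                                 # each output mixes information from all tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))     # 5 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)  # (5, 8)
```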

4. Language modeling and fine-tuning

Language modeling is a fundamental task in NLP, where the objective is to predict the next word in a sequence, given the previous words. It involves learning the probabilities of word sequences and helps capture the syntax and semantics of a language. Pre-trained language models like GPT-4 are trained on massive amounts of text data using unsupervised learning.
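To make the objective concrete, here is a toy language model that estimates the probability of the next word by counting bigrams in a tiny, made-up corpus. Neural models like GPT learn these probabilities over far longer contexts, but the prediction task is the same:

```python
from collections import Counter, defaultdict

# Toy language model: estimate P(next word | previous word) by counting bigrams.
corpus = "the cat sat on the mat the cat slept on the mat".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(prev: str) -> dict[str, float]:
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.5}
```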

The training objective depends on the model family: BERT-style models use masked language modeling, where a portion of the input tokens is masked and the model must predict the masked tokens from the surrounding unmasked context, while GPT-style models use a causal objective, predicting each next token from the tokens before it. Either way, this unsupervised pre-training allows the model to learn general language features and produce meaningful embeddings for tokens in various contexts.

Once the pre-training is complete, the model can be fine-tuned for specific tasks, such as machine translation, sentiment analysis, or text classification, using smaller labeled datasets. During fine-tuning, the model is trained using supervised learning, where the input-output pairs are provided by the labeled dataset. Fine-tuning allows the model to adapt its pre-trained knowledge to the specific task.
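Here is a minimal sketch of that supervised fine-tuning loop, assuming the pre-trained model already gives us a fixed feature vector per sentence (random vectors and labels stand in for real data), with only a small classification head being trained. Real fine-tuning usually updates the whole network, but the loop has the same shape:

```python
import numpy as np

# Fine-tuning sketch for sentiment classification: train a logistic
# classification head on top of fixed "pre-trained" sentence features.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 16))          # 8 sentences encoded by the "pre-trained" model
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical sentiment labels

w = np.zeros(16)                             # classification head: one logistic unit
b = 0.0
lr = 0.1
for _ in range(200):
    logits = features @ w + b
    probs = 1 / (1 + np.exp(-logits))        # sigmoid
    grad = probs - labels                    # gradient of the cross-entropy loss
    w -= lr * features.T @ grad / len(labels)
    b -= lr * grad.mean()

predictions = (1 / (1 + np.exp(-(features @ w + b))) > 0.5).astype(int)
print("training accuracy:", (predictions == labels).mean())
```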

5. Decoding

Decoding is the process of generating an output sequence or making predictions using the trained model. In the context of NLP, decoding typically involves converting the model's output probabilities into actual words or tokens.
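For example, given the model's scores (logits) over a toy vocabulary, decoding can pick the most likely token greedily or sample from the distribution for more varied output:

```python
import numpy as np

# Decoding sketch: turn made-up logits over a toy vocabulary into a token.
# A real model produces new logits at every generation step.
vocab = ["the", "cat", "mat", "sat"]
logits = np.array([2.0, 0.5, 0.1, 1.0])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(logits)

greedy_token = vocab[int(np.argmax(probs))]             # always pick the most likely token
rng = np.random.default_rng(0)
sampled_token = vocab[rng.choice(len(vocab), p=probs)]  # sample for more varied output

print("greedy:", greedy_token, "| sampled:", sampled_token)
```

In practice, generation combines ideas like these with strategies such as beam search, top-k, or nucleus sampling to balance quality and diversity.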
