
Tokens vs Chunks

When reading articles or documentation, you'll see that "tokens" and "chunks" are sometimes treated as synonyms, but they usually represent different granularity levels. Let's demonstrate with definitions:

 

Tokens

  • A token is the smallest unit of data that the NLP model processes, such as a sentence, a word, or a character (s1).
  • It's a way to break down and analyze text into manageable components (s2).
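To make this concrete, here's a minimal sketch in plain Python (no NLP libraries; the splitting rules are deliberately naive, just for illustration) showing the same sentence tokenized at three granularity levels:

```python
import re

text = "Hello there! My name is OdyAsh."

# Tokens as sentences: naively split on sentence-ending punctuation.
sentence_tokens = re.split(r"(?<=[.!?])\s+", text)
print(sentence_tokens)  # ['Hello there!', 'My name is OdyAsh.']

# Tokens as words: split on whitespace.
word_tokens = text.split()
print(word_tokens)  # ['Hello', 'there!', 'My', 'name', 'is', 'OdyAsh.']

# Tokens as characters: each character is its own token.
char_tokens = list(text)
print(char_tokens[:5])  # ['H', 'e', 'l', 'l', 'o']
```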

 

Chunks

  • A chunk is a group of tokens (s3).
  • For example, if we have this text: "Hello there! My name is OdyAsh (new paragraph) I like astronomy!", then depending on how you want to process it, you might use one of these configurations (see the sketch after this list):
    • tokens ⟺ sentences, chunks ⟺ paragraphs
    • tokens ⟺ words, chunks ⟺ sentences
    • tokens ⟺ words, chunks ⟺ noun phrases only (i.e., the tokens are processed so that they are grouped into chunks that are noun phrases; example: s4)
    • tokens ⟺ characters, chunks ⟺ words
    • tokens ⟺ characters, chunks ⟺ fixed-size spans (e.g., 200 characters each)
    • More examples: s5
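Here's a minimal sketch of two of those configurations in plain Python, using the example text above (assumptions: the "new paragraph" is a literal blank line, and the fixed chunk size is shrunk from 200 to 20 characters so the output stays short):

```python
text = "Hello there! My name is OdyAsh\n\nI like astronomy!"

# Config: tokens ⟺ sentences, chunks ⟺ paragraphs.
# A blank line separates paragraphs, so each paragraph becomes one chunk.
paragraph_chunks = text.split("\n\n")
print(paragraph_chunks)
# ['Hello there! My name is OdyAsh', 'I like astronomy!']

# Config: tokens ⟺ characters, chunks ⟺ fixed-size spans.
chunk_size = 20  # 200 in the text above; 20 here for readability
fixed_chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
print(fixed_chunks)
# ['Hello there! My name', ' is OdyAsh\n\nI like a', 'stronomy!']
```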

So, one might treat a chunk as a unit of data from which the NLP model gains useful information (s4), and by chunking down, we get to the details of each chunk, i.e., the tokens that form it (s6).
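And a tiny sketch of that drill-down, reusing `paragraph_chunks` from the previous snippet: iterate over each chunk, then break it into the word tokens that form it:

```python
for chunk in paragraph_chunks:
    word_tokens = chunk.split()  # chunk down: one chunk -> its word tokens
    print(f"{chunk!r} -> {word_tokens}")
# 'Hello there! My name is OdyAsh' -> ['Hello', 'there!', 'My', 'name', 'is', 'OdyAsh']
# 'I like astronomy!' -> ['I', 'like', 'astronomy!']
```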

 

Summary

  • Usually:
    • A chunk: a unit of data at a low (coarse) granularity level.
    • A token: a unit of data at a high (fine) granularity level.
  • Occasionally:
    • They are treated as the same thing.

 

If you have any questions/suggestions...

Your participation is most welcome! 🔥🙌

 

And if I made a mistake

Then kindly correct me :] <3

 

Sources
