José Pablo Ramírez Vargas

Posted on • Originally published at webjose.hashnode.dev

Invalidating JSON Web Tokens (JWT) the Right Way


Heredia, Costa Rica, 2022-12-10

Series: JWT Diaries, Article 1

NOTE: This is essentially what I explained in a previous post, now formalized as an article series. So that dev.to is not left with an incomplete set of articles, here it is. Wait for the next article (being uploaded shortly after this one) for the full demonstration in NodeJS + Express.

So it has come to my attention that many people don't know this way of invalidating JWTs. Most people seem to believe the only approach is to blacklist JWTs. It is not. In fact, blacklisting whole tokens is probably the most cumbersome way of invalidating them. Let's look at the cons of this seemingly popular, yet misguided approach:

  1. Token sizes can vary wildly, and some tokens grow a lot. A token's size depends on its payload, as you probably know, and depending on the data included, you can get some really big tokens. This makes your blacklist table potentially large. And for what? Tokens. Not business data. Tokens.

  2. You should never save a token. Blacklisted tokens are, for all intents and purposes, valid tokens: delete them from the blacklist and they are back in business. Saving tokens is therefore a major security risk.

  3. What happens if you want to invalidate a token for a specific user but don't have that user's token? Then you cannot blacklist it. So should you save all issued tokens? No. Refer to the previous point: never save tokens. Ok, so pick it up from structured log records? Still no! That is still saving the token!

  4. Since tokens are in practically infinite supply, your blacklist table will grow without bounds.

So let's see about alternatives.

Alternative 1: Assign Keys to Tokens

This is basically the same thing: keep a blacklist table in your database, but instead of saving the token, save only its unique key.

This alleviates the table size problem, and you are no longer saving tokens. The other problems, however, remain, and your administrative work around tokens grows. You must now generate unique 8- or 16-byte identifiers for issued tokens and persist these IDs somewhere. You'll probably also want to associate those identifiers with the user, so you can find token IDs through a user search. All of this you must implement just to keep blacklisting tokens.

If you would like to follow this path, be my guest, but I personally think it is not needed. Alternative #2 will become your best friend.
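For reference, this key-based approach can be sketched in a few lines. This is a minimal, hypothetical in-memory version: the `Set` stands in for the database table, and the standard `jti` ("JWT ID") claim is assumed to carry the token's unique key.

```javascript
// Sketch of Alternative 1: blacklist tokens by a unique ID instead of the
// token itself. The Set stands in for a database table; all names here are
// illustrative, not part of any real library.
const revokedTokenIds = new Set();

// Called when a token must be invalidated; only its ID is stored, never the token.
function revokeToken(jti) {
  revokedTokenIds.add(jti);
}

// Called on every request, after the token's signature has been verified
// and its payload decoded.
function isTokenRevoked(payload) {
  return revokedTokenIds.has(payload.jti);
}

revokeToken("a1b2c3d4");
console.log(isTokenRevoked({ jti: "a1b2c3d4" })); // true
console.log(isTokenRevoked({ jti: "ffffffff" })); // false
```

Note that everything the article criticizes still applies: you must generate and persist a `jti` for every token you issue, and the `Set` still grows without bounds.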

Alternative 2: Invalidate by the Token's "Issued At" Claim

All JWTs have the iat ("issued at") claim. The best mechanism is to keep the blacklist as a dictionary that maps user identifiers to a minimum accepted iat value. This technique guarantees the invalidation of all of a user's tokens issued before the specified minimum. Let's enumerate the benefits:

  1. No tokens are saved anywhere.

  2. Tokens do not require a unique key assigned to them.

  3. No extra effort is required around the creation of tokens because the iat claim is a standard claim found in every JWT.

  4. The blacklist table does not grow without bounds, because only one entry per user is needed. If a user goes through the token invalidation process again, only the most recent record matters: chronologically, the most recent date covers any past dates from previous invalidations. The blacklist table's maximum size therefore matches the total number of users and grows no further.

  5. It can be predicted when a blacklist entry can be discarded by taking the token's time to live into account: any entry whose minimum iat plus the token's time to live is in the past can be safely deleted.

  6. Invalidating globally every token ever issued is as simple as creating a single blacklist entry with no user association.

  7. When invalidating globally, all existing per-user blacklist entries can be deleted because the global entry covers them, keeping the table very small.
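Point 5 above deserves a small illustration. This is a hypothetical in-memory sketch, assuming a fixed token lifetime and a `Map` of user ID to minimum accepted iat (in Unix seconds) standing in for the blacklist table:

```javascript
// Sketch of blacklist pruning: an entry can be dropped once every token it
// could invalidate has expired on its own. The Map and all names are
// illustrative; a real implementation would run this against a database.
const TOKEN_TTL_SECONDS = 3600; // assumed token lifetime: 1 hour

// userId -> minimum accepted iat (Unix seconds)
const minIatByUser = new Map([
  ["alice", 1000],
  ["bob", 2000],
]);

function pruneBlacklist(nowSeconds) {
  for (const [userId, minIat] of minIatByUser) {
    // Any token the entry could block has iat < minIat, so it expires no
    // later than minIat + TTL; past that moment the entry is dead weight.
    if (minIat + TOKEN_TTL_SECONDS < nowSeconds) {
      minIatByUser.delete(userId);
    }
  }
}

pruneBlacklist(4601); // alice's entry (1000 + 3600 < 4601) is stale; bob's is not
console.log([...minIatByUser.keys()]); // ["bob"]
```

A periodic job like this keeps the table even smaller than its one-row-per-user ceiling.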

NOTE: We care about the number of records in the blacklist because it will be queried constantly, pretty much on every API request. To keep response times low, the blacklist lookup must be fast; the more records there are, the more time each lookup takes, and that time is passed on to the final response time.
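The whole mechanism can be sketched in a few lines. This is a minimal, hypothetical in-memory version, assuming the token has already been signature-verified and decoded, and that its payload carries the standard `sub` (user ID) and `iat` claims; all function names and the `Map` are illustrative only:

```javascript
// Sketch of Alternative 2: invalidate tokens by comparing their iat claim
// against a per-user (or global) minimum. In-memory stand-ins for DB rows.
const minIatByUser = new Map(); // userId -> minimum accepted iat (Unix seconds)
let globalMinIat = 0;           // the single "no user association" global entry

// Invalidate all of a user's tokens issued before `nowSeconds`.
function invalidateUserTokens(userId, nowSeconds) {
  minIatByUser.set(userId, nowSeconds);
}

// Invalidate every token ever issued; per-user entries become redundant.
function invalidateAllTokens(nowSeconds) {
  globalMinIat = nowSeconds;
  minIatByUser.clear();
}

// Run on every request: accept the token only if it was issued at or after
// the applicable minimum.
function isTokenAcceptable(payload) {
  const userMin = minIatByUser.get(payload.sub) ?? 0;
  return payload.iat >= Math.max(userMin, globalMinIat);
}

invalidateUserTokens("alice", 5000);
console.log(isTokenAcceptable({ sub: "alice", iat: 4999 })); // false
console.log(isTokenAcceptable({ sub: "alice", iat: 5000 })); // true
console.log(isTokenAcceptable({ sub: "bob", iat: 4999 }));   // true
```

In an Express application, `isTokenAcceptable` would simply be called from the authentication middleware, rejecting the request with a 401 when it returns false.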

Demonstrating Alternative #2

Ok, we have covered the theory around token invalidation. Now let's focus on implementing it. The next articles in this series demonstrate implementations in NodeJS + Express and .Net 6. The one for NodeJS is already out there, so what are you waiting for?

Happy coding!
