DEV Community

Discussion on: Please Stop Using Local Storage

prodigalknight

While I agree with most of your points, there are hacks or workarounds to get past most of the limitations of localStorage and sessionStorage.

At my last job, I was tasked with writing a SPA, but the person in charge of the backend was an absolute idiot who would rather re-use old, inefficient code than write new code, so I ended up using sessionStorage to store a lot of information (we worked with PHI [Protected Health Information], so this was really a terrible workaround in practice).

Since we had to support IE10, I couldn't implement an IndexedDB solution, and I ran into several issues. The first was the lack of security. To address it, I incorporated CryptoJS into the webapp and encrypted everything before storing it, decrypting on retrieval. The encryption key was generated anew every time the page was refreshed, and if stored data couldn't be decrypted with the current key, it was deleted and re-fetched.

This led to a second problem: some of the data I needed to store was quite large, and encrypting everything meant I had to base64 encode the result in order to stuff it into sessionStorage, which led to a 33% bloat on everything. Sometimes, this would blow out the 5MB limit. So I ended up incorporating a GZIP library to compress the data when storing and then decompress when retrieving.

This led to a third problem: GZIP isn't the fastest algorithm, and this was all being done synchronously, so when compressing/decompressing large data sets, the browser tab would freeze (we had some searches that could return 3-4MB of data). Fortunately, IE10 at least supports Dedicated Web Workers, so I was able to offload the encryption/compression and decompression/decryption to a worker thread. The worker took the data to be encrypted/compressed/decrypted/decompressed and an encryption key, and returned the result.
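The worker's contract can be reduced to a pure request handler, which keeps the actual threading wiring trivial (a sketch: the message shape, `makeWorkerCore`, and the `pack`/`unpack` action names are my own, and the crypto/compression functions are injected stand-ins rather than CryptoJS and a real GZIP library):

```javascript
// Pure core of the worker: takes the payload and key, returns the result.
function makeWorkerCore({ compress, decompress, encrypt, decrypt }) {
  return function handle({ action, data, key }) {
    switch (action) {
      case "pack":   return encrypt(compress(data), key);
      case "unpack": return decompress(decrypt(data, key));
      default:       throw new Error(`unknown action: ${action}`);
    }
  };
}

// In the browser, the wiring around the core would be just:
//   self.onmessage = (e) => postMessage(handle(e.data));
// and, on the main thread:
//   worker.postMessage({ action: "pack", data, key });

// Trivial stand-in transforms so the sketch is runnable end to end.
const core = makeWorkerCore({
  compress:   (s) => "C:" + s,
  decompress: (s) => s.slice(2),
  encrypt:    (s, k) => k + "|" + s,
  decrypt:    (s, k) => {
    if (!s.startsWith(k + "|")) throw new Error("bad key");
    return s.slice(k.length + 1);
  },
});
```

Keeping the core pure also means it can be unit-tested without spinning up a Worker at all.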

Due to the sheer amount of data we could end up processing in a short period of time, I also ended up writing a worker thread pool manager, so that two or more dedicated workers could share the load without causing lag spikes during heavy reads from and writes to sessionStorage.
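A pool manager of that kind boils down to a small queue in front of N workers (hedged sketch: the real version presumably wrapped IE10 dedicated Workers, while here each "worker" is just an async function so the sketch runs anywhere, and `WorkerPool` is my own name):

```javascript
class WorkerPool {
  // `runTask(task)` stands in for postMessage + onmessage on a real Worker.
  constructor(size, runTask) {
    this.idle = Array.from({ length: size }, (_, id) => ({ id, runTask }));
    this.queue = [];
  }

  // Enqueue a task; it runs as soon as a worker is free.
  exec(task) {
    return new Promise((resolve, reject) => {
      this.queue.push({ task, resolve, reject });
      this._drain();
    });
  }

  _drain() {
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.pop();
      const { task, resolve, reject } = this.queue.shift();
      worker.runTask(task).then(resolve, reject).finally(() => {
        this.idle.push(worker); // return the worker, then pull more work
        this._drain();
      });
    }
  }
}
```

With two workers, a burst of pack/unpack requests queues up instead of all landing on the main thread at once, which is the "no lag spikes" property described above.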

Eventually, the solution I ended up with was as follows:

  • Generate a random encryption key every time a user logs in
  • Store that key in sessionStorage using a different, fixed encryption key (insecure, but only accessible inside our own code)
  • Whenever a request that can be cached finishes, send the data off to a worker thread to be compressed, encrypted, and then compressed again (this yielded the best overall size - one 3.5MB JSON payload was compressed to a little over 220KB once)
  • Store the result in sessionStorage
  • Reverse the process to retrieve
adamwknox

I hope this is a joke

prodigalknight

No, that is not a joke. What part of it made you hope that it was a joke? Perhaps I can clarify.

Randall Degges (rdegges), Author

There are a lot of security issues in the architecture you described above:

  • Using crypto in client-side JS
  • Storing an encryption key on a client
  • etc.

There are other ways to do this stuff safely, although I don't envy your situation.

In a lot of cases focusing on security isn't possible: maybe it's due to a very bad team dynamic (like back-end developers not wanting to work with you), maybe it's due to legacy constraints -- whatever the reason, it isn't always feasible.

I like to keep things simple and try to focus on security for apps that require it -- and in those cases I just do the most basic, straightforward thing possible.

If you ever find yourself using encryption tools manually (especially in JS) you may want to re-evaluate your goals and see if there's something simpler you can do.

prodigalknight

I didn't really have a choice. The users wanted the SPA to be fast, my bosses wanted it to be secure, and the backend engineer was unwilling to address his performance issues. I had to compromise a bit.