Jonas Schumacher

Great Talks: Jest Architecture Overview

❓ This post is the first in a series in which I will recommend and discuss programming talks that I liked and found interesting. The idea is not to provide a detailed summary, but to draw your attention to the talk itself, hopefully get you to watch it, and then add a few of my own observations and highlights.

The Talk

We'll kick this series off with a talk I watched recently. Jest Architecture Overview by Christoph Nakazawa goes over the architecture of the Jest testing framework at a really well-chosen level of detail: not too high, not too low, just right to get a better understanding of, and a feeling for, what is actually going on when you run Jest.

Notes & Observations

  • The lo-fi presentation is actually really refreshing and makes this feel like an architecture session you would do with your colleagues
  • Whiteboard might actually beat out PowerPoint for technical topics, probably because it doesn't feel like you need to dumb anything down to fit the medium
  • Also, there's a lot more meaningful interaction between the speaker and the text
  • Basically, this has shown me how much untapped potential there is for technical talks to give you real insight instead of merely the feeling of insight
  • Nakazawa talks a lot about the things Jest does purely to deliver a better user experience
  • Jest does a lot of work to prioritize which tests to run first: e.g. it maintains a cache of tests and their results and will run tests that failed during the previous run first (roughly sketched in the first code example after this list)
  • The most basic heuristic used for this is file size, though that breaks down when a test file imports a lot of code
  • Over 50% of the work done during the first run might actually be transforming the source; these results are cached, though, which is why subsequent runs can be a lot faster (see the transform-cache sketch after this list)
  • Transforms are applied very late in the process and only when the code is actually needed. They aren't done earlier because the static analysis required would be hard to do reliably and might actually waste a lot of time
  • Long-running tests are prioritized by the scheduler so that they finish at roughly the same time as the smaller tests. The idea is to dedicate one of the available cores or threads to such a long-running test while the others run the smaller ones. (Here, it isn't made clear what happens if there are a lot of long-running tests. Will they occupy the available resources until there aren't enough long-running tests left to fill all cores? Or is the scheduler "smarter" than that? Would it actually be smarter at all to keep a core free, even if there are a lot of slow tests? The naive greedy schedule sketched after this list is my own guess at an answer.)
  • Jest is made up of a lot of small packages with very specific tasks
  • These might actually be swapped out individually via the config if your use case calls for it (see the config example after this list)
  • There's a lot going on, but there also seems to be a lot of change over time, with more and more pieces being split up, replaced etc. Nothing comes out perfect on the first try.
  • When it comes down to it, it's really all about the basics: walking a directory, filtering out files, importing code etc. These things are then combined into a framework, and Jest's architecture makes this very explicit (a bare-bones sketch of this skeleton closes out the post)
  • Naming is hard for everyone
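
To make the failed-first idea a bit more concrete, here is a minimal sketch of how such an ordering could look. The `TestEntry` shape, its cache fields and the file-size fallback are my own assumptions for illustration; this is not Jest's actual data structure or code.

```typescript
// Hypothetical shape of a per-test cache entry, only for illustration.
interface TestEntry {
  path: string;
  fileSize: number;          // fallback heuristic when no timing data exists yet
  previousDuration?: number; // milliseconds from the last run, if known
  failedLastRun?: boolean;
}

// Order tests so that previously failed tests come first, then the slowest
// known tests (so they can occupy a worker early), then the largest files.
function orderTests(tests: TestEntry[]): TestEntry[] {
  return [...tests].sort((a, b) => {
    if (!!a.failedLastRun !== !!b.failedLastRun) {
      return a.failedLastRun ? -1 : 1;
    }
    const scoreA = a.previousDuration ?? a.fileSize;
    const scoreB = b.previousDuration ?? b.fileSize;
    return scoreB - scoreA; // descending: slow/large first
  });
}
```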
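
The lazy, cached transform step can be pictured roughly like this. Again, this is my own sketch rather than Jest's code: the real implementation persists its cache to disk between runs, which is what makes the second run faster, while the in-memory map here only shows the shape of the idea.

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// A stand-in for whatever transformer is configured (Babel, ts-jest, ...).
type Transformer = (source: string, filename: string) => string;

const transformCache = new Map<string, string>(); // content hash -> transformed code

// Transform a file lazily: only called when the module is actually required,
// and only re-run when the file's contents have changed.
function transformIfNeeded(filename: string, transform: Transformer): string {
  const source = readFileSync(filename, "utf8");
  const key = createHash("sha1").update(filename).update(source).digest("hex");

  const cached = transformCache.get(key);
  if (cached !== undefined) {
    return cached; // subsequent runs mostly hit this path
  }

  const transformed = transform(source, filename);
  transformCache.set(key, transformed);
  return transformed;
}
```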
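
As for my scheduling question above, here is one plausible (and deliberately naive) answer, sketched as a greedy longest-first assignment: long tests simply get handed out across all workers until shorter ones start filling the gaps. Whether Jest's scheduler actually works this way is not something the talk answers; `estimatedDuration` and the whole `planSchedule` helper are hypothetical.

```typescript
interface ScheduledTest {
  path: string;
  estimatedDuration: number; // from the previous run's timings, assumed to exist
}

// Greedy longest-first assignment across `workerCount` workers: each test goes
// to whichever worker is expected to become free first.
function planSchedule(tests: ScheduledTest[], workerCount: number): string[][] {
  const longestFirst = [...tests].sort(
    (a, b) => b.estimatedDuration - a.estimatedDuration,
  );
  const workers = Array.from({ length: workerCount }, () => ({
    busyUntil: 0,
    assigned: [] as string[],
  }));

  for (const test of longestFirst) {
    const worker = workers.reduce((best, w) =>
      w.busyUntil < best.busyUntil ? w : best,
    );
    worker.assigned.push(test.path);
    worker.busyUntil += test.estimatedDuration;
  }
  return workers.map((w) => w.assigned);
}
```

In this naive version, a large batch of slow tests does occupy all workers for a while, which would be one straightforward answer to the question.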
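
The swapping itself happens through the regular Jest config. The options shown below (`testEnvironment`, `transform`, `testSequencer`) do exist; the concrete values are just placeholders for whatever your use case calls for.

```typescript
// jest.config.ts
import type { Config } from "jest";

// Each option points at a small, single-purpose package that can be replaced.
const config: Config = {
  testEnvironment: "node",                // or "jsdom", or a custom environment
  transform: {
    "^.+\\.tsx?$": "ts-jest",             // swap the code transformer
  },
  testSequencer: "./my-sequencer.js",     // swap the test-ordering logic
};

export default config;
```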
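
And to close with the "it's all just the basics" point: stripped of workers, module registries and reporters, the skeleton really is something like the sketch below. This is nowhere near what Jest actually does, just the building blocks the talk keeps coming back to.

```typescript
import { readdirSync } from "fs";
import { join } from "path";

// Recursively collect files under `dir` whose names match the test pattern.
function findTestFiles(dir: string, pattern = /\.test\.(js|ts)$/): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const fullPath = join(dir, entry.name);
    if (entry.isDirectory()) return findTestFiles(fullPath, pattern);
    return pattern.test(entry.name) ? [fullPath] : [];
  });
}

// "Running" the tests here is just importing each file; a real framework wraps
// this in worker processes, a module registry, reporters and so on.
async function runAll(rootDir: string): Promise<void> {
  for (const file of findTestFiles(rootDir)) {
    await import(file); // executing the file registers and runs its tests
  }
}
```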
