I was working on my project when I noticed my laptop fan spinning up as if before takeoff. I checked `htop` and found a `node` process spawned by WebStorm whose CPU consumption skyrocketed to 100% every time I edited a TS file. It took 10–20 seconds for the process to finish its work and release the CPU.
I started googling and encountered quite a few issues about TypeScript and crazy fan spinning, submitted both to TypeScript and to WebStorm. Unfortunately, they were barely helpful, so I decided to do some research of my own.
I asked for a suggestion on the JetBrains community forum and was kindly instructed to capture a V8 profile. It showed that some heavy 20-second computation was triggered by `getSemanticDiagnostics`.

`getSemanticDiagnostics` is a TS language service method which analyzes a file for errors like "x is not assignable to type y", "type x does not have property y", etc. It seems fine that WebStorm invokes it on each edit, but what keeps it busy for so long? Busy waiting? An endless loop? To find out, I decided to get my hands really dirty.
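By the way, you can trigger the same check outside the editor through the TypeScript compiler API. Here's a minimal sketch, assuming the `typescript` package is installed; the file name and its contents are my own example:

```typescript
import * as ts from "typescript";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// A tiny file with a deliberate type error for the checker to find.
const file = path.join(os.tmpdir(), "sample.ts");
fs.writeFileSync(file, "const n: number = 'hello';\n");

// Building a Program runs the same checker machinery the language service uses.
const program = ts.createProgram([file], { strict: true });

// getSemanticDiagnostics() is where the expensive type inference happens.
const diagnostics = program.getSemanticDiagnostics(program.getSourceFile(file));

for (const d of diagnostics) {
  console.log(ts.flattenDiagnosticMessageText(d.messageText, "\n"));
}
// Reports: Type 'string' is not assignable to type 'number'.
```

The language service wraps this same checker, and in the editor the call fires on every edit, which is exactly the CPU spike visible in `htop`.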
I attached a debugger to the `node` process and paused it several times. Each time there was a very, very long stack:
At first sight it looks too hard to figure out what's going on, but there are a few things that reveal the general picture.
First, there's a loop iterating over all statements in the file:
Next, down the stack, there's a type inference request for a specific place in my file, visible through the following frames:

Apparently this request is executed for every single part of the file. Further down, it lands in a long recursive chain of `recursiveTypeRelatedTo()` and similar calls which, as seen, perform the real CPU-intensive work of inferring types.
Indeed, a lot of languages can infer types, so is there anything special about TS? I see two things:

- The TS type system is exceptionally rich and powerful, way more powerful than that of Java or Scala. That also means the amount of code that performs type inference is huge.
- On each edit, WebStorm calls the TS Server's `getSemanticDiagnostics`. The method analyzes the whole edited file, running type inference wherever it's needed, and that inference is very, very expensive and, it seems, not linear in the file size.
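To illustrate that richness, here's a small sketch (my own example, not taken from the profile) of the kind of computed, structural types the checker has to evaluate:

```typescript
// Conditional types with `infer` ask the checker to *compute* a type
// rather than merely look one up, machinery that Java's or Scala's
// type systems don't have.
type Unwrap<T> = T extends Promise<infer U> ? U : T;

type A = Unwrap<Promise<number>>; // evaluates to: number
type B = Unwrap<string>;          // evaluates to: string
const a: A = 42;                  // compiles because A resolved to number
const b: B = "hi";                // compiles because B resolved to string

// Structural typing: assignability is decided by comparing shapes member
// by member, the recursive comparison that recursiveTypeRelatedTo() performs.
interface Point { x: number; y: number }
const p: Point = { x: 1, y: 2 }; // a plain object literal is assignable

console.log(p.x + p.y); // prints 3
```

Every such type has to be re-evaluated as you edit, and the more types a file mentions, the more of this work the checker does.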
Or, putting it in one short conclusion:
> TypeScript is slow by nature. Its performance degrades with the file size, and the relation is likely non-linear.
Just keep files small. Exactly how small depends on your computer and on how much fan noise you can tolerate 😀 I personally try to stick to these limits:
- No more than ~400 lines
- No more than ~20 imports
The TS codebase grows with each release, bringing us new cool features. But this also means that one day even short files may burn out our CPUs! What can the TS team do about it? I'm no sci-fi writer, but I'll try to predict the future 🤓

So, the TS team could:
1. Migrate TS to some compiled language. Too late perhaps, but who knows 😉
2. Make the TS language service able to partially analyze a file. This requires very deep integration between the service and an editor, which may bring other hard problems.
3. Utilize WebAssembly. It's not yet stable in `node`, but that will happen one day. The language service could be split into an API part and a computational part, with the latter compiled to WASM.
4. Develop or adopt a tool that compiles JS (TS) to some lower-level representation like LLVM IR, or even to native code!
I believe the TS team will adopt (3) or (4). Moreover, I suppose it's partially possible even now! However, that's not a path for the majority of web developers, so the TS team would need to build some tooling to simplify the process. And, of course, editors would need to support whichever optimization path the TS team chooses.
Every technology has its limitations, and TypeScript is no exception. And we must admit there's certainly room for improvement! So, stay tuned for news!
Thanks for reading to the end. If you found it interesting, please consider leaving some feedback or following me on DEV.to or Twitter. I'm new here, and I'd be glad to know whether this kind of content is helpful. Thanks!