Choosing the right tools to test a visualization library

Stories from the trenches

At Linkurious, we’ve designed Linkurious Enterprise, a platform that leverages the power of graphs and graph visualizations to help analysts and investigators around the globe fight financial crime.

One of the main features of Linkurious Enterprise is a user-friendly graph visualization interface aimed at non-technical users.
In 2015, unhappy with the state of JavaScript graph visualization libraries, we started developing our own: Ogma.

Ogma is a JavaScript library we built that is focused on network visualization: you may have seen networks visualized before in JavaScript with other tools like D3.js or Sigma.js, but for us it was very important to enable specific features that were not yet available in other libraries, hence the creation of the Ogma visualization library from the ground up.

The problem

As part of our journey while developing our graph visualization library, we encountered many challenges. One of these challenges is: what is the best way to test a visualization library?

Testing visualization libraries is important because the complexity of the codebase is very high: several algorithms, multiple renderers, and a vast API surface make it very hard to keep things manageable without an automated system.
Given this complexity, and the sheer number of things to test relative to the resources available, it is non-trivial to come up with a testing solution that covers all the many aspects of a visualization library.

Our solution

We think that testing a library is not just about testing the library itself, but about testing the whole experience of using it as a developer:

  • coding with the library itself
  • reading the documentation
  • using the working examples

After several iterations, we ended up with a mix of approaches that we think works great:

Unit tests

How many times have you been told to write unit tests? So it's no surprise to hear that we're using them, I guess.

The interesting thing is that we are not 100% focused on these: writing unit tests is pretty expensive, as it requires testing each single bit of the library in isolation and in multiple scenarios, which takes a lot of time and resources.

Because of that, we're taking a more pragmatic approach instead, as Guillermo Rauch puts it:

"Write tests. Not too many. Mostly integration."

Integration tests

Integration tests are not so different from unit tests; some people even consider them unit tests as well. The biggest difference is that integration tests run against the external API of the library rather than against specific internal modules.

This approach tests a wider spectrum of the code and, from a library point of view, it is the closest we can get to the developer experience when it comes to catching and keeping control of bugs.

Developers using the library only see the API as the gateway to its state and behaviour: that is why we want to stress this side as much as we can, to catch both new bugs and regressions before the release process.

For this reason the Ogma API is covered as much as possible, and the coverage tool is fundamental here to see whether all paths are reached before going down the rabbit hole of inner modules, where unit tests kick in for specific and edge-case testing.
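
To give an idea, here is a minimal sketch of what such an API-level test looks like with mocha and chai (the tools listed below). The Ogma calls shown (addNode, getNodes) are illustrative names rather than the exact API:

```ts
import { expect } from "chai";
import Ogma from "@linkurious/ogma";

// Runs under mocha, which provides describe/it as globals.
describe("public graph API", () => {
  it("exposes added nodes through the API", async () => {
    const ogma = new Ogma();
    // addNode / getNodes are illustrative, not necessarily the exact signatures.
    await ogma.addNode({ id: "n1" });
    expect(ogma.getNodes().size).to.equal(1);
  });
});
```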

Mind cross-browser compatibility

While Ogma can also run in a Node.js process, the fact that it is a visualization library makes it extra important that it works flawlessly cross-browser. That's why all integration tests are run against a wide set of browsers and operating systems: from Internet Explorer 11 on Windows 7 to the latest Chrome version on macOS.

Tools:

  • mocha.js (in combination with chai.js) for both Node.js and cross-browser environments
  • BrowserStack (or any alternative works just as well)
  • nyc for code coverage

Process:

These tests also run within the CI for every commit on PRs in each repository, in both Node.js and cross-browser environments.

A pre-push hook is in place to run them locally in Node.js, to prevent developers from breaking things when pushing upstream.
A check for stray .only calls is also in place here, since a leftover .only would silently skip the rest of the suite (see the sketch below).
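
Here is a hypothetical sketch of what such a guard can look like; the test/ layout and file extension are assumptions:

```ts
// check-only.ts - pre-push guard: fail if a stray `.only` is left in a
// test file, which would silently exclude the rest of the suite.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const offenders: string[] = [];
for (const file of readdirSync("test")) {
  if (!file.endsWith(".spec.ts")) continue;
  const source = readFileSync(join("test", file), "utf8");
  if (/\b(describe|it)\.only\s*\(/.test(source)) offenders.push(file);
}

if (offenders.length > 0) {
  console.error(`Stray .only found in: ${offenders.join(", ")}`);
  process.exit(1); // non-zero exit aborts the push
}
```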

Rendering tests

Once we've checked that all the code is good, it should be enough to release, right? Well, not yet.

The most important feature of a visualization library is its… rendering output.

We can test the rendering instructions sent to the rendering engine and validate them, but the final result is not something that can simply be spotted from logs or code.

That's why rendering tests become important to support the QA of the library and reduce the amount of regression bugs. Ogma provides three different rendering engines (SVG, Canvas and WebGL), and each browser has its own quirks for each of them that we need to spot before releasing a new version.

In this context, tools like puppeteer or Selenium-like drivers come in very handy to quickly put together visual regression tests: a test is a web page with a network visualization with specific attributes, which gets rendered and exported as an image, then diffed against a reference image.
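
A minimal version of that flow, sketched with puppeteer plus pngjs and pixelmatch for the image diff (the test page URL and reference image path are assumptions):

```ts
import puppeteer from "puppeteer";
import pixelmatch from "pixelmatch";
import { PNG } from "pngjs";
import { readFileSync } from "node:fs";

// Renders the test page, screenshots it, and returns the number of
// pixels that differ from the reference image beyond the threshold.
async function diffAgainstReference(url: string, referencePath: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" }); // let the graph render
  const screenshot = PNG.sync.read(Buffer.from(await page.screenshot()));
  await browser.close();

  const reference = PNG.sync.read(readFileSync(referencePath));
  const diff = new PNG({ width: reference.width, height: reference.height });
  return pixelmatch(
    screenshot.data, reference.data, diff.data,
    reference.width, reference.height, { threshold: 0.1 }
  );
}
```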

Tools:

  • puppeteer (or a Selenium-like alternative) to render the test pages, export them as images and diff them against the reference images

Process:

  • These tests are run at every commit on PRs on the CI.
  • Because rendering output differs across engines and platforms, tests and reference images are specific to each platform.

Documentation examples testing

Here we are at the end of our post, covering the most undervalued side of testing: the resources given to the developer, beyond the library itself!

Documentation is hard to test automatically. There are probably tools out there that are smart enough, or that leverage some sort of NLP, to verify documentation text, but in our case it is too complex to handle right now: a human reading and checking is still the best we can do for the text. What we can actually check are the types and definitions, by combining the documentation with the TypeScript definition file.

The TS definition file should expose only those types necessary to interact with the API, so some internal types are stripped out of the definitions, and an integrity check is performed on the result to verify its consistency.
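
One way to picture that integrity check: compile a tiny consumer file against the generated definitions with `tsc --noEmit`; if a public type accidentally still references a stripped internal type, compilation fails. The imported names below are assumptions for illustration:

```ts
// definition-check.ts - compiled with `tsc --noEmit` against the
// generated ogma.d.ts; the imported names are illustrative.
import Ogma, { Node, Edge } from "@linkurious/ogma";

// If any public type still references a stripped internal type,
// these declarations fail to compile.
const ogma: Ogma = new Ogma();
const nodes: Node[] = [];
const edges: Edge[] = [];
```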

The other side of the documentation is the examples, which developers often use to better understand the features of the library: it is very important that these examples don't break and that developers are able to run them locally.

Tools:

  • puppeteer for examples checking (it primitively checks for thrown exceptions; see the sketch after this list)
  • TypeScript + JSDoc to generate the right signature types
  • tsc for the definition file integrity check
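
A sketch of that examples check: load each documentation example with puppeteer and fail if the page throws an uncaught exception. How exampleUrls is built from the docs is an assumption here:

```ts
import puppeteer from "puppeteer";

// Visits each example page and fails on the first uncaught exception.
async function checkExamples(exampleUrls: string[]) {
  const browser = await puppeteer.launch();
  for (const url of exampleUrls) {
    const page = await browser.newPage();
    const errors: Error[] = [];
    page.on("pageerror", (err) => errors.push(err)); // uncaught exceptions
    await page.goto(url, { waitUntil: "networkidle0" });
    await page.close();
    if (errors.length > 0) {
      await browser.close();
      throw new Error(`${url} threw: ${errors[0].message}`);
    }
  }
  await browser.close();
}
```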

Process:

  • Example tests are run at every commit on PRs on the CI.
  • Definition files are checked both in a pre-commit hook and on the CI at each commit.
  • The codebase has recently been ported from JavaScript to TypeScript, so the signature check is now permanent during development - probably a theme for another blog post in the future ;)

Conclusions

The different approaches of each type of testing helped us span the whole developer experience, from the documentation to the actual rendering results of the library, providing an effective tool to detect and handle breaking changes before they can reach the release script.

Creating the right mix of tests has been a long effort for the team over the past years, and it recently consolidated with pre-push hooks and a re-designed CI flow that provides quick feedback.

The mix of different levels of testing helped us tame the complexity of the library: it reduced the number of bugs we receive, prevented regressions, and increased the speed of enhancements, all within the resources available in the company.

We hope you've enjoyed the blog post and got some value out of it. If you have any questions, suggestions or comments, please let us know. And remember, we're always looking for nice people who like to test, so let us know if you like to write code and test it!
