Canvas color sampling

Daniel Poda 🇨🇦 ・3 min read

I was reminiscing recently, going through some of my old CodePens, and noticed that their performance had degraded substantially. These pens had a couple of things in common: they used particles and noise fields. After some investigation, I found that disabling color sampling made the performance issue disappear.

So, what was the problem?

I was using getImageData to do color sampling.

Here is the bad code:

/**
 * @param {number} x x-coordinate
 * @param {number} y y-coordinate
 * @param {CanvasRenderingContext2D} ctx
 * @returns {Uint8ClampedArray} [r, g, b, a] where each value is 0..255
 */
function getColor (x, y, ctx) {
  return ctx.getImageData(x, y, 1, 1).data;
}

Back when my browser did not use a hardware-accelerated canvas, this worked fine. Now that browsers render the canvas on the GPU, every call pays a round-trip penalty: reading pixels back means synchronizing with the GPU. When I'm sampling noise-field colors for hundreds of particles per frame, the computer spends more time sending data to and receiving data from the GPU than doing any actual calculation. As a result, code that worked reasonably well a couple of years ago was ground to a near halt by a browser performance improvement.

After finding this out, I've switched to a slightly more cumbersome way of getting the color value at the pixel.

So what's the solution?

/**
 * @param {number} x x-coordinate
 * @param {number} y y-coordinate
 * @param {ImageData} imageData
 * @returns {Array<number>} [r, g, b, a] where each value is 0..255
 */
function getColor (x, y, imageData) {
  var i = ((x >> 0) + (y >> 0) * imageData.width) * 4;
  var data = imageData.data;
  return [data[i], data[i + 1], data[i + 2], data[i + 3]];
}

Note that instead of the context (ctx), the function now needs imageData to be passed.

var canvas = document.createElement('canvas'); // or select from document...
var ctx = canvas.getContext('2d');
// read the pixels back once, instead of once per sample
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
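To show the payoff, here is a minimal sketch of the one-readback-many-lookups pattern. Since there is no DOM here, the ImageData is mocked as a plain `{width, data}` object filled with an arbitrary gradient, and the particle count is made up for illustration; `getColor` is included so the sketch is self-contained.

```javascript
// getColor, included here so the sketch runs on its own
function getColor(x, y, imageData) {
  var i = ((x >> 0) + (y >> 0) * imageData.width) * 4;
  var data = imageData.data;
  return [data[i], data[i + 1], data[i + 2], data[i + 3]];
}

// Stand-in for ctx.getImageData(0, 0, w, h): a 100x100 pixel field
// with illustrative channel values (r follows x, g follows y).
var w = 100, h = 100;
var field = { width: w, data: new Uint8ClampedArray(w * h * 4) };
for (var y = 0; y < h; y++) {
  for (var x = 0; x < w; x++) {
    var i = (x + y * w) * 4;
    field.data[i] = x;       // r
    field.data[i + 1] = y;   // g
    field.data[i + 2] = 128; // b
    field.data[i + 3] = 255; // a
  }
}

// Hundreds of particles, one readback: each sample is just array math.
var colors = [];
for (var p = 0; p < 500; p++) {
  colors.push(getColor(Math.random() * w, Math.random() * h, field));
}
console.log(colors.length); // 500
```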

Instead of passing x and y coordinates to getImageData, we look the data up in a Uint8ClampedArray representing the image. This is an array of integers, each a value from 0 to 255, laid out in row-major order, with every pixel represented by a sequence of 4 values, one per channel (r, g, b, a). The array itself carries no information about the size of the canvas, which is why the width from the ImageData object is needed to calculate the location of the relevant values.

So to look up a value, we get the x position and add the y position multiplied by the canvas width, and then multiply the sum by 4 (for 4 channels).
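As a worked example of that index arithmetic (the numbers are made up for illustration):

```javascript
var width = 5;        // a 5-pixel-wide image
var x = 3, y = 2;
var i = (x + y * width) * 4; // (3 + 10) * 4
console.log(i);              // 52: data[52..55] hold r, g, b, a
```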

I've also rounded the coordinates down using a bit shift, n >> 0, which for non-negative values has the same effect as Math.floor(n). The reason is that if a fractional coordinate were passed in, the computed index would be fractional; array indices must be whole numbers, and rounding the index after the multiplication would point at the wrong pixel.
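A quick check of that truncation. One caveat worth knowing: for negative numbers `>> 0` truncates toward zero rather than flooring, but canvas coordinates are non-negative, so it doesn't matter here.

```javascript
console.log(3.7 >> 0);   // 3, same as Math.floor(3.7)
console.log(0.999 >> 0); // 0

// They only differ below zero:
console.log(Math.floor(-1.5)); // -2
console.log(-1.5 >> 0);        // -1
```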

So what's the moral of the story?

By moving canvas rendering to the GPU, browsers exposed my bad color-sampling code from the past and degraded its performance. Also, in spite of what you may read in Google search results, getImageData absolutely is affected by hardware acceleration on modern browsers: each call forces a readback from the GPU. I've tested this by reverting to the old code, and it performs better with hardware acceleration disabled; the performance profiling tools show a huge load on the GPU.

Any shameless plug?

No, but here are some of the updated codepens
