Adam Nathaniel Davis

Dithering Images with React/JavaScript

[NOTE: The live web app that encompasses this functionality can be found here: https://www.paintmap.studio. All of the underlying code for that site can be found here: https://github.com/bytebodger/color-map.]

OK, I lied. In the last installment, I said that this article would be about using digital color mixing to expand the depth of our color palette. But some of the comments on my previous articles made me realize that maybe it would be best to address the dithering question first.


[Header image: the same cat three ways - the original on the left, the image matched to the "closest" palette colors (with visible banding) in the middle, and the dithered version on the right]

What is dithering?

In short, dithering is a process whereby we distribute "noise" in an image across multiple pixels. But before I go further into what dithering is, it may be useful to explain what dithering is intended to fix.

A huge part of image/video manipulation involves taking an image at a very high resolution and converting it to something that has a lower resolution. If you've read any of the previous articles in this series, you'll notice that the first thing I did with my images was to pixelate them. Pixelation is one method of down-sizing the resolution.

The original intent of my Paint Map Studio program was to take an existing digital image (that presumably has millions of individual colors), and convert it into a "map" of colors that matches my inventory of paints. And I only have a few hundred distinct colors of paint. So matching a richly-detailed digital image to my inventory of (limited) paints requires, by definition, a way of down-sizing the resolution.

In the most extreme example, you may be taking a richly-detailed image and attempting to display it in pure black-and-white. That's the most drastic form of down-sizing an image's resolution. (If you're old enough to remember reading physical newspapers, the images in them were always dithered - because those images were being converted into the most limited color set of all: black and white.)

For example, let's imagine that you have a horizontal band that represents a perfect greyscale. On the left side of the band, you have pure white. On the right side, you have pure black. And along the x-axis, the band steadily darkens through the greys in between. That image would look like this:

[Image: a horizontal band fading smoothly from pure white on the left to pure black on the right]

Then let's assume that you want to map this image into a 1-bit color scale. (In other words, you want to map every pixel in the image to match either white or black.) The resulting image would look like this:

[Image: the same band mapped to 1-bit color - a solid white block on the left half, a solid black block on the right half]

This happens because every pixel on the left side of the image is more white than black. So when we match those pixels against our white-or-black palette, they get converted into... white. And every pixel on the right side is more black than white. So they get converted into... black.

But even though both images only contain two colors (white and black), the resulting image doesn't look much like the original image at all. The first image is a smooth transition from white-to-black. The second image is two starkly-different white-and-black blocks laid end-to-end.
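To make that concrete, here's a minimal sketch of the naive nearest-color mapping for a 1-bit palette (the toOneBit() helper and the 16-step ramp are illustrative assumptions, not code from Paint Map Studio):


const toOneBit = (grey) => (grey < 128 ? 0 : 255);

// A 16-step greyscale ramp from pure white (255) down to pure black (0)
const ramp = Array.from({ length: 16 }, (_, i) => 255 - i * 17);

console.log(ramp.map(toOneBit));
// -> [255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0]
// The smooth ramp collapses into two solid blocks - no greys survive.
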

Even when we're trying to match an image to a deeper palette of colors, this effect still happens. In the header image for this section, you can see a picture of a cat. On the left is the original image of the cat. In the middle is the cat image converted into the "closest" colors from a given palette. Notice the banding that occurs on the cat's neck, face, and ear. On the right is the same image of the cat, with dithering applied.

We saw this in the previous articles in this series. For example, here is the original pixelated image we were working from:

[Image: the original pixelated image]

And here was that same image, matched against my palette of paints, using a simple RGB calculation:

[Image: the pixelated image matched against my palette of paints, using a simple RGB calculation]

Notice that on her forehead, nose, cheeks, and hands, there are large blocks of pink / white / tan. That's because, in those entire sections, the "closest" colors that the algorithm could find were... all the same color. So the algorithm created large sections (or, bands) of similar colors.

There are multiple ways to address this problem. But one of the most common remedies is to apply dithering.



How does dithering work?

Whenever you are trying to match a color in the original image to a color in your palette, there will rarely be a perfect match. Usually, the "closest" match will be something that's at least slightly different from the original color. The difference between the original color and the closest match is called "noise".

Dithering works by distributing that noise to future pixels as you work through the image. If your target palette is very close to the colors used in the original image, there will be very little noise to distribute. If your target palette is vastly different from the colors in the original image, there will be a lot of noise to distribute.

Whenever you apply dithering, you're essentially applying a digital "smudging" to your processed image. So sometimes, the resulting image can actually appear "noisier" than if you'd applied no dithering at all. But in most scenarios, this "smudging" is a net-positive, because it removes the harsh banding that you see in images where no dithering has been used.



Floyd-Steinberg dithering

There are many different algorithms for applying dithering. When I went through my color-matching algorithms, I provided options in my Paint Map Studio to apply many different ones. But when it comes to dithering, I've hardcoded only one option into my tool: the Floyd-Steinberg algorithm.

This wasn't done out of laziness. Although there are many ways to apply dithering, by my estimation the Floyd-Steinberg algorithm is the clear "best" option. Whenever I've tried different dithering algorithms, the results have always seemed clearly inferior to Floyd-Steinberg.

The Floyd-Steinberg algorithm works like this:

When you're parsing through your original image, you're going down through the rows (the y-axis), and then across the columns (the x-axis).

[Image: the Floyd-Steinberg distribution diagram - the current block (X, Y) in red, with the fractions 7/16, 3/16, 5/16, and 1/16 (in green) on the surrounding blocks that receive the noise]

The image above shows what we do with the "noise" that results when we process each pixel (or... block) in an image. The center block, (X, Y) (in red), represents the current block we're evaluating. The surrounding blocks show the proportion of the noise (in green) that will be distributed to each of them.

How do we calculate the "noise"? Well, in an RGB model, we have three separate numerical values for red, green, and blue. So there are actually three noise values that get calculated: the difference between the red in the original block and the red in the closest color, the difference between the greens, and finally the difference between the blues.

For example, imagine that our original block is this color:

[Image: a swatch of the original color - red 86, green 97, blue 63]

That color has the following values: red 86, green 97, blue 63. Then imagine that our matching algorithm determines that Golden: Chromium Oxide Green is the closest color in our palette. Golden: Chromium Oxide Green looks like this:

[Image: a swatch of Golden: Chromium Oxide Green]

Golden: Chromium Oxide Green has the following values: red 64, green 91, and blue 64. This means that the "noise" generated from this block would be equal to:



const noise = {
  red: 86 - 64,
  green: 97 - 91,
  blue: 63 - 64,
}



Therefore, the total "noise" from this block would be:



const noise = {
  red: 22,
  green: 6,
  blue: -1,
}



When we distribute this noise to other blocks, the block directly to the right of the target block ((X+1, Y)) would become:



const color_XPlus1_Y = {
  red: originalRedValue + (22 * (7/16)),
  green: originalGreenValue + (6 * (7/16)),
  blue: originalBlueValue + (-1 * (7/16)),
}



The block to the lower-left of the target block ((X-1, Y+1)) would become:



const color_XMinus1_YPlus1 = {
  red: originalRedValue + (22 * (3/16)),
  green: originalGreenValue + (6 * (3/16)),
  blue: originalBlueValue + (-1 * (3/16)),
}



The block directly below the target block ((X, Y+1)) would become:



const color_X_YPlus1 = {
  red: originalRedValue + (22 * (5/16)),
  green: originalGreenValue + (6 * (5/16)),
  blue: originalBlueValue + (-1 * (5/16)),
}



And finally, the block to the lower-right of the target block ((X+1, Y+1)) would become:



const color_XPlus1_YPlus1 = {
  red: originalRedValue + (22 * (1/16)),
  green: originalGreenValue + (6 * (1/16)),
  blue: originalBlueValue + (-1 * (1/16)),
}



This is fairly easy to implement because you're never pushing the noise to blocks that have already been painted. The blocks above the target block, and the block directly to the left of the target block (which have already been calculated), remain unchanged. You're only ever pushing the noise to the blocks that haven't yet been calculated.
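If it helps to see those four updates in one place, here's a compact sketch of the distribution step (assuming a hypothetical pixels 2D array of { red, green, blue } objects - my actual implementation follows below):


// Floyd-Steinberg weights as (dx, dy, weight) offsets from the current
// block. Every target is either to the right on the current row or on
// the next row - never a block that has already been painted.
const FS_WEIGHTS = [
   { dx: 1, dy: 0, weight: 7 / 16 },
   { dx: -1, dy: 1, weight: 3 / 16 },
   { dx: 0, dy: 1, weight: 5 / 16 },
   { dx: 1, dy: 1, weight: 1 / 16 },
];

// Push one block's per-channel noise onto its forward neighbors
const distributeNoise = (pixels, x, y, noise) => {
   FS_WEIGHTS.forEach(({ dx, dy, weight }) => {
      const neighbor = pixels[y + dy]?.[x + dx];
      if (!neighbor) return; // off the edge of the image
      neighbor.red += noise.red * weight;
      neighbor.green += noise.green * weight;
      neighbor.blue += noise.blue * weight;
   });
};


This is essentially what the recordNoise() function below does, except that it accumulates the noise in a lookup object (keyed by block coordinates) instead of mutating the pixels directly.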



Rendering the dithered image

Now that we know how to calculate Floyd-Steinberg dithering, it's time to apply it to our color-matched image.

Here is our updated pixelate() function:



const pixelate = () => {
   const { height, width } = canvas.current;
   const stats = {
      colorCounts: {},
      colors: [],
      map: [],
   };
   const { blockSize, matchToPalette } = indexState;
   loadPalettes();
   let noise = {};
   for (let y = 0; y < height; y += blockSize()) {
      const row = [];
      for (let x = 0; x < width; x += blockSize()) {
         const remainingX = width - x;
         const remainingY = height - y;
         const blockX = remainingX > blockSize() ? blockSize() : remainingX;
         const blockY = remainingY > blockSize() ? blockSize() : remainingY;
         let averageColor = calculateAverageColor(context.current.getImageData(x, y, blockX, blockY));
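         // Nudge this block's average color by any noise that earlier blocks pushed onto it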
         averageColor = applyDithering(noise, averageColor, x, y);
         let referenceColor = {
            blue: averageColor.blue,
            green: averageColor.green,
            red: averageColor.red,
            name: '',
         };
         const closestColor = matchToPalette() ? getClosestColorInThePalette(referenceColor) : averageColor;
         row.push(closestColor);
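         // Tabulate this block's noise so it can be distributed to future blocks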
         noise = recordNoise(noise, averageColor, closestColor, x, y);
         context.current.fillStyle = `rgb(${closestColor.red}, ${closestColor.green}, ${closestColor.blue})`;
         context.current.fillRect(x, y, blockX, blockY);
      }
      stats.map.push(row);
   }
   return stats;
};



There are two differences between this function and the one I showed in previous articles:

  1. After we calculate the averageColor for the block, we're then passing it into an applyDithering() function.

  2. After we determine the closestColor, we're then tabulating the "noise" with a recordNoise() function.

The recordNoise() function looks like this:



const recordNoise = (noise = {}, color1 = {}, color2 = {}, x = -1, y = -1) => {
   const { blockSize, dither, matchToPalette } = indexState;
   const block = blockSize();
   if (!dither() || !matchToPalette())
      return noise;
   const redError = color1.red - color2.red;
   const greenError = color1.green - color2.green;
   const blueError = color1.blue - color2.blue;
   const noiseObject = {
      red: 0,
      green: 0,
      blue: 0,
   }
   if (!Object.hasOwn(noise, y))
      noise[y] = {};
   if (!Object.hasOwn(noise, y + block))
      noise[y + block] = {};
   if (!Object.hasOwn(noise[y], x + block))
      noise[y][x + block] = {...noiseObject};
   if (!Object.hasOwn(noise[y + block], x - block))
      noise[y + block][x - block] = {...noiseObject};
   if (!Object.hasOwn(noise[y + block], x))
      noise[y + block][x] = {...noiseObject};
   if (!Object.hasOwn(noise[y + block], x + block))
      noise[y + block][x + block] = {...noiseObject};
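   // Distribute the error using the Floyd-Steinberg weights:
   // 7/16 right, 3/16 lower-left, 5/16 below, 1/16 lower-right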
   noise[y][x + block].red += redError * 7 / 16;
   noise[y][x + block].green += greenError * 7 / 16;
   noise[y][x + block].blue += blueError * 7 / 16;
   noise[y + block][x - block].red += redError * 3 / 16;
   noise[y + block][x - block].green += greenError * 3 / 16;
   noise[y + block][x - block].blue += blueError * 3 / 16;
   noise[y + block][x].red += redError * 5 / 16;
   noise[y + block][x].green += greenError * 5 / 16;
   noise[y + block][x].blue += blueError * 5 / 16;
   noise[y + block][x + block].red += redError / 16;
   noise[y + block][x + block].green += greenError / 16;
   noise[y + block][x + block].blue += blueError / 16;
   return noise;
}



The whole point of this function is to keep a running total of the noise that should be applied to any future blocks.
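To make the bookkeeping concrete: if the very first block (at x = 0, y = 0, with a block size of 10) produced the red: 22, green: 6, blue: -1 noise from our earlier example, the returned accumulator would hold these running totals:


// Keys are y-coordinates, then x-coordinates, of *future* blocks
{
   0: {
      10: { red: 9.625, green: 2.625, blue: -0.4375 },    // 7/16 to the right
   },
   10: {
      '-10': { red: 4.125, green: 1.125, blue: -0.1875 }, // 3/16 to the lower-left
      0: { red: 6.875, green: 1.875, blue: -0.3125 },     // 5/16 directly below
      10: { red: 1.375, green: 0.375, blue: -0.0625 },    // 1/16 to the lower-right
   },
}


Entries that land outside the canvas (like the x: -10 key here) are harmless, because applyDithering() will simply never look them up.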

And this is the applyDithering() function:



const applyDithering = (noise = {}, color = {}, x = -1, y = -1) => {
   const { dither, matchToPalette } = indexState;
   if (!dither() || !matchToPalette())
      return color;
   const ditheredColor = {...color};
   if (Object.hasOwn(noise, y) && Object.hasOwn(noise[y], x)) {
      ditheredColor.red += noise[y][x].red;
      ditheredColor.green += noise[y][x].green;
      ditheredColor.blue += noise[y][x].blue;
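      // Clamp each channel back into the valid 0-255 range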
      if (ditheredColor.red > 255)
         ditheredColor.red = 255;
      if (ditheredColor.red < 0)
         ditheredColor.red = 0;
      if (ditheredColor.green > 255)
         ditheredColor.green = 255;
      if (ditheredColor.green < 0)
         ditheredColor.green = 0;
      if (ditheredColor.blue > 255)
         ditheredColor.blue = 255;
      if (ditheredColor.blue < 0)
         ditheredColor.blue = 0;
   }
   return ditheredColor;
}



This function takes the noise that's been recorded from previous blocks and applies it to the current block, clamping each channel to the valid 0-255 range.
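For example, here's a hypothetical call (with invented noise values, and assuming dithering and palette-matching are both switched on) that shows the clamping in action:


const noise = { 10: { 10: { red: 40, green: -12, blue: -60 } } };
const averageColor = { red: 230, green: 100, blue: 50, name: '' };

applyDithering(noise, averageColor, 10, 10);
// -> { red: 255, green: 88, blue: 0, name: '' }
// red: 230 + 40 = 270, clamped to 255
// green: 100 - 12 = 88
// blue: 50 - 60 = -10, clamped to 0


So let's see what kind of results this produces.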

This was our original image calculated with the RGB algorithm:

[Image: the image matched with the RGB algorithm, no dithering]

And this is the same algorithm, with dithering applied:

[Image: the RGB-matched image with dithering applied]

As you can see, there's a lot of "noise" in the image, but that noise is evenly distributed, which makes the overall image look "smoother". The bands of pink/tan in her face are now spread out in a way that feels more natural.

The dithering is particularly noticeable in the background. The original image has distinct bands of tan/grey. But the dithered image has a mix of interspersed colors.

Let's see how it performs with other algorithms as well. Here's the original CMYK image:

[Image: the image matched with the CMYK algorithm, no dithering]

And here's the same image with dithering applied:

[Image: the CMYK-matched image with dithering applied]

This one's pretty interesting to me because it highlights how dithering can be both a net-good and a net-bad, depending on your perspective. On one hand, the blotchiness of the original is gone. On the other hand, it's probably too noisy. For example, her hair now looks like it's strewn with confetti.

Here's the original HSL image:

[Image: the image matched with the HSL algorithm, no dithering]

And here's the same image with dithering applied:

[Image: the HSL-matched image with dithering applied]

Whoa... What? Honestly, I don't know why dithering makes the HSL conversion look so bad. I'll need to look at that again some night when I'm bored.

So let's just move on to the XYZ algorithm. This was our original:

[Image: the image matched with the XYZ algorithm, no dithering]

And here's the same image with dithering applied:

[Image: the XYZ-matched image with dithering applied]

This one is also interesting to me because you could honestly argue whether the original or the dithered image is "better". The original is fairly smooth with many tan tones on her cheeks and nose. The dithered image is also fairly smooth, but the tans have been largely replaced by pinks. Also, notice that the dithering on this one creates a green outline on her left side.

Finally, here's the original Delta-E 2000 image:

[Image: the image matched with the Delta-E 2000 algorithm, no dithering]

And here's the same image with dithering applied:

[Image: the Delta-E 2000-matched image with dithering applied]

This is a fairly clear upgrade over the original, but like the dithered CMYK image, it's still a bit noisy for my tastes.



In the next installment...

So we've learned that dithering can be a powerful tool for "smoothing out" images and removing color banding. But like many other factors in color/image manipulation, it can sometimes be debatable whether the resulting transformation is "better".

Also, this is hardly the end of what we can do to perform effective color matching. As I discussed a few articles ago, we're still somewhat handcuffed by our limited palette. Although 200+ colors feels like a large selection, it's still a pittance compared to the wealth of colors we see when looking at a digital image of a complex subject - like a human face. If we want to get a more accurate transformation, we'll need to look at manipulating the inventory of colors at our disposal.

In the next installment I'll show how to mix paints virtually so you can determine the best palette for your image.
