DEV Community


Determining the RGB "Distance" Between Two Colors

Adam Nathaniel Davis on February 24, 2023

[NOTE: The live web app that encompasses this functionality can be found here: https://www.paintmap.studio. All of the underlying code for that si...
cubiclesocial

First, when working with colors, you never really want to work in RGB. You'll wind up in the wrong place really quickly, as you alluded to at the end of the article. RGB is convenient for efficient use on active displays (CRTs, LCDs, OLEDs) but not much else. Instead, convert to a minimum of HSB/HSV (Hue, Saturation, Brightness/Value), figure out the distance between two HSB/HSV colors, and finally convert back to RGB for storage/display purposes. That will generally produce much more color-accurate results. HSB/HSV is one of the easiest color conversions to implement and also tends to be the least computationally expensive. As a side note, the Photoshop color picker dialog is my go-to for determining whether color conversions are "correct" to within a reasonable margin of error.
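
For anyone who wants to experiment with that, an RGB-to-HSV conversion can be sketched in a few lines of JS (a rough sketch only, not code from the article or from Paint Map Studio):

// Rough sketch: convert RGB (each channel 0-255) to HSV (h in degrees, s and v in 0-1).
const rgbToHsv = (red, green, blue) => {
   const r = red / 255;
   const g = green / 255;
   const b = blue / 255;
   const max = Math.max(r, g, b);
   const min = Math.min(r, g, b);
   const delta = max - min;
   let h = 0;
   if (delta !== 0) {
      if (max === r) h = 60 * (((g - b) / delta) % 6);
      else if (max === g) h = 60 * ((b - r) / delta + 2);
      else h = 60 * ((r - g) / delta + 4);
   }
   if (h < 0) h += 360;
   const s = max === 0 ? 0 : delta / max;
   return { h, s, v: max };
};

Distances would then be measured in HSV (remembering that hue wraps around at 360 degrees) before converting back to RGB for display.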

Second, fixing pixelated images for small palettes is a "mostly solved" problem. Dithering is generally the solution used when working with a limited color palette. Dithering basically takes a delta of an error value and then applies the error to subsequent pixels as each palette index is selected. Dithering produces a more technically accurate approximation of the original image at the cost of increased image size. Whether dithering actually "looks good" is much more subjective.

Color is an endless maze of discovery (and rediscovery) and the rabbit hole runs deep. You can spend years working on color and still never discover the end.

Adam Nathaniel Davis • Edited

One more note on this:

Been doing some more thinking on this over breakfast. And the more I think about it, the more I realize that dithering must be a part of my "final" solution. But there's some interesting information on Wikipedia.

They present this pseudocode as a way to implement dithering:

for each y from top to bottom do
    for each x from left to right do
        oldpixel := pixels[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixels[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixels[x + 1][y    ] := pixels[x + 1][y    ] + quant_error × 7 / 16
        pixels[x - 1][y + 1] := pixels[x - 1][y + 1] + quant_error × 3 / 16
        pixels[x    ][y + 1] := pixels[x    ][y + 1] + quant_error × 5 / 16
        pixels[x + 1][y + 1] := pixels[x + 1][y + 1] + quant_error × 1 / 16

But then, at the bottom of the article (en.wikipedia.org/wiki/Floyd%E2%80%...), they state:

The find_closest_palette_color is nontrivial for a palette that is not evenly distributed. In such a case, a nearest neighbor search in 3D is required.

So I guess that's what I was kinda getting at when I said that dithering is an important part of the equation - but it's also kinda answering a different question. Because even if you're applying dithering, the whole question of how you find the "closest" color in the palette is, as Wikipedia states, "nontrivial". You basically need to solve the problem of finding the "closest" color in the palette before you can start doing effective dithering.

Up to this point in the series, I'm basically chipping away at that problem: How to find the closest color in the palette. Once I've thoroughly illustrated that, then I can demonstrate how to introduce dithering.
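
A naive version of that find_closest_palette_color step, using simple per-channel RGB differences, might look something like this (just a sketch, assuming pixel objects shaped like { red, green, blue }, not the exact code from Paint Map Studio):

// Sketch: return the palette color with the smallest summed channel difference.
const findClosestPaletteColor = (pixel, palette) => {
   let closest = palette[0];
   let smallestDelta = Number.MAX_SAFE_INTEGER;
   palette.forEach(candidate => {
      const delta =
         Math.abs(pixel.red - candidate.red) +
         Math.abs(pixel.green - candidate.green) +
         Math.abs(pixel.blue - candidate.blue);
      if (delta < smallestDelta) {
         smallestDelta = delta;
         closest = candidate;
      }
   });
   return closest;
};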

In the examples they show in their article, they're basically showing how to dither a 1-bit image - meaning: simple black-or-white. But when you're not delineating between a simple 1 (black) or 0 (white), it gets a bit more involved...
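
Roughly, the same Floyd-Steinberg loop for RGB could be sketched like this, diffusing the error separately per channel (again just a sketch, assuming a pixels[x][y] grid of { red, green, blue } objects and the findClosestPaletteColor() helper sketched above):

// Sketch: Floyd-Steinberg dithering over an RGB pixel grid.
const clampChannel = value => Math.max(0, Math.min(255, value));

const diffuseError = (pixels, x, y, error, factor) => {
   const neighbor = pixels[x] && pixels[x][y];
   if (!neighbor) return; // ignore neighbors outside the image
   neighbor.red = clampChannel(neighbor.red + error.red * factor);
   neighbor.green = clampChannel(neighbor.green + error.green * factor);
   neighbor.blue = clampChannel(neighbor.blue + error.blue * factor);
};

const ditherImage = (pixels, palette) => {
   for (let y = 0; y < pixels[0].length; y++) {
      for (let x = 0; x < pixels.length; x++) {
         const oldPixel = { ...pixels[x][y] };
         const newPixel = findClosestPaletteColor(oldPixel, palette);
         pixels[x][y] = { ...newPixel };
         const error = {
            red: oldPixel.red - newPixel.red,
            green: oldPixel.green - newPixel.green,
            blue: oldPixel.blue - newPixel.blue,
         };
         diffuseError(pixels, x + 1, y, error, 7 / 16);
         diffuseError(pixels, x - 1, y + 1, error, 3 / 16);
         diffuseError(pixels, x, y + 1, error, 5 / 16);
         diffuseError(pixels, x + 1, y + 1, error, 1 / 16);
      }
   }
};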

Vesa Piittinen

You might find this link interesting.

bisqwit.iki.fi/story/howto/dither/jy/

Adam Nathaniel Davis • Edited

First I just wanna say that this is a great reply and much appreciated. I have some thoughts about some of the specific ideas you've brought up and I think I'll break those into multiple replies. But before I do that, I first wanna acknowledge this:

Color is an endless maze of discovery (and rediscovery) and the rabbit hole runs deep. You can spend years working on color and still never discover the end.

Yes! So much... yes! Honestly, that's basically the underlying theme of this whole series. Sure, I've built a little app that suits my painting needs. And along the way I've learned some really cool stuff about color. But when I started the whole "journey", I didn't think it would be a journey at all. I thought it would be some quick little throwaway app that I'd build in a few nights. But instead, it's been something that I've been chipping away at - over years! And I don't pretend for a moment that I'm any kinda expert on color, even now.

It probably doesn't come across properly in this series. But I didn't just start writing the series as a way to brag about some basic tool I built. I'm really writing it more as an overall general statement that: Color is surprisingly - even, shockingly - complex. Much more so than I ever imagined originally.

As a programmer - who's not a "designer" and has never been much of an "artist" - color felt to me like... just a set of numbers. And numbers feel... easy. But once you start to realize that two sets of numbers that are, mathematically speaking, very close to each other, may in fact look very different to the human eye - it starts to become that rabbit hole that you're talking about.

cubiclesocial

I've been down the color space rabbit hole several times. You might find some bits of this blog post I wrote back in 2019 useful in your journey:

cubicspot.blogspot.com/2019/05/des...

Working, functional source code is always a plus.

Adam Nathaniel Davis

convert to a minimum of HSB/HSV

In the next article (that I published this afternoon), I show how to do color matching based on HSL. In my mind, HSV and HSL really feel like almost the same thing? But if you can tell me that HSL is somehow inferior to HSV, I'm all ears...

Adam Nathaniel Davis

you never really want to work in RGB. You'll wind up in the wrong place really quickly as you alluded to at the end of the article. RGB is convenient for efficient use on active displays (CRTs, LCD, OLED) but not much else.

No doubt. But even though I've learned to start weaving other color spaces and "theories" into my approach, I still keep coming back to RGB as a... basis. Not because it's in any way "better". But only because, at the end of the day as a web-based programmer, I'm still probably gonna always be starting with RGB values. And no matter what I do to process those colors further, when I need to throw something back onscreen, I'm probably gonna be reverting back to RGB values.

It's like the imperial measurement system. We can all say it's stupid. And... it IS! But as long as I'm living in the US and having to deal with other US citizens on a daily basis, I'll never be able to fully escape the imperial system. I can wish all day that we'd suddenly drop all this imperial nonsense and just adopt the metric system. But let's face it. That's just not happening.

So just like I have to keep converting things from the imperial system, and back into the imperial system, I'll also need to keep thinking (on some level) in RGB values when it comes to color calculations.

Adam Nathaniel Davis • Edited

Second, fixing pixelated images for small palettes is a "mostly solved" problem. Dithering is generally the solution used when working with a limited color palette.

This is a valid point. I've messed around with multiple approaches to dithering, but I may be revisiting it soon.

That being said, and with all due respect, I believe that dithering is essentially answering a different question than the one I've been trying to solve in this series. (Although I'm open to learning if I'm misunderstanding things.)

My first question, the one that started me down this journey, was: Given a color X, and a set of reference colors [Y], which color in the [Y] reference set is truly "closest" to X?

Dithering (IMHO), answers a different question: Assuming that we already have a set (palette) of colors [Y], how do we ensure that the eye perceives smooth transitions when we're moving from one color to the next in the image?

To be absolutely clear, I do see that, even if I were to get all of the "best" color matches, the resulting image could benefit from applying dithering. So I'm not discounting at all the need for dithering. (In fact, I'm thinking now about adding user-controlled variables that will allow dithering to be introduced to the finished image.) But dithering is still limited if we can't figure out the best way to match the millions of colors in our source image to the limited set of colors in our palette.

ViperT

[image: 64-color result from pixa.pics]

This is a result from the pixa.pics app. We have a custom color quantization algorithm, and here we only have 64 colors. It uses Euclidean distance along with some parameters that make the color distances look natural...

Adam Nathaniel Davis

Yeah... it looks pretty similar to mine. FWIW, after writing this article, I also converted the RGB algorithm to use Euclidean distance. (This is explained in the next article - basically, I did it to be consistent with the other algorithms, although the visual differences are usually trivial.)

Also, I'm literally writing the article right now that talks about how to restrict the color depth, although you don't have to wait for that to be published. If you want to see it in action, just go to paintmap.studio, load this image, and then use the "Color Depth" and/or "Minimum Threshold" features to limit the color range to 65.

ViperT

Okay, great! The color quantization algorithm we use is open source: github.com/pixa-pics/pixa-pics.git... (if you want to take a look at it)

Adam Nathaniel Davis

Oh, cool! I definitely will. All my code is also public at github.com/bytebodger/color-map

Adam Nathaniel Davis

I just took a quick look at it. That is some... weird code.

ViperT

Yeah, I was impressed by SIMD, which lets similar instructions be processed up to 3x faster across multiple data elements because it makes use of the CPU's small cache for nearly every operation on small (<128-bit) Typed Array vectors... and so SIMD led us to create SIMDope, which I think is THE fastest non-WebAssembly, non-WebGL color blending library... (I think we can blend around 1M+ colors per second)

Also, regarding the color quantization class, we use clustering based on binary reduction: we encode colors into 1, 16, 256, or 4096 possibilities instead of 32 bits, which groups them into 1-4096 clusters... When we first tried to compare 20K colors against 20K colors it took a long time (you can imagine, that's a few minutes for 400 million comparisons), but when they're classed into 4096 clusters (which are very fast to compute, since we divide R, G, B, and A by bit-friendly numbers and then re-assemble them into a single unsigned integer) it goes from minutes down to a few ms (100-300ms), which is something like a thousand times faster than our original algorithm architecture.
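
The idea is roughly along these lines: keep only the top few bits of each channel and pack them into one integer, e.g. 3 bits per RGBA channel for 4096 buckets (a simplified sketch, not the actual pixa-pics code):

// Simplified sketch: keep the top 3 bits of each RGBA channel and pack them
// into a single 12-bit key, giving at most 4096 possible clusters.
const clusterKey = (r, g, b, a) =>
   ((r >> 5) << 9) | ((g >> 5) << 6) | ((b >> 5) << 3) | (a >> 5);

Once colors are grouped like that, each color only needs to be compared against the other colors in (or near) its own bucket instead of against the full 20K list.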

That said, I know it should be a bit prettier for human reading, but private object variables and methods make it very good in terms of memory efficiency. We'll nevertheless comment it and make it a bit more readable soon! Stay tuned - I was pleasantly surprised to find someone else digging into color computation in pure JS too!

dsmdavid

Thanks for the write-ups. Should the distance be squared?
[as in distance = sqrt((r1-r2)^2 + (g1-g2)^2 + (b1-b2)^2)]
Although the caveats you mentioned will remain 😅
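
In JS that formula would be something like the sketch below (assuming { r, g, b } objects):

// Euclidean distance between two RGB colors, per the formula above.
const rgbDistance = (a, b) =>
   Math.sqrt((a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2);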

Adam Nathaniel Davis • Edited

This is the image processed using simple RGB differences:

[image: processed with simple RGB differences]

And this is the image processed with the root-mean-square approach:

[image: processed with the root-mean-square approach]

The differences are extremely subtle.

Adam Nathaniel Davis

FWIW, I've converted the RGB calculation in my tool to use the root-mean-square calculation. Even though I've found the visual differences to be minimal, it will be more consistent with the rest of the articles in this series.

Thanks!

Adam Nathaniel Davis

Great question! I did indeed play around a lot with a root-mean-square approach. In fact, I tried about a dozen different ways of using the same RGB values to come up with a more accurate match. Unfortunately, I found that it led to no discernible improvement in the visual accuracy of the processed image.

Brian Kirkpatrick

This is a really cool problem! It reminds me of something I put together the other week to compare color interpolation for issue tracker labels:

github.com/Tythos/spectra/

JoelBonetR 🥇 • Edited

Disclaimer: Sorry if I'm off because I didn't read the previous posts, if so please tell me 😅.

You can use the canvas API to load an image and read all colours from the array of pixels of that image. I don't really want to create a post on that, so I searched the Internet and found some; here's the first result I got: levelup.gitconnected.com/how-to-ex...

Then what I'd do to pixelate it is to create a subset of the available colours. You can do that by stepping each value by 2:

we know each colour channel (R, G, B) has a value between 0 and 255, hence you can build a subset that's half of that (0, 2, 4, 6 ... 254) for each channel, or one that's a quarter (0, 4, 8 ... 252), or whatever scale you want to use.

Then you can match the nearest colour from the real image against the subset, or you can skip the subset and just fake it by doing +1/-1 or +2/-2 or... based on some rule set (if R > B then R = R - 1 & B = B + 1).

Edit: You can also take the average of 4 pixels ((R1+R2+R3+R4)/4, and so on for G and B) and blend them into a single colour, hence producing that pixelated, low-quality image.
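
A quick sketch of that channel-snapping idea (assuming { r, g, b } objects; each channel is rounded down to the nearest multiple of a chosen step such as 2 or 4):

// Sketch: snap each channel down to the nearest multiple of `step`,
// shrinking the set of possible colours (step 2 gives 0, 2, 4 ... 254).
const quantizeChannel = (value, step) => Math.floor(value / step) * step;

const quantizePixel = ({ r, g, b }, step) => ({
   r: quantizeChannel(r, step),
   g: quantizeChannel(g, step),
   b: quantizeChannel(b, step),
});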

Best regards

Adam Nathaniel Davis

This was covered in the previous two articles...

JoelBonetR 🥇

See, I should've read those before commenting 😂😂

Barricade