The background for this series comes from the fact that, in my spare time, painting is one of my hobbies. I never did any kind of painting for most of my life. But about eight years ago, I got a "bug" to start playing around with some paints. I thought that bug was gonna be nothing more than a quick little diversion. But the hobby stuck - and over the years, I've found that there are many quirky features of color/image management that most devs simply aren't exposed to on a regular basis.
Before I get into the meat of the article, I'll share a few links with you:
Those are in-progress pics of my past-and-current paintings. You'll notice that my style is quite different from most painters. I use heavy body acrylics. And I squeeze all of the paint out of bottles onto the canvas/panel. It provides an extreme impasto effect.
This is the brand new app/site - Paint Map Studio - that I just launched. I've been wanting to write these articles for quite a while. But I always held off because the software I'd written only existed on my local machine. Now it's on a publicly-available website for anyone to play with.
That site allows you to load an image file and then map the colors that exist in that file to a given palette of paint colors. It also gives you a lot of levers to adjust to determine exactly how the colors in the image will be mapped to the paints.
OK, on to the subject of this article...
When I first got the idea to start painting, I had this vision of creating pixel art - on a canvas. So the first thing I did was to go out and buy a bunch of different paints. I mean... a lot of paints.
I bought all those different paints because I don't know squat about color-mixing and I had no confidence that I could look at an image on my screen and then figure out exactly how to blend a small set of "base" colors to match what I was seeing. So instead, I figured that I'd just buy ALL THE COLORS!!! And then... it would be easy. Right???
Also, because I'm a certified nerd, I figured that it'd be easy to write a program that would match the colors in the target image to the closest-possible colors amongst all the paints that I'd just purchased. In hindsight, this was far more difficult than I'd imagined.
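That "closest-possible color" idea boils down to a nearest-neighbor search in RGB space. Here's a minimal sketch of the approach - the palette entries and their RGB values below are made up for illustration, not measurements of real paints:

```javascript
// Find the palette paint whose RGB value is closest to a target color,
// using squared Euclidean distance in RGB space.
function closestPaint(target, palette) {
  let best = null;
  let bestDist = Infinity;
  for (const paint of palette) {
    const dr = target.r - paint.r;
    const dg = target.g - paint.g;
    const db = target.b - paint.b;
    const dist = dr * dr + dg * dg + db * db; // squared distance; no sqrt needed for ranking
    if (dist < bestDist) {
      bestDist = dist;
      best = paint;
    }
  }
  return best;
}

// Hypothetical palette entries - not real paint measurements.
const palette = [
  { name: "Cadmium Red", r: 227, g: 0, b: 34 },
  { name: "Dioxazine Purple", r: 83, g: 24, b: 120 },
  { name: "Titanium White", r: 243, g: 244, b: 247 },
];

console.log(closestPaint({ r: 199, g: 22, b: 12 }, palette).name); // "Cadmium Red"
```

Plain Euclidean distance in RGB space is a crude metric - it doesn't match human perception especially well - but it's the natural first pass before reaching for fancier perceptual formulas.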
(BTW - the image above is an actual picture of a bunch of my paints piled up on the floor. It's not even all of my paints. It's just a representative sampling. There are many more tubes/jars of paint sitting on my shelves.)
Before I could set about writing my "easy" new program, I first needed some way to compare the digital colors that existed in my subject image to the in-real-life colors that I now had in all those little tubes of paint.
This felt like a trivial task to me. Because there are loads of ways to capture images of items in meatspace and then translate them to my web-based environment. However, I eventually learned that capturing the "true" color of items in everyday life can be a bit more challenging than it first seems.
When painters want to see the color of a paint swatch, it's common to do something like the image above. They take a dab of the paint, smear it across some (presumably white) surface, and wait for it to dry. And that supposedly shows you the "color" of that paint.
But take a close look at that image. Is there really one specific shade of purple in that image? Of course not. In fact, once you take a digital scan of your little paint smear, you end up with potentially hundreds of slightly different, yet distinct colors. Some parts of the smear are a fairly rich hue of purple. Other parts are far lighter. So which exact pixel should you grab out of that image to determine the "real" color of purple that comes in that particular tube of paint??
Of course, you can mitigate these effects by making your smear far thicker. The idea is that you don't spread it so thin. And once the whole paint smear is of a uniform thickness, it should be easier to grab the "real" color of the paint. Right???
(It should also be noted that the many different hues that can come from any single color of paint are really not a "bug". They're a feature. Skilled artists know how to use this feature to get the exact colors they desire in their finished pieces. Unfortunately, I'm not one of those skilled artists.)
To fix the problem with thin paint smears, I made a bunch of test blobs for every different color of paint that I owned. I tried to make them nice and thick, so I wasn't seeing the pastel effect of thinly-scraped paint on a white surface. The image above shows one of these test sheets.
This approach did help me to mitigate the problem. But it was far from perfect. If you look closely at each one of those little blobs, you'll still notice a variety of hues on each one. Much of this comes from the fact that the paint is shiny. So when you try to capture them digitally, you get much lighter colors on the shiny portions, and much darker colors on the not-so-shiny portions.
Still, these blobs did allow me to get a baseline color measurement for every different paint in my inventory. Originally I scanned these sheets and then used a color picker to find the color in the darkest, least-shiny bits of each blob.
This approach... worked. Kinda. Sorta. It definitely allowed me to dive into the other aspects of my color-matching program. But after I'd done some painting, I found that my real-life colors were often a bit "off" from what I had scanned.
These inaccuracies were a real problem. If you're trying to write a color-matching program, you need to know that your reference colors are accurate. Otherwise, the best color-matching program in the world will still deliver some occasionally-wonky results.
So if I was creating nice thick blobs, and then grabbing the digital colors directly from the richest, darkest bits of the scanned image, why was I still finding inaccuracies in my program? Part of the problem lies in the way that we perceive color - and the way those perceptions change due to all the environmental factors of ambient lighting.
As a programmer, you may be under the impression that color is a "static" thing. Specifically, you may believe that objects have inherent colors. And that those objects are always the same color - no matter what environment they may be in.
This is a natural conclusion for techies. Because, in a digital world, colors are static. For example, consider the following color:
Unless you're color blind, or unless your monitor is seriously jacked up, I'm assuming that you'll look at that square and immediately identify it as "red". And it is indeed red.
Furthermore, it will look red - and it will look to be the exact same shade of red - no matter where you view it. If you move your phone/laptop outside and look at it again in broad daylight, it will look exactly the same. If you move your phone/laptop into an incredibly dark room and then look at it again, it will still be red. In fact, it will still be the exact same shade of red. Anywhere that you view it.
But this feature of "color permanence" arises because, on our electronic devices, the red box you see above is a SOURCE of light. No matter what environment you're in, the color of that box above looks the same because the box isn't reflecting red wavelengths. It's emitting red wavelengths. So the box always looks exactly the same, no matter where you view it.
But in the real world, most of the objects that we view are not emitting wavelengths of light. They're reflecting them.
To put it another way, colors of light work in an additive fashion. But colors of real-world objects work in a subtractive fashion. We'll hit this topic again in future articles in this series. But for now, the easiest way to think about it is that, in an additive model, if you add all the colors together, you get white. But in a subtractive model, if you add all the colors together, you get the exact opposite. You get black.
You can demonstrate this in a digital realm by looking at RGB values. White is rendered by mixing all of the red, with all of the green, with all of the blue. For example, in CSS white is represented as `rgb(255, 255, 255)`, where `255` is the maximum amount of each color.
But if you're working with real-world colors (e.g., paints), then mixing all of the red, with all of the green, with all of the blue will yield... black. So clearly, colors that emanate from light sources operate in a way that's quite different from colors that are reflected off of physical objects.
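You can sketch the difference between the two models in a few lines of code. This is only a toy illustration - real pigment mixing is far messier than multiplying channel values - but it shows why "add everything together" gives white in one model and black in the other:

```javascript
// Additive mixing (light): combining channels moves toward white.
function addLight(a, b) {
  return {
    r: Math.min(255, a.r + b.r),
    g: Math.min(255, a.g + b.g),
    b: Math.min(255, a.b + b.b),
  };
}

// A crude stand-in for subtractive mixing (pigment): each pigment only
// reflects part of the light, so multiplying reflectances moves toward black.
function mixPigment(a, b) {
  return {
    r: Math.round((a.r * b.r) / 255),
    g: Math.round((a.g * b.g) / 255),
    b: Math.round((a.b * b.b) / 255),
  };
}

const red = { r: 255, g: 0, b: 0 };
const green = { r: 0, g: 255, b: 0 };
const blue = { r: 0, g: 0, b: 255 };

// Adding all three lights yields white...
console.log(addLight(addLight(red, green), blue)); // { r: 255, g: 255, b: 255 }

// ...while "mixing" all three pigments yields black.
console.log(mixPigment(mixPigment(red, green), blue)); // { r: 0, g: 0, b: 0 }
```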
Most people (at least, in the US) will immediately conjure up an image - and a color - when you say "Solo cup". The iconic party cups are known for being red. And if you've been to a kegger, or played a game of beer pong, you probably know all about the exact color of a Solo cup. So I took several different pics of a Solo cup for this article.
The image above was taken under a bright light. Specifically, it was a bright white light. Now of course, the image is not a single featureless blob of red. There are features on the cup that reflect many different shades of red. But for the most part, we could put our color picker right in the meaty, non-ridged portion of that cup and get an RGB value that reflects some variation of red.
The image above is the exact same cup. Unmoved. The only difference is that I photographed it under a bright yellow light. (Or to put it another way, I photographed it under much "warmer" light.) Notice how the same portion of the cup now feels a bit more orange.
And finally, the image above was taken under much dimmer light.
Now, colloquially speaking, we could easily look at all three images and simply say, "The cup is red. In all three images. So what's the big deal?" And they're all probably "close enough for government work" - no one would care. But they're definitely not identical.
To make this clearer, I used my color picker to grab the exact shade of red from the same spot on the cup in all three pics. These are the values that I got:
- Red 199, Green 22, Blue 12
- Red 254, Green 44, Blue 19
- Red 208, Green 2, Blue 15
Sure, they're all fairly close. But they're definitely not identical. To make it even more obvious, the following image shows those three shades of red - shades that were all captured from the exact same item - side-by-side:
Such distinctions may sometimes be trivial. But if you're trying to capture a color from, say, a piece of marketing swag, there's a good chance that the exact RGB value you settle upon will not perfectly match the "official" RGB values from the company's marketing team. And we all know how picky brand managers can be about the EXACT shade of any given color when it comes to corporate media.
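To put rough numbers on "close, but not identical", we can compute the straight-line (Euclidean) distance between those three captured reds in RGB space. Perceptual color-difference formulas like CIEDE2000 do this job better, but simple distance is enough to show the spread:

```javascript
// The three reds captured from the same spot on the same cup,
// under three different lighting conditions.
const brightWhite = { r: 199, g: 22, b: 12 };
const warmYellow = { r: 254, g: 44, b: 19 };
const dimLight = { r: 208, g: 2, b: 15 };

// Straight-line (Euclidean) distance in RGB space - a rough gauge of
// how different two colors are.
function rgbDistance(a, b) {
  return Math.sqrt((a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2);
}

console.log(rgbDistance(brightWhite, warmYellow).toFixed(1)); // "59.6"
console.log(rgbDistance(brightWhite, dimLight).toFixed(1));   // "22.1"
console.log(rgbDistance(warmYellow, dimLight).toFixed(1));    // "62.4"
```

For scale: the maximum possible distance in this space (pure black to pure white) is about 441, so these captures of the "same" red are meaningfully far apart.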
We already know that grabbing a single color from a pic of a real-world object can be problematic because it can be challenging to decide on the exact pixel in the image to use. But as I demonstrated in the previous section, the amount and the type of light can also wreak havoc on color captures.
Extremely low light doesn't just make something darker. It literally changes the colors that we perceive on the object as the environment grows dimmer. Reds will sink into purples. Yellows will sink into browns. Eventually, if it gets dark enough, the "color" that we perceive from any object will be... black.
You may think the answer is simply to capture images under extremely bright light. But this can also throw your color captures off. As the light source grows more intense, objects take on hues that are more pastel in nature.
Remember how I said that the color captures from my paint blobs were still a bit off? The reason was that I was taking the sheets of paper that held the blobs - and scanning them. I thought this was a sound idea because the scanner would eliminate many of the issues with ambient lighting. But the scanner created its own problem.
Scanners/printers/copiers work by shining an intense light on the target sheet of paper. It's so intense, in fact, that you can see the light shining out even after you've closed the lid. While that intense light is ideal for highlighting areas of high contrast (e.g., when you have a bunch of black text on a white sheet of paper), it can cause headaches when you're trying to get an accurate measure of the exact colors on the page.
The type of light also affects the colors we can capture from an image. Incandescent light is incredibly yellow - a feature that can make for great photographs, but is problematic when trying to find "true" colors. LEDs - even "white" LEDs - often have blue or violet undertones. Daylight is tough to manage both because it varies from hour to hour and minute to minute, and also because it carries its own yellow undertones.
After all the "gotchas" discussed above, you may be hoping that I can wrap this article up with a perfect solution. Something that allows us to extract the "real" colors of objects in physical space. Alas, there is no perfect solution.
As I've already shown you, it can be a bit of a fool's errand to break your neck trying to extract the One True Color from something represented in a digital image. Because that One True Color varies constantly based upon the amount of lighting, the type of lighting, and where on the image you decide to capture the color from.
There are, however, a few basic tips you can follow:
Strive for white lighting. The closer your light source is to true white, the more likely that you're grabbing an RGB value from the image that is perceived to be the "true" color of the object.
Try to grab color captures from the most homogeneous part of the subject. Ridges, folds, bumps, etc., will all throw off your attempt to grab the object's color.
Avoid lighting extremes. You don't want to grab colors from images taken in dim lighting. But you also don't want to grab colors from images where the subject is bathed in intense light.
Obviously, you always want to get official RGB values for your colors whenever possible. Most of the time, you can expect to receive these from the marketing/branding folks - or even from a print shop.
All that being said, there may still be times when you need to grab a color from an image of a real-world object. (Like... when you're trying to match colors in a digital image to those found in real-world paints.) When this happens, understand that there is no perfect way to grab the object's "true" color. All you can do is mitigate the factors that will throw off the perceived color of the object you're trying to capture.
I'll be getting a lot more code-centric in the next installment. I'll show you how to load a digital image so we can begin to manipulate it programmatically. After that, I'll show how to analyze its existing colors, and then average them to create pixelization. Eventually, I'll be showing most of the nitty-gritty logic that went into my Paint Map Studio color-matching tool.