[edit] Please see the comments at the bottom for how to improve the accuracy of the filter values.

I was working on a project to produce a set of SVG filters that could reproduce color blindness (more formally: Color Vision Deficiency, aka CVD). I got stuck on some color-mixing math quirks I didn't understand and, as so often happens, lost interest and set it aside. As it turns out, the Chrome Devtools team was working on something very similar. With some new inspiration I picked it up again and got dragged down into the murky world of how exactly the browser does color transforms.

# The goal

The goal is simple: generate some SVG filters for the common types of color blindness.

# Color blindness

The human eye's light receptors come in two kinds: rods and cones. Rods sense light/dark. There are 3 types of cones that sense color for long, medium and short wavelengths of light, corresponding roughly to red, green and blue.

Types of color blindness correspond to which receptor is having issues:

- Protanopia: Long wave (red) cones don't work or are missing.
- Protanomaly: Long wave cones partially work.
- Deuteranopia: Medium wave (green) cones don't work or are missing.
- Deuteranomaly: Medium wave cones partially work.
- Tritanopia: Short wave (blue) cones don't work or are missing.
- Tritanomaly: Short wave cones partially work.
- Achromatopsia: None or only 1 type of cone works.

Technically it's also possible for a human to have more than 3 types of cones so we're all color-blind in some sense.

# SVG filters

SVG filters are fairly straightforward. There are many operations to pick from, but the one we are interested in is `feColorMatrix`. This allows us to use a 4x5 matrix (20 values) to transform colors.
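Per the spec, each output channel is a weighted sum of the input channels plus a constant (the 5th column). Here's a sketch of what `feColorMatrix` computes for a single pixel — the helper name is my own, purely for illustration:

```javascript
// Apply a 4x5 feColorMatrix to one RGBA pixel (components 0.0-1.0).
// Each row: [r-weight, g-weight, b-weight, a-weight, constant]
function applyColorMatrix(matrix, [r, g, b, a]) {
  return matrix.map(([mr, mg, mb, ma, mc]) =>
    mr * r + mg * g + mb * b + ma * a + mc);
}

const identity = [
  [1, 0, 0, 0, 0],
  [0, 1, 0, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 0, 1, 0]
];
applyColorMatrix(identity, [1, 0, 0, 1]); // [1, 0, 0, 1] -- unchanged
```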

# Where do I get this data?

The Chrome Devtools team's post addresses this fairly succinctly: they found a source that already has ready-made matrices (I didn't find this on my attempt): https://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html

I found other data though: https://arxiv.org/pdf/1711.10662.pdf. This gives the matrices for protanopia and deuteranopia in LMS space. I found a tritanopia transform here: https://online-journals.org/index.php/i-jim/article/download/8160/5068. These are based on a technique known as "Daltonization", a rather interesting process by which color vision deficiency is simulated, an error from the original is calculated, and the colors are re-adjusted to improve contrast for colorblind users. There are a couple other ways to go about it:

- This one goes through XYZ space: http://mkweb.bcgsc.ca/colorblind/math.mhtml
- This has some alternate means to calculate CVD: https://ixora.io/projects/colorblindness/color-blindness-simulation-research/

One thing that's similar in all of them is the use of the LMS color space. LMS stands for "long", "medium", "short", which corresponds to the 3 types of cones in the eye. So to get our new colors we first convert the original color into LMS color space. Then we apply the matrix for the type of color blindness. Once we have the perceived values we convert them back to RGB for display.

## RGB to LMS conversion

```
const rgbToLms = [
[17.8824, 43.5161, 4.1193, 0],
[3.4557, 27.1554, 3.8671, 0],
[0.02996, 0.18431, 1.4700, 0],
[0, 0, 0, 1]
];
```

If you are already familiar with the matrix math you might be able to tell intuitively from this that humans see much more on the green/red side; both L and M receptors pick up a lot of green light.

To convert back we use the inverse of the matrix above:

```
const lmsToRgb = [
[0.0809, -0.1305, 0.1167, 0],
[-0.0102, 0.0540, -0.1136, 0],
[-0.0003, -0.0041, 0.6932, 0],
[0, 0, 0, 1]
];
```

Matrix inverses are a bit complicated; luckily there are tools to do this.
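If you'd rather compute the inverse in code than use an online tool, a small Gauss-Jordan elimination sketch works fine for well-conditioned 4x4 matrices like ours (this helper is my own illustration, not part of any library):

```javascript
// Invert a square matrix with Gauss-Jordan elimination (partial pivoting).
// Returns a new matrix; throws if the matrix is singular.
function invertMatrix(matrix) {
  const n = matrix.length;
  // Augment [M | I]
  const aug = matrix.map((row, i) =>
    [...row, ...Array.from({ length: n }, (_, j) => (i === j ? 1 : 0))]);
  for (let col = 0; col < n; col++) {
    // Pick the row with the largest pivot to reduce rounding error
    let pivot = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(aug[r][col]) > Math.abs(aug[pivot][col])) pivot = r;
    }
    if (Math.abs(aug[pivot][col]) < 1e-12) throw new Error("Matrix is singular");
    [aug[col], aug[pivot]] = [aug[pivot], aug[col]];
    // Normalize the pivot row
    const div = aug[col][col];
    for (let j = 0; j < 2 * n; j++) aug[col][j] /= div;
    // Eliminate the column from every other row
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const factor = aug[r][col];
      for (let j = 0; j < 2 * n; j++) aug[r][j] -= factor * aug[col][j];
    }
  }
  // The right half of the augmented matrix is now the inverse
  return aug.map(row => row.slice(n));
}
```

Running this on `rgbToLms` reproduces `lmsToRgb` to within rounding (the values above are truncated to 4 decimal places).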

## Protanopia

```
//In LMS space
const protanopia = [
[0, 2.02344, -2.52581, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
];
```

The long (red) component is 0 (position `[0,0]` = `0`), so we don't see it, but we do get some shifting from the other wavelengths.

## Deuteranopia

```
//In LMS space
const deuteranopia = [
[1, 0, 0, 0],
[0.4942, 0, 1.2483, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
];
```

Like the above, the medium (green) component is turned off, as noted by the `0` at `[1,1]`.

## Tritanopia

```
//In LMS space
const tritanopia = [
[1, 0, 0, 0],
[0, 1, 0, 0],
[-0.3959, 0.8011, 0, 0],
[0, 0, 0, 1]
];
```

Again, only the short (blue) component changes. We'll look into this later, but the blue channel gets a strong contribution from the green channel.

# Implementation

## GLSL

In my first attempt I tried it as a GLSL shader, since shaders are a very powerful, hardware-accelerated way to do pixel operations. I'm not going to cover the WebGL boilerplate in this post as it's a whole topic unto itself, but the shader code should be simple enough to intuit even if you've never used it:

```
precision highp float;
mat4 rgbToLms = mat4(
17.8824, 43.5161, 4.1193, 0,
3.4557 , 27.1554, 3.8671, 0,
0.02996, 0.18431, 1.4700, 0,
0 , 0 , 0 , 1);
mat4 protanopia = mat4(
0 , 2.02344, -2.52581, 0,
0 , 1 , 0 , 0,
0 , 0 , 1 , 0,
0 , 0 , 0 , 1);
mat4 lmsToRgb = mat4(
0.0809 , -0.1305, 0.1167 , 0,
-0.0102, 0.0540 , -0.1136, 0,
-0.0003, -0.0041, 0.6932 , 0,
0 , 0 , 0 , 1);
void main() {
vec4 source = vec4(1.0, 0.0, 0.0, 1.0);
vec4 lms = source * rgbToLms;
vec4 lmsTarget = lms * protanopia;
vec4 target = lmsTarget * lmsToRgb;
gl_FragColor = target;
}
```

All you need to know is that I'm making a few 4x4 matrices based off the steps above and then multiplying them with my color vector `source`. The `source` vector `vec4(1.0, 0.0, 0.0, 1.0)` is pure red. You can think of `gl_FragColor =` as sort of a return statement, as that's the color that will ultimately be rendered to the screen.

The Test:

## Color formats

There are many different color formats. Aside from the LMS mentioned above, we care about RGB as that's typically how computers handle color. However, even then there's more than one way to represent it. You might be most familiar with the byte representation where each component is in the range `0-255`; this is what hex codes use (eg `#FF0000` is 255, 0, 0 in a 24-bit format). For calculation it's often more useful to have it as a floating point value where each component varies from `0.0-1.0`. So for the vec4 `source` above we represent red as `1.0, 0.0, 0.0, 1.0` in RGBA order. There's also that extra 4th component, which is alpha. Alpha is a special value, most often used for transparency, though its meaning can vary based on the blending mode. We don't worry about it here, so it's always set to the default `1.0`.
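As a quick sketch of the byte-to-float relationship (the helper name here is my own, purely for illustration):

```javascript
// Parse a #RRGGBB hex code into float RGBA components (0.0-1.0).
// Alpha defaults to 1.0 since 24-bit hex codes don't carry one.
function hexToFloatRgba(hex) {
  const n = parseInt(hex.slice(1), 16);
  return [
    ((n >> 16) & 0xff) / 255, // red byte
    ((n >> 8) & 0xff) / 255,  // green byte
    (n & 0xff) / 255,         // blue byte
    1.0
  ];
}

hexToFloatRgba("#FF0000"); // pure red: [1, 0, 0, 1]
```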

## SVG implementation?

```
<filter id="protanopia-bad">
<feColorMatrix values="
17.8824, 43.5161, 4.1193, 0, 0,
3.4557, 27.1554, 3.8671, 0, 0,
0.02996, 0.18431, 1.4700, 0, 0,
0, 0, 0, 1, 0" />
<feColorMatrix values="
0, 2.02344, -2.52581, 0, 0,
0, 1, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, 1, 0" />
<feColorMatrix values="
0.0809, -0.1305, 0.1167, 0, 0,
-0.0102, 0.0540, -0.1136, 0, 0,
-0.0003, -0.0041, 0.6932, 0, 0,
0, 0, 0, 1, 0" />
</filter>
```

So where I originally got stuck is that I naively thought I could do the same by chaining SVG `feColorMatrix` filters together. Turns out you get a wildly different result.

I think (but couldn't prove) this is because color values are clamped between filter applications. Sometimes when we do these matrix multiplications we get values that are out of range. This is easy to see as the RGB to LMS matrix has values greater than 1 so they can easily go out of range. Similarly, though not in this case, values can be less than 0. These values are weird because they mean we have a color that cannot be represented in our colorspace. Typically when a graphics API encounters these sorts of values they are clamped, meaning they take the maximum or minimum allowed values.

My speculation is that if colors are clamped in between steps, then we're losing a lot of information, and that's why the result is mostly blues, as they have much smaller coefficients.
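To make the clamping concrete, here's a hypothetical sketch of what the browser might be doing between filter steps:

```javascript
// Clamp each component into the displayable 0.0-1.0 range
const clampColor = color => color.map(x => Math.min(1, Math.max(0, x)));

// Pure red in LMS space is roughly [17.88, 3.46, 0.03, 1] -- way out of range.
// Clamping destroys nearly all of the L and M information:
clampColor([17.8824, 3.4557, 0.02996, 1]); // [1, 1, 0.02996, 1]
```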

## JS implementation

I couldn't figure out what was going on because both SVG filters and GLSL operations are opaque; there's no way to `console.log` the intermediate steps. This also matters because vector-matrix multiplication isn't a single, universally agreed-upon operation. When you multiply you can pair the vector's components with the matrix's rows or with its columns, depending on convention. This means for a matrix and vector:

```
const color = [1, 0, 0, 1]; //red
const rgbToLms = [
[17.8824, 43.5161, 4.1193, 0],
[3.4557, 27.1554, 3.8671, 0],
[0.02996, 0.18431, 1.4700, 0],
[0, 0, 0, 1]
];
const lmsToRgb = [
[0.0809, -0.1305, 0.1167, 0],
[-0.0102, 0.0540, -0.1136, 0],
[-0.0003, -0.0041, 0.6932, 0],
[0, 0, 0, 1]
];
// `protanopia` is the LMS-space matrix from earlier; `multiply` is defined below
const result = multiply(multiply(multiply(color, rgbToLms), protanopia), lmsToRgb);
```

The operation `multiply` could look like this:

```
function multiplyByRows(vector, matrix) {
return [
vector[0] * matrix[0][0] + vector[1] * matrix[1][0] + vector[2] * matrix[2][0] + vector[3] * matrix[3][0],
vector[0] * matrix[0][1] + vector[1] * matrix[1][1] + vector[2] * matrix[2][1] + vector[3] * matrix[3][1],
vector[0] * matrix[0][2] + vector[1] * matrix[1][2] + vector[2] * matrix[2][2] + vector[3] * matrix[3][2],
vector[0] * matrix[0][3] + vector[1] * matrix[1][3] + vector[2] * matrix[2][3] + vector[3] * matrix[3][3]
];
}
```

or this:

```
function multiplyByCols(vector, matrix){
return [
vector[0] * matrix[0][0] + vector[1] * matrix[0][1] + vector[2] * matrix[0][2] + vector[3] * matrix[0][3],
vector[0] * matrix[1][0] + vector[1] * matrix[1][1] + vector[2] * matrix[1][2] + vector[3] * matrix[1][3],
vector[0] * matrix[2][0] + vector[1] * matrix[2][1] + vector[2] * matrix[2][2] + vector[3] * matrix[2][3],
vector[0] * matrix[3][0] + vector[1] * matrix[3][1] + vector[2] * matrix[3][2] + vector[3] * matrix[3][3],
];
}
```

That's probably a tad hard to read, but the gist is that you can pair up each vector component to a corresponding component in each *row* or *column* and then add the results to get the new value of each component.

Which to choose? It's implementation-dependent and based on how we set up the matrix, so to be consistent I used the definition for GLSL. There it varies depending on whether the vector is multiplied from the right or the left. The latter (`multiplyByCols`) is correct here, as we are multiplying it from the left in the shader code (`source * rgbToLms`). As the documentation notes, we could also transpose the matrix (flip it across its diagonal) and then do the multiplication with `multiplyByRows` if we wanted.

It might help to have a little more intuition for why this is the case. To keep the color the same we want an "identity" matrix, a matrix with 1s down the diagonal:

```
const identity = [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1];
```

This will give us back our original color. However, as we start mixing, each column represents the amount of each input component we want to mix in (col 0,1,2,3 => R,G,B,A) and each row, when summed, is the value of the new component. So for instance:

```
const example = [
1, 0, 1, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
];
```

The new "red" (or "L" if we're in LMS space) will be a mixture of the R and B channels from the input (the 1s in row 0 sit in the R and B columns); the rest will remain the same. The numbers 0.0-1.0 represent how much gets mixed in, 0% up to 100% (>1.0 and <0.0 are also possible, as we've seen in the LMS conversion matrix).

For the current example of simulating pure red in protanopia we get the result `[0.112, 0.1126, 0.0045, 1]`. But was that correct?
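Putting the whole JS pipeline together (re-declaring `multiplyByCols` and the matrices compactly so this runs on its own) reproduces that number:

```javascript
// Pair each vector component with a matrix column and sum
function multiplyByCols(vector, matrix) {
  return matrix.map(row => row.reduce((sum, m, j) => sum + m * vector[j], 0));
}

const rgbToLms = [
  [17.8824, 43.5161, 4.1193, 0],
  [3.4557, 27.1554, 3.8671, 0],
  [0.02996, 0.18431, 1.4700, 0],
  [0, 0, 0, 1]
];
const protanopia = [
  [0, 2.02344, -2.52581, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];
const lmsToRgb = [
  [0.0809, -0.1305, 0.1167, 0],
  [-0.0102, 0.0540, -0.1136, 0],
  [-0.0003, -0.0041, 0.6932, 0],
  [0, 0, 0, 1]
];

const red = [1, 0, 0, 1];
const lms = multiplyByCols(red, rgbToLms);         // into LMS space
const simulated = multiplyByCols(lms, protanopia); // simulate missing L cones
const rgb = multiplyByCols(simulated, lmsToRgb);   // back to RGB
// rgb is approximately [0.112, 0.1126, 0.0045, 1]
```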

## Back to GLSL

I know the GLSL implementation is correct because I compared its output to some other implementations, but colors aren't really possible to identify precisely by eye like that. What I had to do was build a GLSL calculator. Basically, it constructs a minimal WebGL program to run a fragment shader and then reads out the pixel from the canvas. I'll spare most of the details, but I did run into an issue that stumped me for a long time: in order to read pixels you use the method `readPixels` on the WebGL instance:

```
const array = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, array);
```

It's an ugly API because you have to pass in the output array, but what's worse is that you have a choice of formats (5th parameter) and data types (6th parameter). What I wanted was `gl.FLOAT` for the type, and the documentation makes it seem like this is possible, but you'll get a warning (not even an error) if you try. It turns out that RGBA + UNSIGNED_BYTE is the only combination that's guaranteed to be supported; browsers can choose to support the others and it's actually even context-dependent. Gross. Takeaway: don't ever try to use anything else.

Luckily, it's easy to convert. Just divide each element by 255:

```
const floatArray = [...array].map(x => x / 255);
```

Here's the finished call:

```
const calc = new GlslCalc();
const redProtanoptaGl = calc.runFragmentShader(`
precision highp float;
mat4 rgbToLms = mat4(
17.8824, 43.5161, 4.1193, 0,
3.4557 , 27.1554, 3.8671, 0,
0.02996, 0.18431, 1.4700, 0,
0 , 0 , 0 , 1);
mat4 protanopia = mat4(
0 , 2.02344, -2.52581, 0,
0 , 1 , 0 , 0,
0 , 0 , 1 , 0,
0 , 0 , 0 , 1);
mat4 lmsToRgb = mat4(
0.0809 , -0.1305, 0.1167 , 0,
-0.0102, 0.0540 , -0.1136, 0,
-0.0003, -0.0041, 0.6932 , 0,
0 , 0 , 0 , 1);
void main() {
vec4 source = vec4(1.0, 0.0, 0.0, 1.0);
vec4 lms = source * rgbToLms;
vec4 lmsTarget = lms * protanopia;
vec4 target = lmsTarget * lmsToRgb;
gl_FragColor = target;
}
`);
```

To more easily compare I print out the values:

```
console.log("%c ", printColor(redProtanoptaGl), "redRgbProtanopia (GLSL)", redProtanoptaGl);
```

`printColor` is a little utility function to show the color using the relatively underutilized way of styling console logs:

```
function printColor(color){
return `background-color: rgba(${color[0]*255}, ${color[1]*255}, ${color[2]*255}, ${color[3]}); padding: 8px;`;
}
```

If the first argument contains `%c`, then the second argument is interpreted as a CSS string. It's weird, not all CSS properties are supported, and it doesn't work across multiple arguments, but it allows us to print colors.

I did the same for the JS implementation and now we can compare:

Looking good! But there's definitely a difference in precision (GLSL is more accurate). Still, it's not enough to matter, these are getting the same answer.

## Simplify the matrices

So now I have something functionally equivalent where I can actually inspect the intermediate steps, using the JS implementation and comparing the outputs. My assumption was that intermediate clamping was the reason why SVG filter chaining didn't work. However, it seems like I could just combine the matrices on the right-hand side into a single matrix and that should deal with it. Using the JS implementation I can just take the result of the 3 matrices multiplied together (matrix multiplication isn't commutative, so make sure they're in the right order!).

```
//acquired from: multiplyMatrix(lmsToRgb, multiplyMatrix(protanopia, rgbToLms)). Truncated at 4 decimals.
const protanopiaRgb = [
[0.1121, 0.8853, -0.0005, 0],
[0.1127, 0.8897, -0.0001, 0],
[0.0045, 0.0000, 1.0019, 0],
[0, 0, 0, 1]
];
const deuteranopiaRgb = [
[0.2920, 0.7054, -0.0003, 0],
[0.2934, 0.7089, 0.0000, 0],
[-0.02098, 0.02559, 1.0019, 0],
[0, 0, 0, 1]
];
//⚠see discussion below
const tritanopiaRgb = [
[0.4926, 0.5049, -0.0002, 0],
[0.4940, 0.5084, 0.0001, 0],
[-3.0081, 3.0131, 0.9999, 0],
[0, 0, 0, 1]
];
```

If you haven't done this before or forgot how to multiply matrices (raises hand), here's the process: https://www.mathsisfun.com/algebra/matrix-multiplying.html
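In code, the `multiplyMatrix` helper referenced in the comment above might look like this (my own sketch of a plain row-by-column matrix product, matching the `multiplyByCols` vector convention):

```javascript
// Standard matrix product: out[i][j] = sum over k of a[i][k] * b[k][j].
// With the multiplyByCols vector convention, multiplyMatrix(a, b) means
// "apply b first, then a" -- order matters!
function multiplyMatrix(a, b) {
  return a.map((row, i) =>
    b[0].map((_, j) =>
      row.reduce((sum, aik, k) => sum + aik * b[k][j], 0)));
}

// e.g. the combined protanopia matrix:
// multiplyMatrix(lmsToRgb, multiplyMatrix(protanopia, rgbToLms))
```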

# Back to SVG

So now we have the single matrix for protanopia, let's apply it:

Nope, not quite. That's weird though; we literally have two implementations that show this is correct. I even double-checked the algorithm for matrix multiplication in the filter: https://developer.mozilla.org/en-US/docs/Web/SVG/Element/feColorMatrix. It should be the same. What gives?

To inspect further I built an SVG calculator in a similar vein to the GLSL calculator. It applies a matrix filter and then reads the color out of the canvas. The result:

I still didn't understand what I was looking at so I tried a simpler matrix:

```
const halfRed = [
0.5, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 1, 0];
```

When we apply this to red `1.0, 0.0, 0.0, 1.0` we get:

Huh? We're only taking 0.5 of the red channel, so this value should be `0.5` (128). Does that mean it's broken? I tried again in Firefox: same result. This took some time to figure out.

So remember when I said that images are in RGB color space? Well, there's actually more than one RGB color space. When we do color mixing, it's done in linear RGB space. My guess is that's because it's easier to work in a linear space. The problem is (again) the human eye. We don't see in a linear color space; we actually distinguish more dark shades than light ones, so if the color space is linearly distributed it'll look washed out. So when your monitor displays things it's typically doing a conversion from linear RGB to sRGB (the "s" stands for "standard"). What happened is that the color mixing happened in linear RGB space, ~~but we really wanted it in sRGB space (my understanding is that means our source image was sRGB but interpreted as linear). What it amounts to is that we need a conversion from sRGB to linear RGB at the end that might look something like:~~

[edit] In fact, using linear RGB space was correct. However, you might need to manually add `linearRGB` to the `color-interpolation-filters` attribute, as a Chromium bug caused it to use sRGB by default.

```
//color format for red: [1.0, 0.0, 0.0, 1.0]
function sRgbToLinearRgb(color){
return [...color.slice(0, 3).map(x => x ** 2.2), color[3]];
}
```

However, SVG filters have a property you can use to say "actually, this should be done in sRGB space": `color-interpolation-filters="sRGB"` on the `<filter>` (https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/color-interpolation-filters), and we'll use that instead of doing our own conversion.

So now we can construct the filters:

# Something is very wrong

You might not be able to tell, but having seen a bunch of these images in my research for this project, the tritanopia filter looks wrong. I checked several different sources and they all have the same matrix, and it just doesn't work. I don't know why.

```
//In LMS space and also doesn't work
const tritanopia = [
[1, 0, 0, 0],
[0, 1, 0, 0],
[-0.3959, 0.8011, 0, 0],
[0, 0, 0, 1]
];
```

Using some pure visual estimation I get something closer to:

```
const tritanopiaFixed = [
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0.05, 0, 0],
[0, 0, 0, 1]
];
const tritanopiaRgbFixed = [
[1.01595, 0.1351, -0.1488, 0],
[-0.01542, 0.8683, 0.1448, 0],
[0.1002, 0.8168, 0.1169, 0],
[0, 0, 0, 1]
];
```

I'm not a color vision researcher or anything though, so I don't know what I'm actually missing here. If you do please tell me!

# Achromatopsia

I didn't really talk about this much, but if you have 2 or more types of cones that don't work then you can only perceive brightness. This transform is easy because you can find it everywhere; it's the same transform that's used to calculate luminance (or is it luminosity? luma? I feel these terms are used interchangeably but probably have more specific meanings), and we can apply it directly to the RGB values:

```
//in RGB space
const achromatopsia = [
0.21, 0.72, 0.07, 0,
0.21, 0.72, 0.07, 0,
0.21, 0.72, 0.07, 0,
0, 0, 0, 1];
```
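Applied in JS (re-using a column-pairing multiply like the one from earlier, with the matrix written in nested form), every channel collapses to the same weighted brightness. Pure red, for example, comes out as a dark gray:

```javascript
// Luminance-style weights applied to every channel -> grayscale
const achromatopsia = [
  [0.21, 0.72, 0.07, 0],
  [0.21, 0.72, 0.07, 0],
  [0.21, 0.72, 0.07, 0],
  [0, 0, 0, 1]
];

// Pair each vector component with a matrix column and sum
function multiplyByCols(vector, matrix) {
  return matrix.map(row => row.reduce((sum, m, j) => sum + m * vector[j], 0));
}

multiplyByCols([1, 0, 0, 1], achromatopsia); // [0.21, 0.21, 0.21, 1]
```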

# Can we tell if it works?

If we knew someone with a particular color-blindness then we could ask them if the original and simulated image look the same. I don't know anybody, but another way is to test using a color blindness test. You know, the dots that show a number? It should be unreadable with the appropriate filters on:

The left is the normal image, the right is with the protanopia filter on. I'd say this is good enough.

# Conclusion

We've successfully implemented simulation filters for 4 types of colorblindness in SVG, JS and GLSL. While there's a lot to be said about how accurate the models are, it shouldn't be wildly off base. There's a lot more about CVD research that I'm just learning about so there's likely more ways to improve the model. In fact, the demo page I put together also shows the same models from the paper cited in the Google blog post for comparison.

You can find the code and demo here:

Code: https://github.com/ndesmic/cvd-sim

Demo: http://ndesmic.github.io/cvd-sim

## Top comments (3)

Hi! Someone told me about your post (nicely detailed!) and I have a couple comments:

1) These matrix filters are actually all meant to be applied in the linearRGB space. Unfortunately many open-source implementations got this wrong, but you cannot convert from sRGB to LMS with a single matrix multiplication. So the color-interpolation-filters setting should actually be "linearRGB" and not "sRGB". As an easy way to check whether it's correct: the pure red conversion for protanopia becomes way too dark without it.

2) For tritanopia there is no single matrix that will just work. The only approach I know that works reasonably well is the Brettel, Viénot & Mollon paper from 1997, "Computerized simulation of color appearance for dichromats". It is not much more complex (two matrix multiplications and a dot product to check which one applies), but I'm not sure that it can still be implemented as an SVG filter. One good reference to see how simulated images should look is Vischeck, but unfortunately the online simulator does not work anymore. However, that same code made it into GIMP as a display filter, so it's easy to test.

Another good reference is the Color Blindness Simulation Research post from ixora.io.

Otherwise, I recently wrote a Review of Open Source Color Blindness Simulations that discusses the various approaches, and also a more detailed article, Understanding LMS-based Color Blindness Simulations, that includes a discussion about sRGB vs linearRGB.

Thanks so much for posting this! This is really great information and I'm really happy to receive feedback about how I can improve.

Glad if I can help! Btw, it turns out that the Chrome dev tools had the same issue with sRGB, which was wrongly the default for color-interpolation-filters in the Blink renderer. They have now fixed it: bugs.chromium.org/p/chromium/issue... .