Exploring Color Math Through Color Blindness 2: Partial Deficiency

When I last looked at how to emulate color vision deficiency, I only covered total deficiency. A friendly user popped in and asked a couple of questions about how to do partial color vision deficiency. Sometimes a cone isn't simply on or off but sits in some in-between state. How could we emulate that?

As it turns out, it's not too hard. If you've read my other posts, I've written a bit about linear interpolation or "lerps" before, and here is another great use-case for them! What we'd like is to create a system of matrices based on a spectrum of vision deficiencies.

Let's review. Last time we got some data and did some matrix ops, being careful to do the multiplication the right way to transform from RGB to the LMS color space (and to avoid sRGB conversions). Then we applied a matrix for each of the 3 types of color vision deficiency: protanopia (red cone), deuteranopia (green), and tritanopia (blue). Then we converted back to RGB space to get the final image, which is the image as it would be seen by someone with that deficiency. Finally we simplified by combining all those operations into a single matrix representing the RGB-to-RGB conversion for that type of vision deficiency.
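To make "combine" concrete: the single RGB-to-RGB matrix is just the product of the three steps. Here's a minimal sketch, assuming a plain matrix-product helper (the names here are mine; rgbToLms, lmsToRgb and the LMS-space protanopia matrix are the ones from last time, repeated in the LMS example later in this post):

function multiplyMatrix(a, b) {
    //standard matrix product: rows of a against columns of b
    return a.map(row =>
        b[0].map((_, j) => row.reduce((sum, value, k) => sum + value * b[k][j], 0))
    );
}

//convert to LMS, apply the deficiency, convert back (column-vector order)
const combinedProtanopia = multiplyMatrix(lmsToRgb, multiplyMatrix(protanopiaLms, rgbToLms));
//≈ the protanopiaRgb matrix below (small differences are just rounding)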

The final matrices:

const protanopiaRgb = [
  [0.1121, 0.8853, -0.0005, 0],
  [0.1127, 0.8897, -0.0001, 0],
  [0.0045, 0.0000, 1.0019, 0],
  [0, 0, 0, 1]
];

const deuteranopiaRgb = [
  [0.2920, 0.7054, -0.0003, 0],
  [0.2934, 0.7089, 0.0000, 0],
  [-0.02098, 0.02559, 1.0019, 0],
  [0, 0, 0, 1]
];

//⚠see notes in previous post
const tritanopiaRgb = [
  [1.01595, 0.1351, -0.1488, 0],
  [-0.01542, 0.8683, 0.1448, 0],
  [0.1002, 0.8168, 0.1169, 0],
  [0, 0, 0, 1]
];

What we can do is linearly interpolate from "normal" vision to one of these. So what's "normal" vision in matrix form? It's the identity matrix: all 1s down the diagonal and 0s everywhere else.

const normalVision = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];

So we have the start and end points of our interpolation. How do we interpolate? In this case we're lerping whole matrices instead of vectors, but that doesn't actually change anything: 3d vector or 4x4 matrix, it's the same element-by-element process.

export function lerp(start, end, t) {
    const result = [];
    for (let row = 0; row < start.length; row++) {
        const newRow = [];
        for (let col = 0; col < start[0].length; col++) {
            newRow.push(start[row][col] + (end[row][col] - start[row][col]) * t)
        }
        result.push(newRow);
    }
    return result;
}

This is an M x N matrix lerp. Now we have enough to get partial deficiency:

const color = [1,0,0,1];

const protanopia = [
  [0.1120, 0.8853, -0.0005, 0],
  [0.1126, 0.8897, -0.0001, 0],
  [0.0045, 0.0001, 1.00191, 0],
  [0, 0, 0, 1]
];
const normalVision = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];
//0 <= t <= 1
const partial = Matrix.lerp(normalVision, protanopia, t);
const result = Matrix.crossMultiplyMatrixVector(color, partial);
return result;

Where t is the amount of deficiency between 0 (none) and 1 (full).
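Continuing that snippet, here's a quick sanity check at the halfway point; each entry of the lerped matrix is just the average of the identity matrix and the protanopia matrix:

const halfway = Matrix.lerp(normalVision, protanopia, 0.5);
//first row ≈ [0.5560, 0.44265, -0.00025, 0]
//e.g. the top-left entry is 1 + (0.1120 - 1) * 0.5 = 0.5560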

(Screenshot of the result)

This looks about right.

Did we get it right?

It looks good, but another question was brought up: does it matter whether we do the lerp on the RGB-space matrix versus doing it in LMS space? I was about 90% sure it didn't, but it never hurts to test our understanding. Let's try it again, this time lerping the LMS-space matrix:

const rgbToLms = [
  [17.8824, 43.5161, 4.1193, 0],
  [3.4557, 27.1554, 3.8671, 0],
  [0.02996, 0.18431, 1.4700, 0],
  [0, 0, 0, 1]
];
const lmsToRgb = [
  [0.0809, -0.1305, 0.1167, 0],
  [-0.0102, 0.0540, -0.1136, 0],
  [-0.0003, -0.0041, 0.6932, 0],
  [0, 0, 0, 1]
];
const protanopia = [
  [0, 2.02344, -2.52581, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];
const normalVision = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1]
];
const result = Matrix.crossMultiplyMatrixVector(
  Matrix.crossMultiplyMatrixVector(
    Matrix.crossMultiplyMatrixVector(color, rgbToLms),
    Matrix.lerp(normalVision, protanopia, 0.0)
  ),
  lmsToRgb
);

return result;

The result:

(Screenshot of the result)

That looks the same to me, and it makes sense: the RGB transform is just the LMS version with the up/down conversions baked in, so we'd expect it to behave the same way. I'm about 99% convinced at this point, but maybe it just happens to pass through those same points while the "curve" between them is different? The intuition is that these are all linear transforms, so the interpolation and the change of color space should commute, but it can't hurt to test that too.
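To convince ourselves with numbers rather than eyeballs, here's a quick check using the matrices above plus the combined protanopiaRgb matrix from the top of the post: lerp in RGB space, lerp in LMS space, apply both to the same color and compare. Because lerp is entry-wise linear, the LMS route works out to (1 - t)·lmsToRgb·rgbToLms + t·lmsToRgb·protanopia·rgbToLms, which matches the RGB route as long as lmsToRgb·rgbToLms is (approximately) the identity.

const color = [1, 0, 0, 1];
const t = 0.5;

//Route 1: lerp the combined RGB-to-RGB matrix
const viaRgb = Matrix.crossMultiplyMatrixVector(
    color,
    Matrix.lerp(normalVision, protanopiaRgb, t)
);

//Route 2: go to LMS, lerp the LMS-space matrix, come back
const viaLms = Matrix.crossMultiplyMatrixVector(
    Matrix.crossMultiplyMatrixVector(
        Matrix.crossMultiplyMatrixVector(color, rgbToLms),
        Matrix.lerp(normalVision, protanopia, t)
    ),
    lmsToRgb
);

console.log(viaRgb, viaLms); //should agree to within the rounding of the published matrices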

Varying by t

We've already covered the math but let's take a look at how the wc-cpu-shader-canvas works:

import * as Matrix from "../lib/matrix.js";

function loadImage(url) {
    return new Promise((res, rej) => {
        const image = new Image();
        image.src = url;
        image.onload = () => res(image);
        image.onerror = rej;
    });
}

export class WcCpuShaderCanvas extends HTMLElement {
    #image;
    #height = 100;
    #width = 100;
    #context;
    #mod;

    static observedAttributes = ["image", "height", "width", "src"];
    constructor() {
        super();
        this.bind(this);
    }
    bind(element) {
        this.render = this.render.bind(element);
        this.update = this.update.bind(element);
    }
    render() {
        this.attachShadow({ mode: "open" });
        this.shadowRoot.innerHTML = `
            <style>
             :host {
                 display: block;
             }
            </style>
            <canvas width="${this.#width}px" height="${this.#height}px"></canvas>
        `;
    }
    connectedCallback() {
        this.render();
        this.cacheDom();
        this.#context = this.dom.canvas.getContext("2d");
        this.update();
    }
    cacheDom() {
        this.dom = {
            canvas: this.shadowRoot.querySelector("canvas")
        };
    }
    attributeChangedCallback(name, oldValue, newValue) {
        if(oldValue !== newValue){
            this[name] = newValue
        }
    }
    update(){
        const program = this.#mod
            ? this.#mod.default
            : this.textContent.trim() !== "" 
                ? new Function(["color", "Matrix"], this.textContent)
                : null;

        if(!program || !this.#context) return;
        this.#context.reset();
        if(this.#image){
            this.#context.drawImage(this.#image, 0, 0);
        }

        const imageData = this.#context.getImageData(0, 0, this.#width, this.#height)
        let i = 0;
        while(i < imageData.data.length){
            const data = imageData.data;
            const pixel = program([
                data[i] / 255,
                data[i + 1] / 255,
                data[i + 2] / 255,
                data[i + 3] / 255,
            ], Matrix);
            data[i] = Math.floor(pixel[0] * 255);
            data[i + 1] = Math.floor(pixel[1] * 255);
            data[i + 2] = Math.floor(pixel[2] * 255);
            data[i + 3] = Math.floor(pixel[3] * 255);
            i += 4;
        }
        this.#context.putImageData(imageData, 0, 0);
    }
    set image(val) {
        loadImage(val)
            .then(img => {
                this.#image = img;
                this.update();
            });
    }
    set src(val){
        import(val)
            .then(mod => {
                this.#mod = mod;
                this.update();
            });
    }
    set height(val) {
        val = parseInt(val);
        this.#height = val;
        if(this.dom){
            this.dom.canvas.height = val;
        }
    }
    set width(val) {
        val = parseInt(val);
        this.#width = val;
        if(this.dom){
            this.dom.canvas.width = val;
        }
    }
}

customElements.define("wc-cpu-shader-canvas", WcCpuShaderCanvas);

It's pretty simple. We have a loadImage utility to load an image, which gets drawn to the canvas, and some setters for attributes that are mostly self-explanatory. src works like a script tag: if it's present we fetch the script from that source, otherwise we use the element's textContent. This is nice because we can debug external scripts and use modules. For inline scripts we use the Function constructor with the text as the body (a form of eval, so it's a bit naughty to use in production); for a module we dynamically import it and call its default export. We draw the image and then iterate over each pixel, passing it into the function (converting to a float4 with values between 0 and 1 by dividing by 255).
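For reference, here's a minimal usage sketch wiring the element up from script rather than markup (the attribute names come from the component above; the image path and sizes are just placeholders):

const canvas = document.createElement("wc-cpu-shader-canvas");
canvas.setAttribute("image", "./img/color-test.svg");
canvas.setAttribute("width", "320");
canvas.setAttribute("height", "240");
//inline program body: receives (color, Matrix) and must return a float4.
//this one just passes the color through unchanged
canvas.textContent = "return color;";
document.body.appendChild(canvas);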

Let's add the concept of a global.

//add to observed attributes!
#globals;
set globals(val){
    val = typeof(val) === "object" ? val : JSON.parse(val);
    this.#globals = val;
    this.update();
}

We'll parse an object out of the globals attribute and update the canvas whenever it changes. The typeof check is there in case we pass it in programmatically rather than through an attribute: if it's already an object we don't need to parse it, which skips the JSON overhead. Make sure to add it to observedAttributes! Finally we just update the program function to take it as the 3rd parameter.

update(){
    const program = this.#mod
        ? this.#mod.default
        : this.textContent.trim() !== "" 
            ? new Function(["color", "Matrix", "Globals"], this.textContent)
            : null;
    if(!program || !this.#context) return;
    this.#context.reset();
    if(this.#image){
        this.#context.drawImage(this.#image, 0, 0);
    }
    const imageData = this.#context.getImageData(0, 0, this.#width, this.#height);
    let i = 0;
    while(i < imageData.data.length){
        const data = imageData.data;
        const pixel = program([
            data[i] / 255,
            data[i + 1] / 255,
            data[i + 2] / 255,
            data[i + 3] / 255,
        ], Matrix, this.#globals);
        data[i] = Math.floor(pixel[0] * 255);
        data[i + 1] = Math.floor(pixel[1] * 255);
        data[i + 2] = Math.floor(pixel[2] * 255);
        data[i + 3] = Math.floor(pixel[3] * 255);
        i += 4;
    }
    this.#context.putImageData(imageData, 0, 0);
}
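With that in place, the inline protanomaly program from earlier only needs to read t from the globals instead of hard-coding it. Here's a sketch of what the element's textContent might look like (the optional chaining is just defensive, in case the first update runs before globals are set):

const protanopia = [
    [0.1120, 0.8853, -0.0005, 0],
    [0.1126, 0.8897, -0.0001, 0],
    [0.0045, 0.0001, 1.00191, 0],
    [0, 0, 0, 1]
];
const normalVision = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]
];
//Globals is the third parameter we just added; t comes from the animation loop below
const partial = Matrix.lerp(normalVision, protanopia, Globals?.t ?? 0);
return Matrix.crossMultiplyMatrixVector(color, partial);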

We can then create a simple animation loop varying t:

const protanomalyRgbJs = document.querySelector("#protanomaly-rgb-js");
const protanomalyLmsJs = document.querySelector("#protanomaly-lms-js");

function draw() {
    requestAnimationFrame(() => {
        const now = performance.now();
        protanomalyRgbJs.globals = { t: (now % 5000) / 5000 };
        protanomalyLmsJs.globals = { t: (now % 5000) / 5000 };
        draw();
    });
}
draw();

This will change the value of t between 0 and 1 over 5 seconds for both shader canvases.

(Animation of both canvases as t varies)

Looks the same to me.

GLSL

We can do the same thing for GLSL.

export class WcGpuShaderCanvas extends HTMLElement {
    static observedAttributes = ["image", "height", "width", "colors"];
    #height = 240;
    #width = 320;
    #image;
    #colors;
    #setReady;
    ready = new Promise(res => { this.#setReady = res; });
    constructor() {
        super();
        this.bind(this);
    }
    bind(element) {
        element.attachEvents = element.attachEvents.bind(element);
        element.cacheDom = element.cacheDom.bind(element);
        element.createShadowDom = element.createShadowDom.bind(element);
        element.bootGpu = element.bootGpu.bind(element);
        element.compileShaders = element.compileShaders.bind(element);
        element.attachShaders = element.attachShaders.bind(element);
        element.render = element.render.bind(element);
    }
    async connectedCallback() {
        this.createShadowDom();
        this.cacheDom();
        this.attachEvents();
        await this.bootGpu();
        this.render();
        this.#setReady();
    }
    createShadowDom() {
        this.attachShadow({ mode: "open" });
        this.shadowRoot.innerHTML = `
                <style>
                    :host { display: block; }
                    #message { display: none; }
                </style>
                <canvas width="${this.#width}px" height="${this.#height}px"></canvas>
                <div id="message"></div>
            `;
    }
    cacheDom() {
        this.dom = {};
        this.dom.canvas = this.shadowRoot.querySelector("canvas");
        this.dom.message = this.shadowRoot.querySelector("#message");
    }
    attachEvents() {

    }
    async bootGpu() {
        this.context = this.dom.canvas.getContext("webgl2", { preserveDrawingBuffer: true });
        this.program = this.context.createProgram();
        this.compileShaders();
        this.attachShaders();
        this.context.linkProgram(this.program);
        this.context.useProgram(this.program);
        this.createPositions();
        this.createUvs();
        this.createIndicies();
        this.createColors();
        if(this.#image){
            this.createTexture(await loadImage(this.#image));
        }
    }
    createPositions() {
        const positions = new Float32Array([
            -1.0, -1.0,
            1.0, -1.0,
            1.0, 1.0,
            -1.0, 1.0
        ]);
        const positionBuffer = this.context.createBuffer();
        this.context.bindBuffer(this.context.ARRAY_BUFFER, positionBuffer);
        this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.STATIC_DRAW);

        const positionLocation = this.context.getAttribLocation(this.program, "aVertexPosition");
        this.context.enableVertexAttribArray(positionLocation);
        this.context.vertexAttribPointer(positionLocation, 2, this.context.FLOAT, false, 0, 0);
    }
    createColors(){
        const colors = new Float32Array(this.#colors);
        const colorBuffer = this.context.createBuffer();
        this.context.bindBuffer(this.context.ARRAY_BUFFER, colorBuffer);
        this.context.bufferData(this.context.ARRAY_BUFFER, colors, this.context.STATIC_DRAW);

        const colorLocation = this.context.getAttribLocation(this.program, "aVertexColor");
        this.context.enableVertexAttribArray(colorLocation);
        this.context.vertexAttribPointer(colorLocation, 4, this.context.FLOAT, false, 0, 0);
    }
    createUvs() {
        const uvs = new Float32Array([
            0.0, 1.0,
            1.0, 1.0,
            1.0, 0.0,
            0.0, 0.0
        ]);
        const uvBuffer = this.context.createBuffer();
        this.context.bindBuffer(this.context.ARRAY_BUFFER, uvBuffer);
        this.context.bufferData(this.context.ARRAY_BUFFER, uvs, this.context.STATIC_DRAW);

        const texCoordLocation = this.context.getAttribLocation(this.program, "aTextureCoordinate");
        this.context.enableVertexAttribArray(texCoordLocation);
        this.context.vertexAttribPointer(texCoordLocation, 2, this.context.FLOAT, false, 0, 0);
    }
    createIndicies() {
        const indicies = new Uint16Array([
            0, 1, 2,
            0, 2, 3
        ]);
        const indexBuffer = this.context.createBuffer();
        this.context.bindBuffer(this.context.ELEMENT_ARRAY_BUFFER, indexBuffer);
        this.context.bufferData(this.context.ELEMENT_ARRAY_BUFFER, indicies, this.context.STATIC_DRAW);
    }
    createTexture(image) {
        const texture = this.context.createTexture();
        this.context.bindTexture(this.context.TEXTURE_2D, texture);

        this.context.texParameteri(this.context.TEXTURE_2D, this.context.TEXTURE_WRAP_S, this.context.CLAMP_TO_EDGE);
        this.context.texParameteri(this.context.TEXTURE_2D, this.context.TEXTURE_WRAP_T, this.context.CLAMP_TO_EDGE);
        this.context.texParameteri(this.context.TEXTURE_2D, this.context.TEXTURE_MIN_FILTER, this.context.NEAREST);
        this.context.texParameteri(this.context.TEXTURE_2D, this.context.TEXTURE_MAG_FILTER, this.context.NEAREST);

        this.context.texImage2D(this.context.TEXTURE_2D, 0, this.context.RGBA, this.context.RGBA, this.context.UNSIGNED_BYTE, image);
    }
    compileShaders() {
        const vertexShaderText = `
                attribute vec3 aVertexPosition;
                attribute vec2 aTextureCoordinate;
                attribute vec4 aVertexColor;
                varying vec2 vTextureCoordinate;
                varying vec4 vColor;

                void main(){
                    gl_Position = vec4(aVertexPosition, 1.0);
                    vTextureCoordinate = aTextureCoordinate;
                    vColor = aVertexColor;
                }
            `;
        this.vertexShader = this.context.createShader(this.context.VERTEX_SHADER);
        this.context.shaderSource(this.vertexShader, vertexShaderText);
        this.context.compileShader(this.vertexShader);

        if (!this.context.getShaderParameter(this.vertexShader, this.context.COMPILE_STATUS)) {
            this.setMessage(`⚠ Failed to compile vertex shader: ${this.context.getShaderInfoLog(this.vertexShader)}`);
        }

        const fragmentShaderText = this.textContent;
        this.fragmentShader = this.context.createShader(this.context.FRAGMENT_SHADER);
        this.context.shaderSource(this.fragmentShader, fragmentShaderText);
        this.context.compileShader(this.fragmentShader);

        if (!this.context.getShaderParameter(this.fragmentShader, this.context.COMPILE_STATUS)) {
            this.setMessage(`⚠ Failed to compile fragment shader: ${this.context.getShaderInfoLog(this.fragmentShader)}`);
        }
    }
    setMessage(message){
        this.dom.message.textContent = message;
        this.dom.message.style.display = "block";
        this.dom.canvas.style.display = "none";
    }
    unsetMessage(){
        this.dom.message.textContent = "";
        this.dom.message.style.display = "none";
        this.dom.canvas.style.display = "block";
    }
    attachShaders() {
        this.context.attachShader(this.program, this.vertexShader);
        this.context.attachShader(this.program, this.fragmentShader);
    }
    render() {
        this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT);
        this.context.drawElements(this.context.TRIANGLES, 6, this.context.UNSIGNED_SHORT, 0);
    }
    attributeChangedCallback(name, oldValue, newValue) {
        if (newValue !== oldValue) {
            this[name] = newValue;
        }
    }
    set height(value) {
        this.#height = value;
        if(this.dom){
            this.dom.canvas.height = value;
        }
    }
    set width(value) {
        this.#width = value;
        if(this.dom){
            this.dom.canvas.width = value;
        }
    }
    set image(value) {
        this.#image = value;
        loadImage(value)
            .then(img => this.createTexture(img));
    }
    set colors(value){
        this.#colors = value.split(/[,;\s]\s*/g).map(x => parseFloat(x.trim()));
    }
    get pixelData(){
        return this.ready.then(() => {
            const array = new Uint8Array(this.#height * this.#width * 4);
            this.context.readPixels(0, 0, this.#width, this.#height, this.context.RGBA, this.context.UNSIGNED_BYTE, array);
            return [...array].map(x => x / 255);
        });
    }
    //TODO: throw away program on detach
}

customElements.define("wc-gpu-shader-canvas", WcGpuShaderCanvas);

I'm not going to cover how this works as it's nearly identical to https://dev.to/ndesmic/webgl-3d-engine-from-scratch-part-1-drawing-a-colored-quad-2n48. There's also the same image-fetching function, which I omitted. What we will do is add the ability to have globals, just like we did with the CPU shader canvas. I'm also omitting the boilerplate to set up the attribute as it's exactly the same (sketched below for completeness).
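Roughly, the omitted plumbing is the CPU version again, plus adding "globals" to observedAttributes:

#globals;
set globals(val){
    //accept either an object (set from script) or a JSON string (set via attribute)
    val = typeof(val) === "object" ? val : JSON.parse(val);
    this.#globals = val;
    this.update();
}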

What we do need to change is taking the globals and binding them as uniforms:

createUniforms(){
    if(!this.#globals) return;
    Object.entries(this.#globals).forEach(([key, val]) => {
        const location = this.context.getUniformLocation(this.program, key);
        if(!location) return;
        if(Array.isArray(val)){
            switch(val.length){
                case 1: {
                    this.context.uniform1fv(location, val);
                    break;
                }
                case 2: {
                    this.context.uniform2fv(location, val);
                    break;
                }
                case 3: {
                    this.context.uniform3fv(location, val);
                    break;
                }
                case 4: {
                    this.context.uniform4fv(location, val);
                    break;
                }
                default: {
                    console.error(`Invalid dimension for binding uniforms. ${key} with value of length ${val.length}`);
                }
            }
        } else {
            this.context.uniform1f(location, val);
        }
    });
}

We iterate through the key/value pairs and get a uniform location for each one (or exit early if no globals are defined). If there's no location (the name isn't present in the shader program) we skip it. If it exists we need to bind it with the right type. There are a few basic types in GLSL, but to keep the API simple this only injects a single float or a vector of floats, no ints or matrices (bools are just non-zero values). So we check whether the value is an array; if not it's a single float, otherwise the length decides between a one-element float array, vec2, vec3 or vec4, and we call the appropriate method.

We also need to make sure that binding and rendering happen together so that we can re-render with new values. For this I combine the uniform binding and the render call into an update method:

//make sure you bind these methods to the custom element class!
update(){
    if(!this.context) return;
    this.createUniforms();
    this.render();
}

Initial attribute updates will trigger before connectedCallback, so we need the guard to prevent updating before the WebGL context exists.
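The fragment shader itself lives in the element's textContent, so it doesn't appear in the component code. A partial-protanopia shader might look something like the following, written here as a JS template string the same way the vertex shader is embedded above. This is just a sketch: the uniform name uSampler and the manual entry-wise matrix blend are my choices, not anything the component requires, and it assumes the texture is on unit 0; t matches the global we bind.

const fragmentShaderText = `
    precision mediump float;
    uniform sampler2D uSampler;
    uniform float t;
    varying vec2 vTextureCoordinate;

    void main() {
        //columns of the combined protanopia RGB matrix from earlier
        mat3 protanopia = mat3(
            0.1120, 0.1126, 0.0045,
            0.8853, 0.8897, 0.0001,
            -0.0005, -0.0001, 1.00191
        );
        vec4 color = texture2D(uSampler, vTextureCoordinate);
        //entry-wise lerp between the identity matrix and the protanopia matrix
        mat3 partial = mat3(1.0) * (1.0 - t) + protanopia * t;
        gl_FragColor = vec4(partial * color.rgb, color.a);
    }
`;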

Now we just hook it up to the animation loop the same way:

const protanomalyRgbJs = document.querySelector("#protanomaly-rgb-js");
const protanomalyLmsJs = document.querySelector("#protanomaly-lms-js");
const protanomalyRgbGlsl = document.querySelector("#protanomaly-rgb-glsl");

function draw() {
    requestAnimationFrame(() => {
        const now = performance.now();
        protanomalyRgbJs.globals = { t: (now % 5000) / 5000 };
        protanomalyLmsJs.globals = { t: (now % 5000) / 5000 };
        protanomalyRgbGlsl.globals = { t: (now % 5000) / 5000 };
        draw();
    });
}
draw();

And it works the same way.

SVG

SVG isn't going to work the same way. Basically we're going to need to do the math in JS and then update the SVG filter.

Calculating in JS, updating the DOM

<svg height="0" width="0">
    <defs>
        <filter id="protanomaly-filter" color-interpolation-filters="sRGB">
            <feColorMatrix type="matrix" id="protanomaly-color-matrix" />
        </filter>
    </defs>
</svg>
<img src="./img/color-test.svg" style="filter: url(#protanomaly-filter);">

We're starting by setting up the SVG, filter and color matrix element and applying the filter to the image.

In JS we can compute the matrix almost like normal:

//This is feColorMatrix 5x4 format!
function computeProtanomaly(t){
    const protanopia = [
        [0.1120, 0.8853, -0.0005, 0, 0],
        [0.1126, 0.8897, -0.0001, 0, 0],
        [0.0045, 0.0001, 1.00191, 0, 0],
        [0, 0, 0, 1, 0]
    ];
    const normalVision = [
        [1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 1, 0]
    ];

    return lerp(normalVision, protanopia, t);
}

Remember that feColorMatrix uses 5x4 matrices, so this version is somewhat specific: each row gets an extra 0 tacked on for the constant offset column.

I'll omit the boilerplate but the updating inside the request animation frame works the same way:

const protanomalyColorMatrix = document.querySelector("#protanomaly-color-matrix");
//...
//inside the requestAnimationFrame callback
protanomalyColorMatrix.setAttribute("values", computeProtanomaly((now % 5000) / 5000).flat().join(" "));

We call flat to remove the nesting and then join to produce the space-separated string of 20 values that make up the feColorMatrix's values attribute.
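For example, at t = 1 the attribute is just the protanopia matrix read off row by row (give or take floating-point noise from the lerp):

computeProtanomaly(1).flat().join(" ");
//0.112 0.8853 -0.0005 0 0
//0.1126 0.8897 -0.0001 0 0
//0.0045 0.0001 1.00191 0 0
//0 0 0 1 0   (joined into one space-separated string of 20 numbers)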

This should now work too.

Can we use pure CSS?

I was hoping to be able to do this, but my hopes were dashed. Let's start with something relatively simple and work our way up:

<style>
    #css-filter-test {
        filter: url('data:image/svg+xml,\
            <svg xmlns="http://www.w3.org/2000/svg">\
                <filter id="x" color-interpolation-filters="sRGB">\
                    <feColorMatrix values="\
                        0.1120 0.8853 -0.0004 0 0\
                        0.1126 0.8897 -0.0001 0 0\
                        0.0045 0.0000 1.00191 0 0\
                        0 0 0 1 0\
                    " />\
                </filter>\
            </svg>#x');
    }
</style>
<img src="./img/color-test.svg" id="css-filter-test">

This applies a protanopia filter to the image using just CSS by embedding the SVG in the filter as a data URL. Things to keep in mind: the MIME type is image/svg+xml; the xmlns namespace http://www.w3.org/2000/svg is required since the SVG isn't in the DOM; the trailing \ keeps the string from terminating at each new-line so it stays somewhat readable; and, though it's easy to miss, the URL ends with #x, the fragment that identifies the filter with id x, which we also need. If you try this technique for other things, be careful with quoting and with using # or ? inside the SVG text, since those will be interpreted as part of the URL and break things; you'll need to url-encode those characters if present.

However, here is where things end. There is no way to use the value of a custom property here: in CSS, url(...) is parsed as a single token, which means we can't concatenate our way to a new URL; it has to be one complete, valid URL. Even @property doesn't help, since the <url> syntax type still needs an entire URL (e.g. url(var(--foo)) literally produces the string "url(var(--foo))" without substitution). Without the ability to substitute partial values we're stuck. We can certainly do the math in JS and build the SVG for CSS, but we can't do any of it natively in CSS. Stick that one on the wish list: a concatenatable url syntax.
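So the practical pattern is the hybrid one: compute the matrix in JS and hand CSS the whole data URL. Here's a minimal sketch reusing computeProtanomaly and the element id from the earlier examples; encodeURIComponent handles characters like # that would otherwise be read as part of the URL:

function setProtanomalyFilter(element, t) {
    const values = computeProtanomaly(t).flat().join(" ");
    const svg =
        `<svg xmlns="http://www.w3.org/2000/svg">` +
            `<filter id="x" color-interpolation-filters="sRGB">` +
                `<feColorMatrix values="${values}" />` +
            `</filter>` +
        `</svg>`;
    //the #x fragment stays outside the encoded part so it still reads as a fragment
    element.style.filter = `url("data:image/svg+xml,${encodeURIComponent(svg)}#x")`;
}

setProtanomalyFilter(document.querySelector("#css-filter-test"), 0.5);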

Demo: https://gh.ndesmic.com/cvd-sim/partial
Code: https://github.com/ndesmic/cvd-sim
