Accessibility First DevRel. I focus on ensuring content created, events held and company assets are as accessible as possible, for as many people as possible.
Really interesting, but I am not sure what processing it is doing.
I tried a mainly white image, 0.75 alpha suggested. 👍
I tried an all black image (except for a tiny bit in the middle that was green), 0.72 alpha. ❓ Surely this should have been next to zero?
Is there something I have misunderstood? I thought the idea was to take the combined alpha background plus the image pixel data behind the overlay to get a minimum contrast ratio.
Or does it find the lightest pixel on the image and adjust the alpha to account for that? That would make sense given the result.
Either way the concept is promising and it could easily be that I am doing it wrong 🤣
Hey, thanks for your interest. Good question! The processing looks for the pixel in the image with the lowest contrast against the text colour, and the suggested alpha has to work for that worst-case pixel. This is a deliberately conservative approach: the text can appear in a different position relative to the background image depending on factors like the size of the user's screen and which font their browser chooses. I am keen to refine it and need to think of a smart way to do so while keeping things accessible for as many users as possible. Hope that answers your question.
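For anyone curious, the conservative approach described above could be sketched roughly like this. This is only an illustration of the idea, not the tool's actual code: it assumes white text, a black overlay, and the WCAG AA 4.5:1 target, and all the function names are hypothetical.

```javascript
// WCAG relative luminance from 0-255 sRGB channels.
function luminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map(v => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// WCAG contrast ratio between two luminances.
function contrast(l1, l2) {
  const [hi, lo] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Alpha-composite an overlay colour over a background pixel.
function composite(overlay, alpha, bg) {
  return overlay.map((o, i) => alpha * o + (1 - alpha) * bg[i]);
}

// Binary-search for the smallest overlay alpha such that EVERY pixel,
// once the overlay is composited over it, meets the target contrast
// with the text colour. The worst-case pixel drives the result, which
// is why a near-black image with one green patch can still need a
// noticeable alpha.
function minOverlayAlpha(pixels, text = [255, 255, 255],
                         overlay = [0, 0, 0], target = 4.5) {
  const lText = luminance(text);
  const meets = a =>
    pixels.every(p =>
      contrast(lText, luminance(composite(overlay, a, p))) >= target);
  if (!meets(1)) return null; // even a solid overlay cannot reach the target
  let lo = 0, hi = 1;
  for (let i = 0; i < 20; i++) {
    const mid = (lo + hi) / 2;
    if (meets(mid)) hi = mid; else lo = mid;
  }
  return hi;
}
```

With this sketch, an all-black image needs almost no overlay for white text, while adding a single white pixel to the image pushes the required alpha up sharply, since that one pixel becomes the worst case.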
I thought there might be a comment from you 😂
If it has the #a11y tag more than likely you will see me pop up! I float around here trying to help (or writing essays for people like I did with you - sorry again 🤣) with accessibility, see what people are up to in the space, steal ideas like the ones in this article 😉 etc.