OK, even I think I have gone too far this time!
Can we convert video to emojis (silly) and then save those emojis in a DB (a row per line of "pixels") and serve the "image" from the DB...apparently yes!
Should you do this? No.
Was it fun? A bit.
Will it make you smile? I hope so, it was a little more work than I thought it was going to be!
To give you an idea of how silly this was, I had to:
- convert a video to frames at a low resolution
- read each of those frames and get the pixel data
- convert that pixel data into emojis
- store those emojis in a DB (Postgres using Supabase)
- create a stored procedure to retrieve a frame so I could make API calls in quick succession to play a "video".
There is a lot to unpack here, and some interesting things we can learn along the way, but first...the demo!
Click the big play button in the following CodePen to see it in all of its glory!
Demo
WARNING: This will use 20MB of data, so if you are on 4G, don't press play if you have an expensive data plan!
The most amazing part of that experiment is that I am calling each frame individually, one after the other, from Supabase. Pretty impressive latency, I have to admit!
More on how I store the data (and how I created it) later.
One last note before the demo
There is a reason we don't store video as emojis, it takes up loads of space.
As the above demo is on the free tier of Supabase, it may suddenly stop working: 5GB of egress only allows the (20MB) video to be played about 250 times before I use too much data!
If you press play and nothing happens, I probably hit that limit. Here is a video of what you would have seen:
Interesting parts of this experiment.
*As always, this is not a tutorial.*
But, there were a few interesting things along the way that may be useful for reference:
Converting and storing the data
There were a few interesting things to consider here.
You see, storing pixel data as emojis is not very efficient (I alluded to this earlier).
So we needed to find a way to minimise the amount of data we stored.
Once I converted the video into frames and down-sampled it I was left with this:
And there were a couple of problems with that.
Removing the black bars.
Ahhh the 80s, where TVs were nearly square and mullets were fashionable.
Unfortunately some things have changed since then.
We now use 16:9 vs 4:3 aspect ratios, which means we have some huge black bars to contend with.
There is no point storing those as emojis.
Now, we will cover retrieving the pixel data later; all you need to know for now is that I retrieve it a row at a time and loop through each pixel on the row.
There were many ways I could have removed those black bars, but I like simple, and my solution was this:
```js
if (pixelCounter > 36 && pixelCounter < 219) {
  // do something with the pixel data, otherwise ignore it
}
```
As my image is 256 pixels wide, I simply skip the black-bar pixels at each end of the row, so I only process the 182 pixels on each row that have meaningful data.
Is it ugly? Yes!
Is it inefficient? Yes!
Does it work? Also yes!
Removing some row data.
The other issue I had was size. (that's what she said...😱)
Even with the black bars removed I still had to store 182 pixels x 144 rows = 26,208 pixels!
The problem with this is that each pixel would be stored as 4 emojis, so we would actually need 104,832 emojis per frame. This was too much!
Luckily the answer was "simple": since we use 4 emojis per "emoji pixel", we only need to sample 1 in every 4 pixels.
We can't just take every 4th pixel along a row though; that would lose too much information horizontally.
So we actually sample a pixel every 2 pixels, every 2 rows.
So we sample:
- x1,y1
- x3,y1
- x5, y1
...
- x1, y3
- x3, y3
- x5, y3
...
etc.
That takes us down to just 6,552 pixels to sample, and with 4 emojis per pixel we are at around 26k emojis per frame (6,552 × 4). That will do!
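If you want to picture it, the sampling loop looks something like this (a minimal sketch; `rows` and the `getPixel` helper are stand-ins of mine, not the original code):

```js
// Sample 1 in every 4 pixels: every 2nd pixel on every 2nd row,
// skipping the black bars at each end of the row.
for (let y = 0; y < rows; y += 2) {
  for (let x = 37; x < 219; x += 2) {
    const { r, g, b } = getPixel(x, y) // hypothetical helper
    // ...convert (r, g, b) to emojis here
  }
}
```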
Encoding the data
The next step is taking that pixel data and converting it to emojis.
Now one way I could have improved this was to take all the emojis and categorise them by colour (closest match to each RGB value).
But that was a little bit too much work for this silly experiment.
So instead what I did was pick just 5 emojis to work with: 🟥🟩🟦⬜⬛
For each pixel we grab the RGB values and then, based on the strength of each channel, show either a coloured emoji or a black or white one.
The logic I went with in the end was as follows:
- check the red value.
- is it greater than 127 (half of 255).
- if "yes", then show the red emoji.
- if "no", then we show either white or black.
- repeat for Green and Blue.
Now you might be wondering, how do we decide whether to show black or white?
We do that by looking at the combined values of R + G + B.
If R + G + B > 500, show the white emoji, otherwise show the black emoji.
This works surprisingly well, for such a simple and "binary" expression of colour.
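In code, that thresholding looks roughly like this (a sketch; the exact layout of the 4 emojis per pixel is my assumption, not the original code):

```js
// Convert one pixel's RGB values into emojis using simple thresholds.
function pixelToEmojis(r, g, b) {
  // Black or white fallback based on overall brightness.
  const bw = r + g + b > 500 ? '⬜' : '⬛'
  return (
    (r > 127 ? '🟥' : bw) + // red channel
    (g > 127 ? '🟩' : bw) + // green channel
    (b > 127 ? '🟦' : bw) + // blue channel
    bw                      // overall brightness
  )
}
```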
Putting it all together.
So listen...before I show you this, bear in mind that this is a single-use, throwaway bit of code, OK?
I don't want you doing a code review on this garbage.
We clear? Good. 🤣
There are a few things I haven't covered that this code does:
- loads an image from an image input onto a canvas and grabs the pixel data.
- puts the image data onto the second canvas (hint: `Uint8ClampedArray` is important for this...I got stuck on that for a while!).
- grabs the image data itself (another useful tip: image data from a Canvas is stored in a "flat array" in 4-byte chunks, so you have to jump 4 array items for each pixel, e.g. R1, G1, B1, A1, then R2, G2, B2, A2 - see the sketch after this list).
- builds an ugly `INSERT` statement to insert our image data into a database table.
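That flat-array layout trips people up, so here is a minimal sketch of walking it (assuming a `canvas` variable; this is not the actual CodePen code):

```js
// Image data from a canvas is a flat Uint8ClampedArray with
// 4 bytes (R, G, B, A) per pixel, so we jump 4 items at a time.
const ctx = canvas.getContext('2d')
const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height)

for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    const i = (y * width + x) * 4
    const r = data[i], g = data[i + 1], b = data[i + 2] // ignore alpha
    // ...convert (r, g, b) to emojis here
  }
}
```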
Here is an image you can download if you want to test it (it ONLY works on this specific image size, and the black-bar removal is hard-coded for these images).
And here is a CodePen with the code I used to convert the frames...ugly but it works.
The last part, the database query
Honestly, this part was super simple thanks to Supabase; there are probably only a couple of points worth covering.
Storing and retrieving the data
I decided to make this a little more fun and store each line of pixels as a row in the database.
Which makes for a really fun visual:
The 2 columns we needed to make sure this worked were the `frameid` column (so we could query all rows for a frame) and the `rownum` column (so that we can order the rows of the frame).
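So each database row looks something like this (illustrative values, sketched as a JS object):

```js
// One database row = one line of emoji "pixels" in one frame.
const row = {
  frameid: 1,         // which frame this line belongs to
  rownum: 12,         // position of this line within the frame
  rowdata: '🟥🟥🟩⬛⬜' // the emoji "pixels" for this line (truncated)
}
```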
Then it was as simple as adding a custom function in the SQL editor that retrieved this information:
```sql
create or replace function get_frames (frame integer)
RETURNS table(frametext text)
LANGUAGE plpgsql
as $$
BEGIN
RETURN QUERY SELECT STRING_AGG("rowdata", CHR(13) ORDER BY "rownum") AS frame
FROM video2
WHERE frameid = frame
GROUP BY frameid
ORDER BY frameid ASC;
end;
$$;
```
It might look complicated, but if we break it down it isn't so bad!
```sql
create or replace function get_frames (frame integer)
```
This first part lets us define a reusable function. This is important as we can then use this function as an API endpoint later!
```sql
RETURNS table(frametext text)
LANGUAGE plpgsql
as $$
```
Here we define the return type. The part in brackets corresponds to the columns we are returning (each column gets a name and a type; here it is a single text column called `frametext`).
```sql
RETURN QUERY SELECT STRING_AGG("rowdata", CHR(13) ORDER BY "rownum") AS frame
FROM video2
WHERE frameid = frame
GROUP BY frameid
ORDER BY frameid ASC;
end;
$$;
```
The last part is our SQL query (and returning its result, as this is a function).
The `STRING_AGG` function is one you might not be familiar with. It joins values from multiple rows into a single string, so we grab the `rowdata` from each row and glue it together with `CHR(13)` (a carriage-return character, which acts as our line break) to build our output string. The `ORDER BY "rownum"` inside `STRING_AGG` makes sure the rows are joined in order, so the image makes sense!
The important part to make this work though is the `GROUP BY` statement. Without it the `STRING_AGG` function won't work, as it won't know which rows it should aggregate together.
And then we just have the outer `ORDER BY` - this makes sure that when we return multiple frames (more on that later) they come back in frame order.
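On the client side, each frame therefore arrives as one string with rows separated by a carriage return, so splitting it back into rows is trivial (a sketch, with `frametext` standing in for the returned string):

```js
// CHR(13) is "\r" in JavaScript, so split the aggregated
// frame string back into its rows of emoji "pixels".
const rows = frametext.split('\r')
```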
Creating our query and API endpoint
The beauty of Supabase is that now we are just a step away from being able to use that function as an API endpoint.
We just need an access policy.
You can find this under "authentication" > "policies" in a project.
Now, I am fairly new to Postgres, but luckily Supabase had a useful template all set up that let me grant Read permissions.
And with that, I can now call that function as an API endpoint!
If you aren't sure where to get that info, it is under "Database" > "Functions" and you will see a row similar to this:
Calling our API endpoint
Last thing we need, getting our data.
For that we need our `SUPABASE_URL` and our key (`SUPABASE_KEY`).
It took me a little while as I was new to Supabase, but they are located under "Project Settings" > "API".
Then all we need is the Supabase SDK (I got it here: https://cdn.jsdelivr.net/npm/@supabase/supabase-js@2.42.0/dist/umd/supabase.min.js) and to create a client with
```js
// "supabase" is the global from the UMD script above.
const client = supabase.createClient(SUPABASE_URL, SUPABASE_KEY)
```
Now we are ready to go!
To call an API endpoint you use `.rpc()`.
So my function looks like this:
```js
client.rpc('get_frames', { frame: num })
  .then(({ data, error }) => {
    if (error) {
      // handle the error
      return
    }
    // do stuff with the frame data
  })
```
Where `get_frames` is the name of the function I created earlier and `{ frame: num }` is the parameter name and the data I want to pass in to the function.
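To play the "video", I call that for each frame in quick succession, something like this (a sketch; the element id and frame count are made up):

```js
// Fetch and display frames one after another.
async function play(totalFrames) {
  const screen = document.getElementById('screen') // a <pre> holding the emojis
  for (let num = 1; num <= totalFrames; num++) {
    const { data, error } = await client.rpc('get_frames', { frame: num })
    if (error) break
    screen.textContent = data[0].frametext // get_frames returns a single row
  }
}

play(500) // hypothetical frame count
```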
And with that, we are done!
Video, encoded in emojis, stored in a DB, served via Supabase APIs!
Edit / note on performance
As @mattlewandowski93 pointed out, this is inefficient as I am requesting a frame at a time, which creates a lot of network requests.
I wanted to test out Supabase's latency, so it was deliberate.
We could change the function that grabs the frames to look like this:
```sql
create or replace function get_all_frames ()
RETURNS table(frametext text)
LANGUAGE plpgsql
as $$
BEGIN
--grab every frame, one aggregated row per frame
RETURN QUERY SELECT STRING_AGG("rowdata", CHR(13) ORDER BY "rownum") AS frame
FROM video2
GROUP BY frameid
ORDER BY frameid ASC;
end;
$$;
```
This returns all frames at once, saving on network requests.
Then we could render it a lot faster (at the expense of initial load time) as we have all the data at once.
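Rendering from the pre-fetched data would then look something like this (a sketch, assuming the `get_all_frames` function above and a `<pre id="screen">` element holding the emojis):

```js
// Fetch every frame up front, then play them back on a timer.
const screen = document.getElementById('screen')

client.rpc('get_all_frames').then(({ data, error }) => {
  if (error) return
  let i = 0
  const timer = setInterval(() => {
    screen.textContent = data[i].frametext // one returned row per frame
    if (++i >= data.length) clearInterval(timer)
  }, 1000 / 12) // ~12fps; adjust to taste
})
```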
It was a great point that I hadn't explained, so I thought I would include it here to explain why I did it this way.
You might be asking, WHY?
A valid question!
I like to learn through silly things like this.
I never really used Supabase and they asked me if I wanted to write an article for them to celebrate the Supabase launch week.
So I wanted to try them out. (In retrospect, doing something "normal" might have been a better idea, so I didn't eat through my 5GB of free egress data, which is very generous for most applications but not for this! haha)
I am sure when they asked me to write they were expecting a tutorial, or something useful, but we all know I don't do that!
Additionally, I also wanted to learn a little more about the `<canvas>` element and how to manipulate pixel data, as I have an even sillier idea using that (Doom 3...in the browser...in emojis - yes, I am serious! 🤣).
Anyway, hopefully you learned a little bit too, but if not, I hope you at least enjoyed my silliness.
Oh, and last thing: I rick rolled you, in emojis.
That is a huge W for me and an even bigger L for you I reckon, so that is another reason why I did this! 🤣💗
Have a great week everyone and let me know what you thought in the comments.
Top comments (17)
This is great haha. I can't help but want to increase the performance even more.
Have you considered returning more than 1 frame of data from the API at a time? If the size isn't that big, you could request 60 frames at a time and you might get a smooth playback. Might have to also update your postgres fn though.
Also curious if you played around with different table formats and query performance. aggregating json objects can be pretty slow, but adding an index plus a third column for each individual px on the row might be faster!
Would love to see a 30fps version of this 😆
It is deliberately one frame at a time to test out Supabase latency.
The function is already written for all frames 😁
I changed the article and added a section to explain why I did it this way as it is a great point! 🙏🏼💗
Ah I see! seems to be the latency killing it here in Australia then. Getting about 1 second of latency per request.
Ahhhh, yeah, the server I chose is in the US.
Another optimisation we could do there: save the data as a file and serve it from a CDN.
Sorry it isn't as fun for you, it is only 90ms here in the UK so I didn't consider how much worse it could be there.
Hopefully the video shows you what it should look like 🥲
I'm surprised you didn't use `line-height` and `letter-spacing` to reduce the gaps 😲

Oh I did try that actually, but it lost the “charm” of being individual emojis.
Plus you get some strange lines due to it being such a small font size. 😢💗
I can't believe it plays out!! I was expecting a stop motion movie, but the framerate is actually not bad! There goes my next vacation week trying to reproduce this 😂
Yeah it is decent framerate on Wifi with a decent connection. But someone did point out that they were in Australia and the round trip time meant they saw 1FPS! Also sucks on 3G/4G! haha.
If you do reproduce it (and make it better!) send me a link, would love to see it! 💗
I've been Rick rolled 😮
Remoji rolled! 🤷🤣💗
And people wonder why nobody ever asks me to write for them!
Do me a favour, tweet at Supabase if you enjoyed this article, that way they might actually ask me to write for them again! 🤷🏼♂️🤣💗
This is an amazing blog, Graham!
🙏💗
The idea itself is insane. Amazing work!
Haha pretty much sums up this whole series! Glad you liked it though. 💗
Great project tbh!
That is the way to learn, doing crazy things, congrats!