Niklas Lepistö

Building an AI powered WebGL experience with Supabase and React Three Fiber

TL;DR:

  • Baldur's Gate 3 inspired WebGL experience called Mirror of Loss
  • Demo video
  • GitHub repo
  • React Three Fiber with Drei makes things simple
  • Tech stack used: React, React Three Fiber, Vite, Supabase, OpenAI, Stable Diffusion, Stable Audio

I recently participated in the Supabase Launch Week X Hackathon where I ended up dabbling in WebGL and created a project called Mirror of Loss. I had a lot of fun working on it, and figured it would be nice to share a bit about it here, too.

I've participated in three different Launch Week Hackathons before, and I've always tried to do something a bit outside my regular web dev work. Usually they turn out to be more of an experience than an app. The hackathons run for a week, so it's a good time to focus on and learn something new and cool!

Note: this article will not go into every detail of how to build a similar kind of WebGL app with React Three Fiber and Supabase. For example, installation instructions can be found on the libraries' own websites instead of being repeated here.

This article just provides the bigger picture of how you can build WebGL apps & experiences by sharing my experience with it during the Supabase hackathon.

Not all the code will be displayed here, as the article would grow far too long. It is, however, open source, so you can find all the little details in the GitHub repo.

The idea

Recently I've been indulging in Baldur's Gate 3, and what better way to show your appreciation as a fan than to create something (close to it) yourself! The game introduces a Mirror of Loss (spoiler warning), and since I'm very intrigued by the aesthetics (and the whole story/concept) of Shar in the game, I thought it would be nice to do my own representation of it. And in 3D/WebGL! However, I wasn't originally planning on doing this during the hackathon: I just wanted it to be a new art project for myself. In the end I decided to try and create it within a week, since the hackathon seemed like a good time to do it.

Preparation

So before the hackathon started, around one or two weeks prior, I started wondering how to build this thing in WebGL. I was aware of Three.js, which I had dabbled with a bit in the past, however it seemed a bit intimidating. No way I'd have time to learn the vanilla way of creating WebGL experiences. Luckily I had heard about React Three Fiber before, although I hadn't paid a lot of attention to it. And boy, I was really happy with what I read in their documentation! It seemed to abstract the tedious bits of Three.js into easy-to-use React components, and I can do React for sure. They also provide a lot of additional helper libraries, such as Drei and Ecctrl, to make development a lot easier.

Drei is a collection of ready-made abstractions for React Three Fiber, and includes things like trails, a Billboard component that makes an object always face the camera, and an animated distort material, for example. Ecctrl, on the other hand, allows you to set up a character controller very quickly. I recommend checking out both of these if you are planning to do any React Three Fiber work.

Tip: The easiest way to get started is to set up your project with Vite.
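
For reference, here's roughly what a bare-bones entry point can look like once the Vite React template is set up and @react-three/fiber is installed. The file name and the placeholder scene below are just illustrations, not code from my project:

// src/main.tsx — minimal React Three Fiber starting point (placeholder scene)
import React from "react";
import ReactDOM from "react-dom/client";
import { Canvas } from "@react-three/fiber";

ReactDOM.createRoot(document.getElementById("root")!).render(
  <React.StrictMode>
    <Canvas camera={{ position: [0, 2, 5] }}>
      {/* a light and a simple mesh just to verify everything renders */}
      <ambientLight intensity={0.5} />
      <mesh>
        <boxGeometry args={[1, 1, 1]} />
        <meshStandardMaterial color="#baa8ff" />
      </mesh>
    </Canvas>
  </React.StrictMode>
);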

Other than that, my plan was to do two different scenes: one with a mirror to which the user can give a memory, and one "inside" the mirror where you can see every single memory the mirror holds. I thought this would be a pretty cool concept, and with it in mind, I started experimenting a bit.

I started playing around to see if I could create some elements I'd like to use in the scene. I was mostly obsessed with creating a brazier, because you gotta have braziers! Below is what I came up with. It's very simple, however it has a nice vibe to it. I didn't have time to study how to create a real-looking flame via shaders, so I had to be quite creative here. Basically it is just two stacked Cone geometries with some transparency.

The brazier

It came together nicely in React Three Fiber with the Lathe, Cone, and MeshDistortMaterial components.

import { Vector2 } from "three";
import { Cone, Lathe, MeshDistortMaterial } from "@react-three/drei";

// points for the brazier bowl profile, revolved around the Y axis by Lathe
const lathePoints: Vector2[] = [];

for (let i = 0; i < 11; i++) {
  lathePoints.push(new Vector2(Math.sin(i * 0.2) * 2, (i - 5) * 0.2));
}

return (
  <>
    <Lathe args={[lathePoints]} rotation={[0, 0, 0]} position={[0, 1.2, 0]}>
      <meshLambertMaterial color="#777" />
    </Lathe>

    <pointLight color="#fff" intensity={10} position={[0, 5, 0]} />

    <Cone args={[1.5, 2, undefined, 50]} position={[0, 2.75, 0]}>
      <MeshDistortMaterial
        distort={0.5}
        speed={10}
        color="#baa8ff"
        transparent
        roughness={0.01}
        transmission={4.25}
        ior={1} />
    </Cone>
    <Cone args={[1, 1.5, undefined, 50]} position={[0, 2.25, 0]}>
      <MeshDistortMaterial
        distort={0.75}
        speed={10}
        color="#fff"
        roughness={0.8}
        ior={7} />
    </Cone>
  </>
);

At this point I really had no idea what I was doing, however I felt surprisingly confident that I could do some cool stuff for the hackathon.

Starting the project, and the Mirror scene

When the hackathon kicked off, I thought that creating a 3D model of the mirror would play an important part in this, as I could quickly generate textures for other, simpler objects (e.g. walls, pillars, etc.) via AI. So I fired up Spline and got to work.

After spending several hours on the model, I now just needed to import it into the project and add some materials to it. Before you can use GLTF files in your project, you'll need to tell Vite to include them in the assets like so:

// vite.config.ts
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

// https://vitejs.dev/config/
export default defineConfig({
  assetsInclude: ["**/*.gltf"],
  plugins: [react()],
});

It's a bit of a tedious process to create meshes for the GLTF model by hand, and luckily the Poimandres collective has also created the GLTFJSX library to help in that regard. They even have a website to test it out, which I just ended up using directly. It prints out nicely grouped meshes, which can be altered individually.
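
To give an idea, the component GLTFJSX spits out looks roughly like this. Note that the file path, node names, and material names below are made up for illustration; the real ones come from your model:

// Rough shape of a GLTFJSX-generated component (names are illustrative)
import { useGLTF } from "@react-three/drei";
import mirrorModel from "./assets/mirror.gltf";

export default function Mirror(props: JSX.IntrinsicElements["group"]) {
  const { nodes, materials } = useGLTF(mirrorModel) as any;

  return (
    <group {...props} dispose={null}>
      {/* each mesh in the model becomes its own element you can alter individually */}
      <mesh geometry={nodes.Frame.geometry} material={materials.Stone} />
      <mesh geometry={nodes.Glass.geometry}>
        {/* or swap in your own material per mesh */}
        <meshStandardMaterial color="#444" />
      </mesh>
    </group>
  );
}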

At this point, after adding some initial materials to the model as you can see in the latter commit link, I realized that this was going to look very bad if I couldn't nail the materials perfectly. The alternative route would be creating something with a more old-school vibe, mixing 2D with 3D: basically using sprites in a 3D environment like old games such as Wolfenstein, Duke Nukem, and Hexen did. I've always especially liked the aesthetics of that last game, so I decided to quickly try it out and see if I could make it work.

Here is where DreamStudio by Stability AI, or really any current AI solution, helps out a lot. With just a few prompts, you are able to generate pixel art textures that you can use anywhere. Below are a few examples of what I ended up generating and using.

Pillar textures

Mirror sprites

You of course need to edit the images a bit, since you want to use transparent images when using sprites for objects. For some repeating textures, however, you might get away with just using the generated images directly. I myself used Piskel to remove the backgrounds, and later to create animations.

For example, the braziers needed to be animated, as it would be a bit boring to have them just sit around with a static flame. Creating an animated sprite basically just means having each state (frame) of the animation lined up next to the others in one big, transparent file. Below is how the brazier file looks.

The brazier sprite sheet

Drei was super helpful again, as it comes with a SpriteAnimator component that handles the animation without needing to do it manually. You just give it some props telling it where to start the animation, how many frames there are, and what texture to use.

import { Billboard, SpriteAnimator } from "@react-three/drei";
import brazier from "./assets/brazier.png"; // the sprite sheet shown above (path is illustrative)

// prop shape inferred from usage
type BrazierProps = { position: [number, number, number] };

export default function Brazier({ position }: BrazierProps) {
  return (
    <Billboard position={position}>
      <pointLight intensity={50} position={[0, 1, 0.1]} castShadow />
      <SpriteAnimator
        autoPlay
        startFrame={0}
        loop
        numberOfFrames={5}
        scale={3}
        textureImageURL={brazier}
      />
    </Billboard>
  );
}


The mirror itself is just a Circle component with a double-sided mesh material. The way Circle differs from Sphere is that Circle is two-dimensional while Sphere is three-dimensional. The same goes for Plane and Box, the former of which I used for a bunch of other elements in the scenes. And since I'm working with sprites, I don't want to use textures on 3D objects, as the texture would wrap around the object instead of being displayed as intended in the image.

<Circle args={[5, 100]}>
  <MeshReflectorMaterial
    map={texture}
    mirror={0.1}
    alphaTest={0.1}
    side={DoubleSide}
  />
</Circle>

I didn't wrap it in the earlier mentioned Billboard component because, even though working with sprites is more in the 2D realm, you can get a 3D feel by keeping some 2D objects stationary. For this it's important for the 2D object to have a backside, too. If it didn't, and the camera moved behind the object, it would disappear, because there is nothing to render in that direction of the 3D space: the item is facing forward, and since it is a 2D object, it has no points to draw in the opposite direction. Using the side prop with e.g. DoubleSide as the value renders the given texture on both sides of the 2D object, making it visible from all angles. Note that DoubleSide makes the object look exactly the same from the front and the back. If you want a different-looking backside, you'll need to create a separate 2D object with a backside texture and BackSide as the side prop value, as sketched below.
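
A minimal sketch of that two-sided setup, assuming you have a hypothetical frontTexture and backTexture already loaded (FrontSide and BackSide come from three):

{/* front face and a differently textured back face stacked in the same spot */}
<group>
  <Circle args={[5, 100]}>
    <meshStandardMaterial map={frontTexture} alphaTest={0.1} side={FrontSide} />
  </Circle>
  <Circle args={[5, 100]}>
    <meshStandardMaterial map={backTexture} alphaTest={0.1} side={BackSide} />
  </Circle>
</group>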

Tip: Use the alphaTest prop to make the mesh material transparent! If you don't set it, and you are using a texture with a transparent background, the texture will get a black background instead.

With some initial objects created, I set up my scene a bit further to see how it would look. And it was looking pretty decent!

Initial scene

In the GIF above you can see that I initially used 3D pillars (a Box component with the pillar texture slapped on it), however I later decided that they didn't fit in once I added more elements to the scene.

Next up I wanted to work on the "memories" you give to the mirror: how do they appear, what do they look like, how are they passed to the mirror.

First I started with the styling. I wanted them to look like they were "extracted" from you: blobs that just float around, a bit blurry, with some imagery barely visible inside them. And luckily for us, Drei supports these with some premade components!

There is a Float component that lets you wrap any geometry and make it float: no manual calculations needed! You can adjust the speed, rotation, and floating intensities. Then there is also a MeshTransmissionMaterial component that allows you to create see-through materials. These materials can also warp or distort the objects/imagery behind or inside them based on lighting, etc. This allows you to create some pretty good-looking things! With a lot of trial and error I ended up with something like in the picture below. Don't mind the different-looking scene, we'll get to that in a bit.

Initial blob styles

import { Float, MeshTransmissionMaterial, Sphere, useTexture } from "@react-three/drei";

// prop shape inferred from how Blob is used later on
type BlobProps = {
  imageUrl: string;
  position: [number, number, number];
  visible: boolean;
};

function Blob({ imageUrl, position, visible }: BlobProps) {
  const texture = useTexture(imageUrl);

  return (
    <group visible={visible}>
      <Float speed={5} rotationIntensity={0.05} floatIntensity={1}>
        <group position={position}>
          <Sphere args={[0.33, 48, 48]} castShadow>
            <MeshTransmissionMaterial
              distortionScale={1}
              temporalDistortion={0.1}
              transmission={0.95}
              color={"#fbd9ff"}
              roughness={0}
              thickness={0.2}
              chromaticAberration={0.2}
              anisotropicBlur={0.5}
              distortion={1.2}
            />
          </Sphere>

          <Sphere args={[0.2, 48, 48]}>
            <meshPhysicalMaterial map={texture} roughness={0.1} />
          </Sphere>
        </group>
      </Float>
    </group>
  );
}

So my blobs are basically just two Sphere geometries on top of each other: one using the see-through material, and one with the image texture. The see-through material on top of the image gives it a nice look, as if the memory were actually "living" inside a sphere, and it lets me achieve that "memory-like" feel.

Tip: The layering order of the materials and geometries matters! Play around to see what gets you the best results.

At this point we're just moving our camera with basic controls (mainly zooming and rotating with the mouse), however to make it more immersive, we'll need to add a character. Here we'll be using the previously mentioned Ecctrl library. In addition to installing Ecctrl, it needs Rapier, a physics engine. You'll also need to set up KeyboardControls, which can be imported from Drei.

Setting up keyboard controls is easy: you just add KeyboardControls inside your Canvas component and make it wrap your scene. Then give it a map of keys to use, and you're good to go! Mine looks something like this:


import { Suspense } from "react";
import { Canvas } from "@react-three/fiber";
import { KeyboardControls } from "@react-three/drei";

const keyboardMap = [
  { name: "forward", keys: ["ArrowUp", "w", "W"] },
  { name: "backward", keys: ["ArrowDown", "s", "S"] },
  { name: "leftward", keys: ["ArrowLeft", "a", "A"] },
  { name: "rightward", keys: ["ArrowRight", "d", "D"] },
  { name: "jump", keys: ["Space"] },
  { name: "run", keys: ["Shift"] },
  { name: "crouch", keys: ["c", "C"] },
];

<Canvas>
  <KeyboardControls map={keyboardMap}>
    <ambientLight color={"#fff"} />
    <Suspense fallback={null}>
      <SceneContextProvider>
        <MainScene />
      </SceneContextProvider>
    </Suspense>
  </KeyboardControls>
</Canvas>

Then you'll need to add physics so that your character can move properly in the environment. The Controller requires wrapping some sort of character model for it to work, so a simple Sphere suffices in this case. I set it to be invisible, so you don't see it in any material reflections: you just float around. I also adjusted the movement speed from the defaults a bit, since my scene ended up being pretty big and without sprinting it would take a long while to get to the mirror.

import { CuboidCollider, Physics, RigidBody } from "@react-three/rapier";
import Controller from "ecctrl";

<Physics gravity={[0, -30, 0]}>
  {/* @ts-expect-error the export is slightly broken in TypeScript so just disabling the TS check here */}
  <Controller
    characterInitDir={9.5}
    camInitDir={{ x: 0, y: 9.5, z: 0 }}
    camInitDis={-0.01}
    camMinDis={-0.01}
    camFollowMult={100}
    autoBalance={false}
    camMaxDis={-0.01}
    sprintMult={2}
    maxVelLimit={askForMemories ? 0 : 15}
    jumpVel={askForMemories ? 0 : undefined}>
    <Sphere>
      <meshStandardMaterial transparent opacity={0} />
    </Sphere>
  </Controller>

  {/* other stuff.. */}

  {/* floor */}
  <RigidBody type="fixed" colliders={false}>
    <mesh receiveShadow position={[0, 0, 0]} rotation-x={-Math.PI / 2}>
      <planeGeometry args={[22, 100]} />
      <meshStandardMaterial map={flooring} side={DoubleSide} />
    </mesh>
    <CuboidCollider args={[1000, 2, 50]} position={[0, -2, 0]} />
  </RigidBody>

 {/* other stuff.. */}
</Physics>

Tip: To get the first-person view, you'll need the following props.

<Ecctrl
  camInitDis={-0.01} // camera initial position
  camMinDis={-0.01} // camera zoom in closest position
  camFollowMult={100} // give any big number here, so the camera follows the character instantly
  autoBalance={false} // turn off auto balance since it's not useful for the first-person view

The cool thing about these physics and colliders is that you can use them as sensors, too. Say you want to trigger some event when the player enters a specific area. You can just define a RigidBody element with a geometry and a collider (e.g. CuboidCollider) which you mark as a sensor. Then you pass a function to that collider's onIntersectionEnter prop, which will be triggered once the player is inside the collider.

For example, in my case I wanted to make the player unable to move and to focus an input field, so that they can just type without moving the character. I ended up with this simple thing:

<RigidBody
  type={"fixed"}
  position={[0, -3.5, -85.5]}
  rotation={[-Math.PI / 2, 0, 0]}
>
  <Plane args={[4, 3]}>
    <meshStandardMaterial transparent opacity={0} />
  </Plane>

  <CuboidCollider
    args={[2, 2, 1]}
    sensor
    onIntersectionEnter={() => setAskForMemories(true)}
  />
</RigidBody>

So whenever the player enters that 2x2x1 collider, it will update the state and freeze the character.

Tip: If you don't set type={"fixed"} for the RigidBody, the collider will collide with other physics and cause some weird behavior on mount.

For example, my collider sits slightly inside the mirror stand, and it would just fly high up into the sky when the scene loaded. Setting it as fixed keeps it static in its place.

Next up was generating images via an Edge Function (I wanted this to work without a login, and without everyone being able to insert stuff into the database with the anon key), and then displaying the generated images as blobs in realtime. Here I of course used the wonderful Supabase Realtime feature. Below is an example of the wrapping component that displays the blobs as they appear.

import { useEffect, useState } from "react";
import { supabase } from "./supabaseClient"; // your initialized Supabase client (path is illustrative)

export default function MemoryBlobs() {
  const [memories, setMemories] = useState([]);

  useEffect(() => {
    const channel = supabase
      .channel("memories")
      .on(
        "postgres_changes",
        {
          event: "INSERT",
          schema: "public",
          table: "memories",
          filter: `player_id=eq.${localStorage.getItem("uuid")}`,
        },
        (payload) => {
          setMemories((prev) => prev.concat([payload.new]));
        }
      )
      .subscribe();

    return () => {
      channel.unsubscribe();
    };
  }, []);

  return memories.map((memory: { id: number; image: string }) => (
    <Blob
      key={memory.id}
      position={[0, -2, -84]}
      imageUrl={memory.image}
      visible
    />
  ));
}

Very simple setup, as you can see. I won't show the whole Edge Function code here as it's a bit long, however you can check the source on GitHub to see how these images are generated. In short: we take user input, pass it to OpenAI, pass its response to the Stable Diffusion API, upload the images generated by SD to Supabase Storage, and then insert them into the database. The database insert then triggers the realtime updates, and the images appear as blobs on the screen.
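
For reference, the overall shape of such a function is roughly the following. This is only a sketch, not my actual implementation: the bucket name, the memory_group_id column, and the prompt/image helpers are placeholders, and the OpenAI/Stable Diffusion calls are elided (see the repo for the real code).

// supabase/functions/generate-memories/index.ts — rough outline only
import { createClient } from "npm:@supabase/supabase-js@2";

// Hypothetical helpers standing in for the actual OpenAI / Stable Diffusion calls
declare function generatePrompts(input: string): Promise<string[]>;
declare function generateImage(prompt: string): Promise<Blob>;

Deno.serve(async (req) => {
  const { input, playerId } = await req.json();

  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  );

  // 1. Create the memory group; the database picks a free random position (see the SQL later on)
  const { data: memoryGroupId } = await supabase.rpc("insert_memory_group", {
    memory: input,
    player_id: playerId,
  });

  // 2. Expand the user's input into image prompts via OpenAI, then generate the images
  const prompts = await generatePrompts(input);
  const images = await Promise.all(prompts.map(generateImage));

  // 3. Upload the images to Storage and insert rows — the client's realtime
  //    subscription picks up the INSERTs and renders the blobs
  for (const image of images) {
    const path = `${playerId}/${crypto.randomUUID()}.png`;
    await supabase.storage.from("memories").upload(path, image, { contentType: "image/png" });
    const { data } = supabase.storage.from("memories").getPublicUrl(path);

    await supabase.from("memories").insert({
      image: data.publicUrl,
      player_id: playerId,
      memory_group_id: memoryGroupId,
    });
  }

  return new Response(JSON.stringify({ memoryGroupId }), {
    headers: { "Content-Type": "application/json" },
  });
});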

If you take a look at the Edge Function code, you'll notice that I'm using the Fetch API to call OpenAI instead of the OpenAI SDK. This is for a reason: the SDK does not seem to work in these Edge Functions. It all works locally, however when you deploy the function to production and invoke it, it crashes with a FilesAPI undefined error. I'm not sure if my setup is a bit outdated, or if this is something that can be fixed by Supabase (or Deno?).
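
For what it's worth, calling the chat completions endpoint with plain fetch is simple enough. A rough sketch (the model and message content here are placeholders, not what my function actually sends):

// Plain fetch against OpenAI instead of the SDK (sketch)
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: input }],
  }),
});

const completion = await response.json();
const reply: string = completion.choices[0].message.content;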

In order to take in user input, we of course need an input field. It would be a task of its own to build something like that in WebGL, so it's easier to throw HTML into the mix. This can be done easily with the Html component from Drei. With it, I could add an HTML form element to my scene and start taking in user input to generate images for the memories.

<Html center position={[0, -1.95, -88]}>
  <form
    onSubmit={async (e) => {
      e.preventDefault();

      setGenerating(true);

      const data = new FormData(e.target);
      const input = data.get("memory");

      const { data: responseData } = await supabase.functions.invoke("generate-memories",
        {
          body: {
            input,
            playerId: localStorage.getItem("uuid"),
          },
        }
      );

      setGenerated(true);

      if (responseData) {
        localStorage.setItem(
          "memoryGroupId",
          JSON.stringify(responseData.memoryGroupId)
        );

        setTimeout(() => {
          setTransitionToVoid(true);
        }, 8000);
      }
    }}
  >
    <input
      className="memory-input"
      name="memory"
      style={{
        display: !askForMemories || generating ? "none" : undefined,
      }}
      ref={inputRef}
      type="text"
      placeholder="Think of a memory..."
    />
  </form>
</Html>

With all this functionality, we can move to the Void scene.

Enter the Void

So in this scene I wanted the user to be in a zero-gravity environment and see these little blobs of memories floating around. When you click one, you zoom in to see the name of the memory and its date. In this view you also have access to the generated images.

For the zooming part, I needed to create a separate camera controller that would allow me to animate things smoothly, and after googling for a while how to do this, I found a nice library called camera-controls. You hook it into React Three Fiber's useFrame hook and update the camera position and where it's looking based on some given coordinates. You can see the implementation in the Controls component.

It's hooked into a context, which stores the current camera position and look-at values from the blob click event, and it does a nice transition whenever it needs to update from the previous position. The component also contains keyboard event handling to get that zero-gravity feel when moving the camera.
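
The core of it looks roughly like this. It's a sketch rather than my exact component: here I'm assuming the target position and look-at point come in as props instead of from the context the real implementation uses.

import { useEffect, useMemo } from "react";
import { useFrame, useThree } from "@react-three/fiber";
import * as THREE from "three";
import CameraControls from "camera-controls";

CameraControls.install({ THREE });

// Sketch of a camera controller that eases towards a new position/target
function Controls({ camPos, lookAt }: { camPos: THREE.Vector3; lookAt: THREE.Vector3 }) {
  const camera = useThree((state) => state.camera);
  const gl = useThree((state) => state.gl);
  const controls = useMemo(
    () => new CameraControls(camera as THREE.PerspectiveCamera, gl.domElement),
    [camera, gl]
  );

  useEffect(() => {
    // setLookAt with enableTransition=true animates smoothly to the new view
    controls.setLookAt(camPos.x, camPos.y, camPos.z, lookAt.x, lookAt.y, lookAt.z, true);
  }, [controls, camPos, lookAt]);

  // drive the damping/transition every frame
  useFrame((_, delta) => controls.update(delta));

  return null;
}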

You'd update the camera by sending new three-dimensional vector values to the context from some component, like in MemoryGroups for example.

setCamPos(new Vector3(position[0], position[1], position[2] + 20));
setLookAt(new Vector3(position[0], position[1], position[2]));

Positioning the groups of memories came next, and it turned out to be one of the coolest things about this project in my opinion. If you've tried the app, you'll notice that the group spheres are spread out quite evenly in the space and don't really overlap each other. This is thanks to the PostGIS extension that comes pre-installed on the database.

Even before realizing that I could use PostGIS for the locations, I wanted the groups to appear in random locations without overlapping each other. My initial idea was to store XYZ coordinates in their own database columns and check in my code whether they overlapped with any of the existing rows within a given range. Doable? Sure. Reasonable? Maybe not. And here is where I realized that since I'm actually working with coordinates, I can use PostGIS directly to handle them as spatial data.

PostGIS comes with a built-in function to check if some coordinates intersect each other, which made this whole thing a lot simpler: I could just let the database handle everything! The only thing I needed to do was send in the given text for the memory and the player ID, and the group would automatically get assigned to a random place in the 3D environment. Of course, since this was a hackathon and it was my first time using PostGIS, I actually asked the AI Assistant to generate a database function for me! It didn't work straight out of the box, however it was 99.9% there. Very cool and impressive, so kudos to the Supabase team for this feature.

  const { data: memoryGroupId } = await supabaseClient.rpc(
    "insert_memory_group",
    { memory: input, player_id: playerId }
  );
CREATE OR REPLACE FUNCTION public.insert_memory_group(memory text, player_id uuid)
 RETURNS bigint
 LANGUAGE plpgsql
AS $function$
DECLARE
  random_coordinates geometry;
  id bigint;
BEGIN

  LOOP
    -- Generate random coordinates
    random_coordinates := ST_MakePoint(
      random() * 180 - 90,
      random() * 180 - 90,
      random() * 180 - 90
    );

    -- Check for intersecting geometries
    IF NOT EXISTS (
      SELECT 1
      FROM memory_groups
      WHERE ST_Intersects(position, random_coordinates)
    ) THEN
      -- Insert and return data
      INSERT INTO memory_groups (memory, position, player_id)
      VALUES (memory, random_coordinates, player_id)
      RETURNING memory_groups.id into id;

      RETURN id;
    END IF;
  END LOOP;

END;
$function$
;

Tip: Use Supabase's AI Assistant, it's amazing, and will only get better the more you use it.

However, while these coordinates (or geometries) are now stored properly, you cannot really use them as-is in the code. This is because the stored format isn't a regular float per axis: it's a mix of numbers and letters, for example 01010000A0E6100000404C76755AFF35C03C48167A39544940DCF3805DD9663240. So in order to use these points in the app, we need to convert them to floats. Here I used another database function to do the conversion:

CREATE OR REPLACE FUNCTION public.memory_groups_with_position()
 RETURNS TABLE(id integer, memory text, created_at date, x double precision, y double precision, z double precision)
 LANGUAGE sql
AS $function$
   select id, memory, created_at, st_x(position::geometry) as x, st_y(position::geometry) as y, st_z(position::geometry) as z from public.memory_groups;
$function$
;

In the code, when the scene loads, I just fetch the groups with an RPC call via Supabase. I also hooked it up to the realtime feature, so the scene automatically updates with the latest added memory group if you happen to be there when someone else gives the mirror a memory.

  useEffect(() => {
    async function fetchMemoryGroups() {
      const { data } = await supabase
        .rpc("memory_groups_with_position")
        .limit(1000);

      if (data) {
        setMemoryGroups((prev) => prev.concat(data));
      }
    }

    fetchMemoryGroups();
  }, []);

  useEffect(() => {
    const channel = supabase
      .channel("memory_groups")
      .on(
        "postgres_changes",
        { event: "INSERT", schema: "public", table: "memory_groups" },
        async (payload) => {
          const { data } = await supabase
            .rpc("memory_groups_with_position")
            .eq("id", payload.new.id)
            .single();

          setMemoryGroups((prev) => prev.concat([data]));
        }
      )
      .subscribe();

    return () => {
      channel.unsubscribe();
    };
  }, []);

You'll notice that I do another fetch for the newly added memory_group after getting notified by the INSERT event in the database. This is because of what I mentioned earlier: since the group position is a geometry, I can't use it directly to position the group in 3D space. Instead, I use the RPC call to fetch the newly added group, which works perfectly in this case.

After all this, with a bunch of little tweaks here and there, adding "transitions" between the scenes, adjusting functionality, generating music with an AI, and desperately wondering at 7 am why the Edge Function wasn't working after being up all night, the end result turned out something like what you see in the demo video. It shows a slightly more optimal experience since it's running locally, however I'm really happy with how it turned out. I had a vision and managed to complete it to my liking, which is always amazing.

I've left out quite a lot of details since there is a lot of code and a lot going on, so make sure to check out the GitHub repo for all the missing parts.

If you got this far, thanks for reading! Feel free to add comments if you have anything in mind that you wanna say.
