
Rainer Selvet 👨‍💻


Serverless 3D WebGL rendering with ThreeJS

3D model of a helmet

The image above was rendered in a serverless function on page load (not kidding, check the image source) 🤓


This post originally appeared on https://www.rainer.im/blog/serverless-3d-rendering.


3D rendering is a high-cost task, often taking a long time to compute on GPU-accelerated servers.

Browsers are becoming more capable. The web is more powerful than ever. And serverless is the fastest-growing cloud service model. There must be a way to take advantage of these technologies for rendering 3D content for cheap at scale.

Here's the idea:

  • Create a React app and display a 3D model using react-three-fiber
  • Create a serverless function which runs a headless browser displaying WebGL content
  • Wait for WebGL content to load and return the rendered image

We'll be using NextJS for this.

The final project is on GitHub.


3D viewer

Let's start by creating a new NextJS application. We'll bootstrap the project from the NextJS TypeScript starter.

npx create-next-app --ts
# or
yarn create next-app --typescript

Running npm run dev should present you with the "Welcome to NextJS" page. Cool.

Let's create the page that's going to display a 3D model.

touch pages/index.tsx
// pages/index.tsx

export default function ViewerPage() {
  return <></>;
}

To keep things simple, we'll be using React Three Fiber and Drei, a collection of helpers and abstractions around React Three Fiber.

Let's install both dependencies:

npm install three @react-three/fiber
npm install @react-three/drei

Let's set up the 3D viewer. We'll use the Stage component to get a nice rendering environment.

// pages/index.tsx

import { Canvas } from "@react-three/fiber";
import { Stage } from "@react-three/drei";
import { Suspense } from "react";

export default function ViewerPage() {
  return (
    <Canvas
      gl={{ preserveDrawingBuffer: true, antialias: true, alpha: true }}
      shadows
    >
      <Suspense fallback={null}>
        <Stage
          contactShadow
          shadows
          adjustCamera
          intensity={1}
          environment="city"
          preset="rembrandt"
        ></Stage>
      </Suspense>
    </Canvas>
  );
}

Now, we'll need to load a 3D model. We'll be loading a glTF asset, a transmission format that's evolving into the "JPG of 3D assets". More on that in future posts!

Let's create a component to load any glTF asset:

mkdir components
touch components/gltf-model.tsx

We'll also traverse the glTF scene graph to enable shadow casting on its meshes:

// components/gltf-model.tsx

import { useGLTF } from "@react-three/drei";
import { useLayoutEffect } from "react";

interface GLTFModelProps {
  model: string;
  shadows: boolean;
}

export default function GLTFModel(props: GLTFModelProps) {
  const gltf = useGLTF(props.model);

  useLayoutEffect(() => {
    gltf.scene.traverse((obj: any) => {
      if (obj.isMesh) {
        obj.castShadow = obj.receiveShadow = props.shadows;
        obj.material.envMapIntensity = 0.8;
      }
    });
  }, [gltf.scene, props.shadows]);

  return <primitive object={gltf.scene} />;
}

We'll be using a glTF asset downloaded from the KhronosGroup glTF sample models repository.

Let's add the GLB (binary version of glTF) to the /public directory. You could also pass a GLB hosted elsewhere to the useGLTF hook.
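As a quick sketch, pointing the component at a remotely hosted asset would look like this (the URL below is a hypothetical placeholder):

// Hypothetical: any publicly reachable GLB URL works with the useGLTF hook
<GLTFModel model={"https://example.com/assets/DamagedHelmet.glb"} shadows={true} />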

You might need to run npm i @types/three for the type checks to pass.

Let's add the GLTFModel to our viewer page:

// pages/index.tsx

import { Canvas } from "@react-three/fiber";
import { Stage } from "@react-three/drei";
import { Suspense } from "react";
import GLTFModel from "../components/gltf-model";

export default function ViewerPage() {
  return (
    <Canvas
      gl={{ preserveDrawingBuffer: true, antialias: true, alpha: true }}
      shadows
    >
      <Suspense fallback={null}>
        <Stage
          contactShadow
          shadows
          adjustCamera
          intensity={1}
          environment="city"
          preset="rembrandt"
        >
          <GLTFModel model={"/DamagedHelmet.glb"} shadows={true} />
        </Stage>
      </Suspense>
    </Canvas>
  );
}

Update the styles/globals.css to set the canvas to screen height:

/* styles/globals.css */

html,
body {
  padding: 0;
  margin: 0;
  font-family: -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Oxygen, Ubuntu,
    Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif;
}

a {
  color: inherit;
  text-decoration: none;
}

* {
  box-sizing: border-box;
}

canvas {
  height: 100vh;
}

With that in place, you should now see the 3D model rendered on http://localhost:3000/

Helmet 3D model

Serverless rendering

Let's leverage the client-side 3D viewer and provide access to 2D rendering through an API.

To keep things simple, the API will take any 3D model URL as input and return an image of that 3D model as the response.

API

GET: /api/render?model={URL}

Response: image/png
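Here's a sketch of how a client could consume the endpoint once it exists (assuming the GLB we'll place in /public):

// Hypothetical client-side usage: the endpoint responds with a PNG
const res = await fetch(`/api/render?model=${encodeURIComponent("/DamagedHelmet.glb")}`);
const image = await res.blob(); // Blob with type image/png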

Create the API route

mkdir api
touch api/render.ts

⚠️ Note that we're creating a new api directory and not using the existing pages/api. This is to avoid functions sharing resources and exceeding the serverless function size limit on Vercel (where we'll be deploying the app). More info in the Vercel documentation.

⚠️ Also, in order for serverless functions to be picked up from the root directory, you'll need to run vercel dev locally to test the API route (as opposed to npm run dev).

Let's set up the initial function:

// api/render.ts

import type { NextApiRequest, NextApiResponse } from "next";

export default (req: NextApiRequest, res: NextApiResponse) => {
  res.status(200).json({ name: "Hello World" });
};

With this, you already have an API route live on http://localhost:3000/api/render.

Behind the scenes, the rendering is going to happen in an AWS Lambda function. Hence we need a custom-built Chromium version to run the headless browser.

Let's install the dependencies:

npm i chrome-aws-lambda
npm i puppeteer

Let's finalize our render function:

// api/render.ts

import type { NextApiRequest, NextApiResponse } from 'next'
const chrome = require('chrome-aws-lambda')
const puppeteer = require('puppeteer')

const getAbsoluteURL = (path: string) => {
  if (process.env.NODE_ENV === 'development') {
    return `http://localhost:3000${path}`
  }
  return `https://${process.env.VERCEL_URL}${path}`
}

export default async (req: NextApiRequest, res: NextApiResponse) => {
  let {
    query: { model }
  } = req

  if (!model) return res.status(400).end(`No model provided`)

  let browser

  if (process.env.NODE_ENV === 'production') {
    browser = await puppeteer.launch({
      args: chrome.args,
      defaultViewport: chrome.defaultViewport,
      executablePath: await chrome.executablePath,
      headless: chrome.headless,
      ignoreHTTPSErrors: true
    })
  } else {
    browser = await puppeteer.launch({
      headless: true
    })
  }

  const page = await browser.newPage()
  await page.setViewport({ width: 512, height: 512 })
  await page.goto(getAbsoluteURL(`?model=${model}`))
  await page.waitForFunction('window.status === "ready"')

  const data = await page.screenshot({
    type: 'png'
  })

  await browser.close()
  // Set s-maxage so the rendered image gets cached on the Vercel edge
  res.setHeader('Cache-Control', 's-maxage=10, stale-while-revalidate')
  res.setHeader('Content-Type', 'image/png')
  // Write the image to the response with the specified Content-Type
  res.end(data)
}

Here's what happens in the function:

  • Launch a Lambda-optimized version of Chrome in the serverless environment, or plain Puppeteer when developing locally
  • Navigate to a URL displaying the 3D model passed in the query parameter
  • Wait for the 3D model to finish rendering
  • Cache the image result
  • Return the image

Notice the line await page.waitForFunction('window.status === "ready"').

This call waits until rendering is complete. For this to work, we'll need to update our viewer page and add an onLoaded callback to the GLTFModel component. We'll also use the router to pass a model query parameter to the GLTFModel component:

// pages/index.tsx

import { Canvas } from '@react-three/fiber'
import { Stage } from '@react-three/drei'
import { Suspense } from 'react'
import GLTFModel from '../components/gltf-model'
import { useRouter } from 'next/router'

const handleOnLoaded = () => {
  console.log('Model loaded')
  window.status = 'ready'
}

export default function ViewerPage() {
  const router = useRouter()
  const { model } = router.query
  if (!model) return <>No model provided</>

  return (
    <Canvas gl={{ preserveDrawingBuffer: true, antialias: true, alpha: true }} camera={{ fov: 35 }} shadows>
      <Suspense fallback={null}>
        <Stage contactShadow shadows adjustCamera intensity={1} environment="city" preset="rembrandt">
          <GLTFModel model={model as string} shadows={true} onLoaded={handleOnLoaded} />
        </Stage>
      </Suspense>
    </Canvas>
  )
}

Also, we'll need to update our gltf-model.tsx component with a useEffect hook:

// components/gltf-model.tsx

import { useGLTF } from "@react-three/drei";
import { useLayoutEffect, useEffect } from "react";

interface GLTFModelProps {
  model: string;
  shadows: boolean;
  onLoaded: () => void;
}

export default function GLTFModel(props: GLTFModelProps) {
  const gltf = useGLTF(props.model);

  useLayoutEffect(() => {
    gltf.scene.traverse((obj: any) => {
      if (obj.isMesh) {
        obj.castShadow = obj.receiveShadow = props.shadows;
        obj.material.envMapIntensity = 0.8;
      }
    });
  }, [gltf.scene, props.shadows]);

  useEffect(() => {
    // useGLTF suspends, so by the time this effect runs the model has loaded
    props.onLoaded();
  }, []);

  return <primitive object={gltf.scene} />;
}

Test drive

Let's see if our API is functional.

http://localhost:3000/api/render?model=/DamagedHelmet.glb

Boom 💥 server-side rendered glTF model:

Server side rendered helmet
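Since the endpoint returns a plain PNG, it can be embedded anywhere an image is accepted. That's exactly how the cover image of this post works. A minimal sketch:

// The API responds with image/png, so it works as a regular image source
<img src="/api/render?model=/DamagedHelmet.glb" alt="3D model of a helmet" />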

Rendering this 3D model takes ~5 seconds. Once deployed to a CDN, the image is served in ~50ms after the initial request. Later requests trigger revalidation (re-rendering in the background).

⚡ Caching ⚡

We're taking advantage of the stale-while-revalidate header by setting it in our serverless function.

This way we can serve a resource from the CDN cache while updating the cache in the background. It's useful for cases where content changes frequently but takes a significant amount of time to generate (like rendering!).

We set s-maxage to 10 seconds. If a request gets repeated within 10 seconds, the previous image is considered fresh and a cache HIT is served.

If the request is repeated 10+ seconds later, the image is still immediately served from the cache. In the background, a revalidation request is triggered and an updated image is served for the next request.
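To make the two parts of the header explicit, here's the same line from api/render.ts, annotated:

// s-maxage=10: the edge cache treats the image as fresh for 10 seconds
// stale-while-revalidate: after that, the stale image is served while a re-render runs in the background
res.setHeader('Cache-Control', 's-maxage=10, stale-while-revalidate')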

Deployment

In this example we're deploying the service to Vercel by running vercel using their CLI.

⚡ Boost the performance of the function ⚡

You can improve the performance of the function by configuring more memory available for it. Boosting the memory upgrades the CPU and network performance of the underlying AWS Lambdas.

Here's how to configure the Lambda with 3× the memory of the default configuration:

touch vercel.json

{
  "functions": {
    "api/render.ts": {
      "maxDuration": 30,
      "memory": 3008
    }
  }
}

The final project and functioning API can be found on GitHub.

Thanks for reading!


