Please Do Not Read: Augmented Reality with Expo

Welcome back, coders! We are back with another informational blog on cool technologies I have been working with. Like I said last week, toy problem time has been postponed indefinitely, so instead we will have tech time. This week's tech is another Expo feature: augmented reality. Augmented reality has been on my bucket list for a while because it is so cool and looks impressive. Today, I will show you how to implement a very basic augmented reality screen using Expo. Just a warning: this will only work on iOS devices. Let's get started!

First and foremost, we need to create an Expo project. If you have not checked out my blog from last week, I highly recommend following the instructions there on setting up an Expo project. Here is a link to the blog:

https://dev.to/chryen/please-do-not-read-expo-5fbh
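
If you just want the short version, creating a project boils down to something like this (assuming you already have Node installed; my-ar-app is just a placeholder name):

$ npm install -g expo-cli
$ expo init my-ar-app
$ cd my-ar-app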

After setting up an Expo project, we can start with the installation process. We will need to install a few dependencies for Expo's augmented reality. For a basic augmented reality application, all we need to install are these dependencies:

$ npm install three expo-three expo-three-ar expo-graphics expo-gl expo-permissions

After successfully installing these dependencies, we can start messing around with App.js. Let's go ahead and import these libraries into App.js; we can even cut some of the starter template code. This is what our imports should look like:

import React, { useEffect } from 'react';
import { AR } from 'expo';
import * as Permissions from 'expo-permissions';
import { Renderer, THREE } from 'expo-three';
import { GraphicsView } from 'expo-graphics';
import { BackgroundTexture, Camera } from 'expo-three-ar';

Before we actually get into setting up the augmented reality scene, we must first ask for permission to use the camera, which expo-permissions handles conveniently. We can implement this by adding it within a useEffect:

export default function App() {
  // Ask for permission to use camera
  useEffect(() => {
    (async () => {
      const { status } = await Permissions.askAsync(Permissions.CAMERA);
      if (status !== 'granted') {
        console.log('no access to camera');
      }
      // Turn off extra warnings
      THREE.suppressExpoWarnings(true);
    })();
  }, []);
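
Logging to the console is enough to follow along, but if you want to surface the denial to the user, you could track the result in state and render a fallback instead of the AR view. A rough sketch (the hasPermission state and the Text fallback are my own additions, not part of the walkthrough):

import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';
import * as Permissions from 'expo-permissions';

export default function App() {
  const [hasPermission, setHasPermission] = useState(null);

  useEffect(() => {
    (async () => {
      const { status } = await Permissions.askAsync(Permissions.CAMERA);
      // Remember the result so we can render accordingly
      setHasPermission(status === 'granted');
    })();
  }, []);

  // Show a simple message instead of the AR view when access was denied
  if (hasPermission === false) {
    return <Text>No access to camera</Text>;
  }

  // ...the rest of App.js continues as below
}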

We can now create our augmented reality scene. Everything happens within an onContextCreate function that sets up the renderer, the scene, the lighting, and even the 3D model. Keep in mind that ARKit works in meters, so the 0.1 cube below is about 10 cm across and sits 0.4 m in front of the camera. I will go ahead and share the code with comments for you to follow along:

  // These are shared by onContextCreate, onResize, and onRender below,
  // so declare them where all three functions can reach them
  let renderer, scene, camera;

  // Create augmented reality scene with 3D model and camera
  const onContextCreate = async ({ gl, scale: pixelRatio, width, height }) => {
    // Tell ARKit to detect horizontal surfaces
    AR.setPlaneDetection(AR.PlaneDetectionTypes.Horizontal);
    renderer = new Renderer({ gl, pixelRatio, width, height });
    renderer.gammaInput = true;
    renderer.gammaOutput = true;
    renderer.shadowMap.enabled = true;

    // Create scene to capture model and background
    scene = new THREE.Scene();
    scene.background = new BackgroundTexture(renderer);
    camera = new Camera(width, height, 0.01, 1000);

    // Create lighting for 3D model
    scene.add(new THREE.AmbientLight(0x404040));
    let light = new THREE.DirectionalLight(0xffffff, 0.5);
    light.position.set(3, 3, 3);
    scene.add(light);

    // Create simple cube 3D model
    const geometry = new THREE.BoxGeometry(0.1, 0.1, 0.1);
    const material = new THREE.MeshPhongMaterial({
      color: 0xff00ff,
    });
    let cube = new THREE.Mesh(geometry, material);
    cube.position.z = -0.4;
    scene.add(cube);
  }
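
One thing to note: renderer.shadowMap.enabled = true on its own will not produce visible shadows; the light and the cube also need their shadow flags set, and something has to receive the shadow. A minimal sketch of what that could look like, placed inside onContextCreate right after the cube (the ground plane here is my own addition, not part of the walkthrough):

    // Opt the light and the cube into the shadow pass
    light.castShadow = true;
    cube.castShadow = true;

    // Add a flat, mostly transparent plane below the cube to catch the shadow
    const ground = new THREE.Mesh(
      new THREE.PlaneGeometry(1, 1),
      new THREE.ShadowMaterial({ opacity: 0.5 })
    );
    ground.rotation.x = -Math.PI / 2; // lay the plane horizontally
    ground.position.y = -0.1;
    ground.receiveShadow = true;
    scene.add(ground);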

After setting up the scene, we can implement an onResize function that adjusts the renderer's size and the camera's projection whenever the view changes:

// When the phone rotates, or the view changes size, this method will be called.
  const onResize = ({ x, y, scale, width, height }) => {
    if (!renderer) {
      return;
    }
    camera.aspect = width / height;
    camera.updateProjectionMatrix();
    renderer.setPixelRatio(scale);
    renderer.setSize(width, height);
  };

Finally, we need an onRender function that draws the scene with the camera from onContextCreate on every frame:

// Called every frame.
  const onRender = () => {
    renderer.render(scene, camera);
  };
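
If you want to verify the render loop is actually running, a quick trick is to spin the cube each frame. This sketch assumes you hoist cube up next to renderer, scene, and camera; the rotation speeds are arbitrary, and delta (the seconds elapsed since the last frame, which GraphicsView passes to onRender) keeps the motion smooth:

  // Called every frame with the time elapsed since the last frame
  const onRender = (delta) => {
    // Spin the cube so it is obvious the loop is alive
    cube.rotation.x += 1 * delta;
    cube.rotation.y += 1.5 * delta;
    renderer.render(scene, camera);
  };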

After that, go ahead and implement your return statement, where you will render a GraphicsView that wires up the three functions above along with a few AR-specific props. This will close off the entirety of App.js:

  return (
    <GraphicsView
      style={{ flex: 1 }}
      onContextCreate={onContextCreate}
      onRender={onRender}
      onResize={onResize}
      isArEnabled
      isArRunningStateEnabled
      isArCameraStateEnabled
      arTrackingConfiguration={'ARWorldTrackingConfiguration'}
    />
  );
} // end of App.js

Go ahead and npm start your application. Get your iPhone out, and if all goes well, it should look a little something like this (sorry, I can only give a link because it's an mp4):

https://i.gyazo.com/0fd63518e294cfa1fd088bae2463c504.mp4

Congrats! You have just implemented a very basic augmented reality application! If you have any more questions or need clarification, feel free to ask or check out the Expo docs. Thanks for reading, and I'll see you all next week!
