Controlling YouTube 360 videos handsfree with Handsfree.js

If you've ever tried watching a 360 video on YouTube on your desktop with a mouse or trackpad, you've probably been frustrated by how awkward it is to control the camera. That's because these videos are really meant to be viewed through a VR headset, with the aid of your phone's gyroscopic sensors.

In this tutorial we'll explore a handsfree alternative that lets you control the POV with your head, using the rotation properties exposed by the Weboji model through Handsfree.js. If you haven't already, check out my introduction to Handsfree.js to help you get started quickly.

Setting up the YouTube IFrame API

Fortunately, YouTube doesn't require an API key to get started. Simply add the Handsfree.js dependencies along with the YouTube IFrame API library:

    <!-- Handsfree dependencies -->
    <link rel="stylesheet" href="https://unpkg.com/handsfree@6.1.4/dist/handsfreejs/handsfree.css" />
    <script src="https://unpkg.com/handsfree@6.1.4/dist/handsfreejs/handsfree.js"></script>

    <!-- YouTube dependencies, let's defer it so that our other code runs first -->
    <script defer src="https://www.youtube.com/iframe_api"></script>

The YouTube API will look for a global onYouTubeIframeAPIReady function once it's loaded, so let's add our YouTube video logic there. We'll also add a div#player element for the API to transform into a video:

    <!-- We'll transform this element -->
    <div id="player"></div>

    <script>
    let YTPlayer

    function onYouTubeIframeAPIReady () {
      // Instantiate a new Player, selecting '#player' as the element to turn into a video
      YTPlayer = new YT.Player('player', {
        // I picked this video ID because it has no ads
        // @see https://www.youtube.com/watch?v=Crv1hbk9HuM&list=PLxf-CDjxvNVoxF27Pj3VV5pIy4SsqPNcI&index=5&t=0s
        videoId: 'Crv1hbk9HuM',
        // Let's autoplay it
        playerVars: { autoplay: 1 }
      })
    }
    </script>

And that's it! If you run the above, you should see a 360 video playing automatically.
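One caveat: many browsers block autoplay with sound, so if the video doesn't start on its own, muting the player when it's ready usually satisfies the autoplay policy. Here's a small, optional tweak (onReady and mute() are part of the standard IFrame API, but whether you need this depends on your browser):

    YTPlayer = new YT.Player('player', {
      videoId: 'Crv1hbk9HuM',
      playerVars: { autoplay: 1 },
      events: {
        // Muted videos are generally allowed to autoplay
        onReady: (event) => event.target.mute()
      }
    })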

Adding Handsfree controls

The next step is to add head controls with Handsfree.js. First we'll instantiate Handsfree, then we'll create a plugin called youtube360 that simply maps the YouTube camera to the user's head pose:

    // Create one Handsfree instance (we'll start tracking with a button below)
    const handsfree = new Handsfree()

    // Create a plugin called "youtube360" that runs on every webcam frame for all instances
    Handsfree.use('youtube360', ({head}) => {
      // Exit if the YouTube API is still loading or the player element isn't in the DOM yet
      if (!YTPlayer || !document.contains(YTPlayer.a)) return

      // Map the camera's POV to the user's head pose
      // - Because Handsfree returns radians, we'll convert them to degrees
      // - We then multiply by some number so that you don't have to literally tilt your head 45 degrees πŸ˜…
      YTPlayer.getSphericalProperties && YTPlayer.setSphericalProperties({
        pitch: ((-head.rotation[0] * 180) / Math.PI) * 8 + 90,
        yaw: ((-head.rotation[1] * 180) / Math.PI) * 10,
        roll: ((head.rotation[2] * 180) / Math.PI) * 2
      })
    })

Finally, let's add a start button. It's good etiquette to always ask the user before starting their webcam!

    <button onclick="handsfree.start()">Start head tracking</button>
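If you'd also like to let people turn tracking off again, a toggle works too. This is just a sketch, and it assumes the instance exposes a stop() method alongside start() (check the Handsfree.js docs for your version):

    <button id="toggle" onclick="toggleTracking()">Start head tracking</button>

    <script>
    let tracking = false

    // Flip between starting and stopping webcam tracking
    function toggleTracking () {
      tracking = !tracking
      tracking ? handsfree.start() : handsfree.stop()
      document.getElementById('toggle').textContent = tracking ? 'Stop head tracking' : 'Start head tracking'
    }
    </script>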

And that's all there is to it!

You'll notice that the camera is quite jittery; visit this tutorial's Glitch to see how I tween the values to make the motion smoother. One day I plan to tween the values for you automatically!
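The basic idea looks something like this: instead of jumping straight to each new value, ease (lerp) toward it a little every frame. This is only a sketch of the approach, not the Glitch code itself; the lerp helper and the 0.15 factor are illustrative, and it's meant to replace the plugin above rather than run alongside it:

    // Keep the last smoothed values between frames
    const smoothed = { pitch: 90, yaw: 0, roll: 0 }

    // Move `current` a fraction of the way toward `target` each frame
    const lerp = (current, target, amount = 0.15) => current + (target - current) * amount

    Handsfree.use('youtube360Smooth', ({head}) => {
      if (!YTPlayer || !YTPlayer.getSphericalProperties) return

      // The same radians-to-degrees mapping as before
      const target = {
        pitch: ((-head.rotation[0] * 180) / Math.PI) * 8 + 90,
        yaw: ((-head.rotation[1] * 180) / Math.PI) * 10,
        roll: ((head.rotation[2] * 180) / Math.PI) * 2
      }

      // Ease toward the targets instead of snapping to them
      smoothed.pitch = lerp(smoothed.pitch, target.pitch)
      smoothed.yaw = lerp(smoothed.yaw, target.yaw)
      smoothed.roll = lerp(smoothed.roll, target.roll)

      YTPlayer.setSphericalProperties(smoothed)
    })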

A recap of what we've learned so far

In this tutorial, you learned how to determine the user's head pose, which we used to control the POV of the playing YouTube video.

    Handsfree.use('demo', ({head}) => {
      // head.rotation contains [pitch, yaw, roll] in radians
      console.log(head.rotation)
    })

Combined with what we learned in the last tutorial, you should now be able to determine where on the screen the user is pointing with the instance.pointer object, as well as how their head is rotated with instance.head.rotation.
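As a quick illustration (a sketch only; it assumes the plugin callback receives the instance, as the destructured {head} above suggests, and that the pointer exposes x/y screen coordinates as in the previous tutorial):

    // Log both the on-screen pointer position and the head rotation each frame
    Handsfree.use('logger', (instance) => {
      console.log('pointer:', instance.pointer.x, instance.pointer.y)
      console.log('rotation [pitch, yaw, roll]:', instance.head.rotation)
    })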

Thanks for reading

We still have a few properties to explore, including the user's position in 3D space and their face's morph values (like how smiley they are or how raised their eyebrows are). Until then, here are some links to help you take things further:

Thanks for reading and have fun coding πŸ‘‹

Updates

  • 11/23 - Updated to reflect new v6 API
