Ethan

Originally published at ethan-dev.com

Streaming Camera with C++ WebRTC GStreamer

Introduction

Hello! 😎

In this advanced WebRTC tutorial I will show you how to stream your camera to an HTML page using WebRTC, GStreamer and C++. We will be using Boost to handle the signaling. By the end of this tutorial you should have a basic understanding of how WebRTC and GStreamer fit together. 👀


Requirements

  • GStreamer and its development libraries
  • Boost libraries
  • CMake for building the project
  • A C++ compiler
  • Basic C++ knowledge
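
If you are on Debian/Ubuntu, the dependencies can typically be installed like so (treat this as a rough guide, as package names may differ on other distributions; you will also need a distribution recent enough to ship Boost 1.75 or newer for Boost.Json):

sudo apt install build-essential cmake libboost-all-dev \
  libgstreamer1.0-dev libgstreamer-plugins-bad1.0-dev \
  gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-nice

The gstreamer1.0-nice package provides the libnice elements that webrtcbin relies on for ICE.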

Creating the Project

First we need a place to house our project's files. Create a new directory like so:

mkdir webrtc-stream && cd webrtc-stream

Next we need a build file in order to build the completed project. Create a new file called "CMakeLists.txt" and populate it with the following:

cmake_minimum_required(VERSION 3.10)

# Set the project name and version
project(webrtc_server VERSION 1.0)

# Specify the C++ standard
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED True)

# Find required packages
find_package(PkgConfig REQUIRED)
pkg_check_modules(GST REQUIRED gstreamer-1.0 gstreamer-webrtc-1.0 gstreamer-sdp-1.0)

# Boost.Json was introduced in Boost 1.75, so that is the minimum we can require
find_package(Boost 1.75 REQUIRED COMPONENTS system filesystem json)

# Include directories
include_directories(${GST_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})

# Specify additional directories for the linker
# (this must come before the target is created to have any effect)
link_directories(${GST_LIBRARY_DIRS})

# Add the executable
add_executable(webrtc_server main.cpp)

# Link libraries
target_link_libraries(webrtc_server ${GST_LIBRARIES} Boost::system Boost::filesystem Boost::json)

# Print project info
message(STATUS "Project: ${PROJECT_NAME}")
message(STATUS "Version: ${PROJECT_VERSION}")
message(STATUS "C++ Standard: ${CMAKE_CXX_STANDARD}")
message(STATUS "Boost Libraries: ${Boost_LIBRARIES}")
message(STATUS "GStreamer Libraries: ${GST_LIBRARIES}")

The above finds all the required libraries and links them together so the source can be built into a single executable.
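
If CMake fails at the pkg_check_modules step, it is usually because the GStreamer WebRTC development files are not visible to pkg-config. You can verify this manually first:

pkg-config --modversion gstreamer-webrtc-1.0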

Now we can get on to coding the project. 🥸


Coding the Project

Now we can start on the source code. Create a new file called "main.cpp". We will start by including the necessary headers for GStreamer, WebRTC, Boost and the standard library:

#define GST_USE_UNSTABLE_API
#include <gst/gst.h>
#include <gst/sdp/sdp.h>
#include <gst/webrtc/webrtc.h>
#include <boost/beast.hpp>
#include <boost/asio.hpp>
#include <boost/json.hpp>
#include <iostream>
#include <string>
#include <thread>

namespace beast = boost::beast;
namespace http = beast::http;
namespace websocket = beast::websocket;
namespace net = boost::asio;
using tcp = net::ip::tcp;
using namespace boost::json;

Next we define the constants that will be used later: the STUN server used for ICE and the port that the signaling server will listen on:

#define STUN_SERVER "stun://stun.l.google.com:19302"
#define SERVER_PORT 8000

Now we will declare global variables for the GStreamer main loop and pipeline elements:

GMainLoop *loop;
GstElement *pipeline, *webrtcbin;

Next we will create the functions that handle each of the signaling events. The first is a function that sends ICE candidates to the WebSocket client:

void send_ice_candidate_message(websocket::stream<tcp::socket>& ws, guint mlineindex, gchar *candidate)
{
  std::cout << "Sending ICE candidate: mlineindex=" << mlineindex << ", candidate=" << candidate << std::endl;

  object ice_json;
  ice_json["candidate"] = candidate;
  ice_json["sdpMLineIndex"] = mlineindex;

  object msg_json;
  msg_json["type"] = "candidate";
  msg_json["ice"] = ice_json;

  std::string text = serialize(msg_json);
  ws.write(net::buffer(text));

  std::cout << "ICE candidate sent" << std::endl;
}
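
For reference, the message serialized by this function looks something like the following (the candidate string here is only an illustrative example; real ones vary):

{"type":"candidate","ice":{"candidate":"candidate:1 1 UDP 2122252543 192.168.1.10 49203 typ host","sdpMLineIndex":0}}

This is the shape the frontend expects when it calls addIceCandidate later on.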

The next "on_answer_created" function handles the creation of a WebRTC answer and sends it back to the client:

void on_answer_created(GstPromise *promise, gpointer user_data)
{
  std::cout << "Answer created" << std::endl;

  websocket::stream<tcp::socket>* ws = static_cast<websocket::stream<tcp::socket>*>(user_data);
  GstWebRTCSessionDescription *answer = NULL;
  const GstStructure *reply = gst_promise_get_reply(promise);
  gst_structure_get(reply, "answer", GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &answer, NULL);
  // The promise handed to this callback is ours to release
  gst_promise_unref(promise);

  GstPromise *local_promise = gst_promise_new();
  g_signal_emit_by_name(webrtcbin, "set-local-description", answer, local_promise);
  gst_promise_interrupt(local_promise);
  gst_promise_unref(local_promise);

  // gst_sdp_message_as_text allocates a new string, so free it once serialized
  gchar *sdp_text = gst_sdp_message_as_text(answer->sdp);

  object sdp_json;
  sdp_json["type"] = "answer";
  sdp_json["sdp"] = sdp_text;
  std::string text = serialize(sdp_json);
  ws->write(net::buffer(text));
  g_free(sdp_text);

  std::cout << "Local description set and answer sent: " << text << std::endl;

  gst_webrtc_session_description_free(answer);
}

The next function is just a placeholder for the negotiation event. Since the browser creates the offer in this example, the server has nothing to do here:

void on_negotiation_needed(GstElement *webrtc, gpointer user_data)
{
  std::cout << "Negotiation needed" << std::endl;
}

The "on_set_remote_description" function sets the remote description and creates an answer:

void on_set_remote_description(GstPromise *promise, gpointer user_data)
{
  std::cout << "Remote description set, creating answer" << std::endl;

  // Release the promise from the set-remote-description call
  gst_promise_unref(promise);

  websocket::stream<tcp::socket>* ws = static_cast<websocket::stream<tcp::socket>*>(user_data);
  GstPromise *answer_promise = gst_promise_new_with_change_func(on_answer_created, ws, NULL);

  g_signal_emit_by_name(webrtcbin, "create-answer", NULL, answer_promise);
}

The "on_ice_candidate" function handles ICE candidate events and sends them to the WebSocket client:

void on_ice_candidate(GstElement *webrtc, guint mlineindex, gchar *candidate, gpointer user_data)
{
  std::cout << "ICE candidate generated: mlineindex=" << mlineindex << ", candidate=" << candidate << std::endl;

  websocket::stream<tcp::socket>* ws = static_cast<websocket::stream<tcp::socket>*>(user_data);
  send_ice_candidate_message(*ws, mlineindex, candidate);
}

The "handle_websocket_session" function manages the WebSocket connection, setting up the GStreamer pipeline and handling both SDP and ICE messages:

void handle_websocket_session(tcp::socket socket)
{
  try
  {
    websocket::stream<tcp::socket> ws{std::move(socket)};
    ws.accept();

    std::cout << "WebSocket connection accepted" << std::endl;

    GstStateChangeReturn ret;
    GError *error = NULL;

    pipeline = gst_pipeline_new("pipeline");
    GstElement *v4l2src = gst_element_factory_make("v4l2src", "source");
    GstElement *videoconvert = gst_element_factory_make("videoconvert", "convert");
    GstElement *queue = gst_element_factory_make("queue", "queue");
    GstElement *vp8enc = gst_element_factory_make("vp8enc", "encoder");
    GstElement *rtpvp8pay = gst_element_factory_make("rtpvp8pay", "pay");
    webrtcbin = gst_element_factory_make("webrtcbin", "sendrecv");

    if (!pipeline || !v4l2src || !videoconvert || !queue || !vp8enc || !rtpvp8pay || !webrtcbin)
    {
      g_printerr("Not all elements could be created.\n");
      return;
    }

    g_object_set(v4l2src, "device", "/dev/video0", NULL);
    g_object_set(vp8enc, "deadline", 1, NULL);
    // Tell webrtcbin which STUN server to use; without this the constant above is never applied
    g_object_set(webrtcbin, "stun-server", STUN_SERVER, NULL);

    gst_bin_add_many(GST_BIN(pipeline), v4l2src, videoconvert, queue, vp8enc, rtpvp8pay, webrtcbin, NULL);

    if (!gst_element_link_many(v4l2src, videoconvert, queue, vp8enc, rtpvp8pay, NULL))
    {
      g_printerr("Elements could not be linked.\n");
      gst_object_unref(pipeline);
      return;
    }

    GstPad *rtp_src_pad = gst_element_get_static_pad(rtpvp8pay, "src");
    GstPad *webrtc_sink_pad = gst_element_get_request_pad(webrtcbin, "sink_%u");
    gst_pad_link(rtp_src_pad, webrtc_sink_pad);
    gst_object_unref(rtp_src_pad);
    gst_object_unref(webrtc_sink_pad);

    g_signal_connect(webrtcbin, "on-negotiation-needed", G_CALLBACK(on_negotiation_needed), &ws);
    g_signal_connect(webrtcbin, "on-ice-candidate", G_CALLBACK(on_ice_candidate), &ws);

    ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);

    if (ret == GST_STATE_CHANGE_FAILURE)
    {
      g_printerr("Unable to set the pipeline to the playing state.\n");
      gst_object_unref(pipeline);
      return;
    }

    std::cout << "GStreamer pipeline set to playing" << std::endl;

    for (;;)
    {
      beast::flat_buffer buffer;
      ws.read(buffer);

      auto text = beast::buffers_to_string(buffer.data());
      value jv = parse(text);
      object obj = jv.as_object();
      std::string type = obj["type"].as_string().c_str();

      if (type == "offer")
      {
        std::cout << "Received offer: " << text << std::endl;

        std::string sdp = obj["sdp"].as_string().c_str();

        GstSDPMessage *sdp_message;
        gst_sdp_message_new_from_text(sdp.c_str(), &sdp_message);
        GstWebRTCSessionDescription *offer = gst_webrtc_session_description_new(GST_WEBRTC_SDP_TYPE_OFFER, sdp_message);
        GstPromise *promise = gst_promise_new_with_change_func(on_set_remote_description, &ws, NULL);
        g_signal_emit_by_name(webrtcbin, "set-remote-description", offer, promise);
        gst_webrtc_session_description_free(offer);

        std::cout << "Setting remote description" << std::endl;
      }
      else if (type == "candidate")
      {
        std::cout << "Received ICE candidate: " << text << std::endl;

        object ice = obj["ice"].as_object();
        std::string candidate = ice["candidate"].as_string().c_str();
        guint sdpMLineIndex = ice["sdpMLineIndex"].as_int64();
        g_signal_emit_by_name(webrtcbin, "add-ice-candidate", sdpMLineIndex, candidate.c_str());

        std::cout << "Added ICE candidate" << std::endl;
      }
    }
  }
  catch (beast::system_error const& se)
  {
    if (se.code() != websocket::error::closed)
    {
      std::cerr << "Error: " << se.code().message() << std::endl;
    }
  }
  catch (std::exception const& e)
  {
    std::cerr << "Exception: " << e.what() << std::endl;
  }
}
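
Before involving WebRTC at all, it can be worth sanity checking the capture and encode part of the pipeline on its own. A rough local-only equivalent can be run with gst-launch-1.0 (decoding straight back to a window instead of feeding webrtcbin):

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! queue ! vp8enc deadline=1 ! vp8dec ! videoconvert ! autovideosink

If a window appears showing your camera, the GStreamer side is working and any remaining issues are in the signaling or WebRTC layers.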

The next "start_server" function initializes the server, accepting TCP connections and spawning a new thread to handle each connection:

void start_server()
{
  try
  {
    net::io_context ioc{1};
    tcp::acceptor acceptor{ioc, tcp::endpoint{tcp::v4(), SERVER_PORT}};

    for (;;)
    {
      tcp::socket socket{ioc};
      acceptor.accept(socket);
      std::cout << "Accepted new TCP connection" << std::endl;
      std::thread{handle_websocket_session, std::move(socket)}.detach();
    }
  }
  catch (std::exception const& e)
  {
    std::cerr << "Exception: " << e.what() << std::endl;
  }
}

Finally we just need the main function to initialize GStreamer, start the server thread and run the main loop:

int main(int argc, char *argv[])
{
  gst_init(&argc, &argv);
  loop = g_main_loop_new(NULL, FALSE);

  std::cout << "Starting WebRTC server" << std::endl;

  std::thread server_thread(start_server);
  g_main_loop_run(loop);

  server_thread.join();

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  g_main_loop_unref(loop);

  std::cout << "WebRTC server stopped" << std::endl;

  return 0;
}

Note that nothing in this example ever quits the GStreamer main loop, so the server effectively runs until the process is killed; the cleanup at the end of main is mostly illustrative.

Done, now we can finally build the project! 😄


Building the Project

To build the above source code into an executable, first create a new directory called "build":

mkdir build && cd build

Build the project:

cmake ..
make

If all goes well the project should build successfully and you should end up with a webrtc_server executable.

Next we need to create a page to view the stream. 😸


Creating the Frontend

Create a new directory called "public", and in it create a new HTML file called "index.html" populated with the following code:

<!DOCTYPE html>
<html>
<head>
  <title>WebRTC Stream</title>
</head>
<body>
  <video id="video" autoplay playsinline muted></video>
  <script>
    const video = document.getElementById('video');
    const signaling = new WebSocket('ws://localhost:8000/ws');
    let pc = new RTCPeerConnection({
      iceServers: [{urls: 'stun:stun.l.google.com:19302'}]
    });

    signaling.onmessage = async (event) => {
      const data = JSON.parse(event.data);
      console.log('Received signaling message:', data);

      if (data.type === 'answer') {
        console.log('Setting remote description with answer');
        await pc.setRemoteDescription(new RTCSessionDescription(data));
      } else if (data.type === 'candidate') {
        console.log('Adding ICE candidate:', data.ice);
        await pc.addIceCandidate(new RTCIceCandidate(data.ice));
      }
    };

    pc.onicecandidate = (event) => {
      if (event.candidate) {
        console.log('Sending ICE candidate:', event.candidate);
        signaling.send(JSON.stringify({
          type: 'candidate',
          ice: event.candidate
        }));
      }
    };

    pc.ontrack = (event) => {
      console.log('Received track:', event);
      if (event.track.kind === 'video') {
        console.log('Attaching video track to video element');
        video.srcObject = event.streams[0];
        // Note: do not call video.load() after play(); it resets the element and aborts playback
        video.play().catch(error => {
          console.error('Error playing video:', error);
        });
      }
    };

    pc.oniceconnectionstatechange = () => {
      console.log('ICE connection state:', pc.iceConnectionState);
    };

    pc.onicegatheringstatechange = () => {
      console.log('ICE gathering state:', pc.iceGatheringState);
    };

    pc.onsignalingstatechange = () => {
      console.log('Signaling state:', pc.signalingState);
    };

    async function start() {
      pc.addTransceiver('video', {direction: 'recvonly'});
      const offer = await pc.createOffer();
      console.log('Created offer:', offer);
      await pc.setLocalDescription(offer);
      console.log('Set local description with offer');
      signaling.send(JSON.stringify({type: 'offer', sdp: pc.localDescription.sdp}));
    }

    // Wait for the signaling socket to open before creating the offer,
    // otherwise the send() in start() can race the connection
    signaling.onopen = () => start();
  </script>
</body>
</html>

The above is explained in more detail in my other WebRTC tutorials, but in short it creates an offer, exchanges SDP and ICE candidates with the signaling server, and plays the remote stream in the video element once it is received.

Done, now we can actually run the project! 👍


Running the Project

To run the project, simply execute the following command from the build directory:

./webrtc_server
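
If the server misbehaves, GStreamer's built-in logging is the quickest way to see what is going on. For example, raising the log level for the webrtcbin category:

GST_DEBUG=webrtcbin:4 ./webrtc_server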

To serve the HTML page we will use a Python module, run from inside the public directory:

python3 -m http.server 9999

Navigate your browser to http://localhost:9999 and on load you should see your camera stream in the video element like so:

Image of camera stream

Done! 😁


Considerations

In order to improve the above, I would like to implement the following:

  • Handle multiple viewers
  • Handle receiving a stream from HTML
  • Creating an SFU
  • Recording

Conclusion

In this tutorial I have shown you how to stream your camera using native C++ and GStreamer, and how to view the stream in an HTML page. I hope this tutorial has taught you something; I certainly had a lot of fun creating it.

As always you can find the source code for the project on my Github:
https://github.com/ethand91/webrtc-gstreamer

Happy Coding! 😎


Like my work? I post about a variety of topics, so if you would like to see more please like and follow me.
Also I love coffee.

“Buy Me A Coffee”

If you are looking to learn Algorithm Patterns to ace the coding interview, I recommend the [following course](https://algolab.so/p/algorithms-and-data-structure-video-course?affcode=1413380_bzrepgch).
