Kevin Odongo
Build Video/Chat App with AWS Websocket, WebRTC, and Vue Final Part

In our final part, we want to add a new AWS Service called Kinesis Video Stream to the application.

Brief Explanation

Kinesis Video Streams

Kinesis Video Streams also supports WebRTC, an open-source project that enables real-time media streaming and interaction between web browsers, mobile applications, and connected devices via simple APIs. Typical uses include video chat and peer-to-peer media streaming.

The benefits of Kinesis Video Streams

  • Stream video from millions of devices
  • Build real-time vision and video-enabled apps
  • Playback live and recorded video streams
  • Build apps with two-way, real-time media streaming
  • Secure
  • Durable, searchable storage
  • No infrastructure to manage

[Architecture diagram]

This is the architecture we want to add to our application.

WebRTC

WebRTC is an open technology specification for enabling real-time communication (RTC) across browsers and mobile applications via simple APIs. It uses peering techniques for real-time data exchange between connected peers and provides low latency media streaming required for human-to-human interaction. The WebRTC specification includes a set of IETF protocols including Interactive Connectivity Establishment, Traversal Using Relay around NAT (TURN), and Session Traversal Utilities for NAT (STUN) for establishing peer-to-peer connectivity, in addition to protocol specifications for reliable and secure real-time media and data streaming.

Amazon Kinesis Video Streams with WebRTC Concepts

The following are key terms and concepts specific to Amazon Kinesis Video Streams with WebRTC.

Signaling channel

A resource that enables applications to discover, set up, control, and terminate a peer-to-peer connection by exchanging signaling messages.

Peer

Any device or application (for example, a mobile or web application, webcam, home security camera, baby monitor, etc.) that is configured for real-time, two-way streaming through a Kinesis Video Streams with WebRTC signaling channel.

Master

A peer that initiates the connection and is connected to the signaling channel with the ability to discover and exchange media with any of the signaling channel's connected viewers.

Currently, a signaling channel can only have one master.

Viewer

A peer that is connected to the signaling channel with the ability to discover and exchange media only with the signaling channel's master.

Other concepts that you need to know

Session Traversal Utilities for NAT (STUN)

A protocol that is used to discover your public address and determine any restrictions in your router that would prevent a direct connection with a peer.

Traversal Using Relays around NAT (TURN)

A server that is used to bypass the Symmetric NAT restriction by opening a connection with a TURN server and relaying all information through that server.

Session Description Protocol (SDP)

A standard for describing the multimedia content of the connection, such as resolution, formats, codecs, and encryption, so that both peers can understand each other once data transfer begins.

SDP Offer

An SDP message sent by an agent that generates a session description in order to create or modify a session. It describes the aspects of desired media communication.

SDP Answer

An SDP message sent by an answerer in response to an offer received from an offerer. The answer indicates the aspects that are accepted. For example, if all the audio and video streams in the offer are accepted.

Interactive Connectivity Establishment (ICE)

A framework that allows your web browser to connect with peers.

ICE Candidate

A candidate IP address and port pair that the sending peer can use to communicate, discovered and advertised through the ICE framework.

STEPS

When a connection is initiated in your application, here are the steps that take place.

  • Master (User A) creates a meeting and chooses video. When the video button is clicked, Master (User A) makes an SDP offer that contains information about the session the Master (User A) application wants to establish, including which codecs to use, whether this is an audio or video session, and so on. It also contains a list of ICE candidates, which are the IP and port pairs that the Viewer (User B) application can attempt to use to connect to Master (User A).

  • Now here is where our WebSocket comes in handy: Master (User A) has to share the SDP offer with Viewer (User B) so that the viewer can generate an SDP answer that describes the session from their side.

  • After Master (User A) and Viewer (User B) have exchanged SDPs, they then perform a series of connectivity checks. The ICE algorithm in each application takes a candidate IP/port pair from the list it received in the other party's SDP and sends it a STUN request. If a response comes back from the other application, the originating application considers the check successful and marks that IP/port pair as a valid ICE candidate.

  • After connectivity checks are finished on all of the IP/port pairs, the applications negotiate and decide to use one of the remaining valid pairs. When a pair is selected, media begins flowing between the applications.
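The steps above can be sketched as plain message envelopes relayed over the WebSocket API from the earlier parts of this series. The `action`/`to`/`payload` shape and the user IDs below are a hypothetical convention for illustration, not part of any AWS API:

```javascript
// Hypothetical envelope format for relaying SDP messages through our
// own WebSocket API. The Kinesis SignalingClient can carry these
// messages for you, but spelling the flow out makes it explicit.
function makeSignal(action, to, payload) {
  return JSON.stringify({ action, to, payload });
}

// Master (User A) wraps its SDP offer for Viewer (User B) ...
const offerMsg = makeSignal("sdp-offer", "userB", {
  type: "offer",
  sdp: "v=0 ..." // the real SDP comes from RTCPeerConnection.createOffer()
});

// ... and the Viewer replies with an SDP answer.
function handleSignal(raw, respond) {
  const msg = JSON.parse(raw);
  if (msg.action === "sdp-offer") {
    // In the real app, RTCPeerConnection.createAnswer() runs here.
    respond(
      makeSignal("sdp-answer", "userA", { type: "answer", sdp: "v=0 ..." })
    );
  }
}
```

Once both sides hold each other's SDP, the ICE connectivity checks described above run automatically inside the browser.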

Limits

Kinesis Video Streams has a limit of 1,000 signaling channels per account created through the CreateSignalingChannel API.

Other limits to take note of

ConnectAsMaster

  • API - 3 TPS per channel (hard)

  • Maximum number of master connections per signaling channel - 1 (hard)

  • Connection duration limit - 1 hour (hard)

  • Idle connection timeout - 10 minutes (hard)

  • When a client receives the GO_AWAY message from the server, the connection is terminated after a grace period of 1 minute (hard)

ConnectAsViewer

  • API - 3 TPS per channel (hard)

  • Maximum number of viewer connections per channel - 10 (soft)

  • Connection duration limit - 1 hour (hard)

  • Idle connection timeout - 10 minutes (hard)

  • Once a client receives the GO_AWAY message from the server, the connection is terminated after a grace period of 1 minute (hard)

SendSDPOffer

  • API: 5 TPS per WebSocket connection (hard)

  • Message payload size limit - 10k (hard)

SendSDPAnswer

  • API: 5 TPS per WebSocket connection (hard)

  • Message payload size limit - 10k (hard)

SendICECandidate

  • API: 20 TPS per WebSocket connection (hard)

  • Message payload size limit - 10k (hard)

SendAlexaOffertoMaster

  • API: 5 TPS per signaling channel (hard)

GetIceServerConfig

  • API: 5 TPS per signaling channel (hard)

Disconnect

  • N/A
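Note the 10k message payload limits above (taken here to mean 10 KB). Large SDPs can approach this, so it is worth guarding client-side before calling SendSDPOffer or SendSDPAnswer; a minimal sketch:

```javascript
// Signaling message payloads are capped at roughly 10 KB; reject
// oversized payloads client-side instead of letting the API call fail.
const MAX_SIGNALING_PAYLOAD_BYTES = 10 * 1024;

function fitsSignalingLimit(payload) {
  const bytes = Buffer.byteLength(JSON.stringify(payload), "utf8");
  return bytes <= MAX_SIGNALING_PAYLOAD_BYTES;
}
```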

Pricing

Kinesis Video Streams pricing

Service Price
Data Ingested into Kinesis Video Streams (per GB data ingested) $0.00850
Data Consumed from Kinesis Video Streams (per GB data egressed) $0.00850
Data Consumed from Kinesis Video Streams using HLS (per GB data egressed) $0.01190
Data Stored in Kinesis Video Streams (per GB-Month data stored) $0.02300

WebRTC pricing

Service Price
Active signaling channels (per channel per month) $0.03000
Signaling messages (per million) $2.25000
TURN Streaming minutes (per thousand) $0.12000

SUMMARY

Before we get to writing our code: AWS Kinesis does not have a free tier. This is a paid AWS product, so you will need to be careful and always clean up what you are not using.

Your application should be able to create a channel and destroy it when the Master (User A) closes their meeting. This will keep the cost of your application minimal.

AWS Kinesis Video Streams is not ideal for applications that require a large number of viewers: it has a soft limit of 10 viewers per channel (this limit can be increased on request).

It is, however, a good fit if you are building a video conferencing app for a small team, school, or church meeting.

Configuration



yarn add amazon-kinesis-video-streams-webrtc



Update script.js file



const SignalingClient = require('amazon-kinesis-video-streams-webrtc').SignalingClient;


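The snippets below also reference a `kinesisVideoClient`, which is not shown being constructed. A minimal sketch of that setup, assuming the AWS SDK for JavaScript v2 and the same environment variables used later in this post:

```javascript
const AWS = require("aws-sdk");

// Assumed setup (not shown in the original snippets): the
// kinesisVideoClient used throughout script.js.
const kinesisVideoClient = new AWS.KinesisVideo({
  region: process.env.VUE_APP_MY_REGION,
  accessKeyId: process.env.VUE_APP_ACCESS_KEY_ID,
  secretAccessKey: process.env.VUE_APP_SECRET_ACCESS_KEY,
  correctClockSkew: true
});
```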

STEP ONE

Let us begin with a way to create a Kinesis Video Streams channel. Remember, to control your AWS billing, always close what you are not consuming. Let us add two functions to our script.js file.



// master user create a signal function
export const createsignal = async event => {
  // 1. Create a signal channel
  const createSignalingChannelResponse = await kinesisVideoClient
    .createSignalingChannel({
      ChannelName: `${event}` /* required */,
      ChannelType: "SINGLE_MASTER"
    })
    .promise();
  console.log("[MASTER] Channel Name: ", createSignalingChannelResponse);
  return createSignalingChannelResponse;
};

// delete a channel by master user
export const deletechannel = async event => {
  // 2. Get signaling channel ARN
  const describeSignalingChannelResponse = await kinesisVideoClient
    .describeSignalingChannel({
      ChannelName: `${event}`
    })
    .promise();
  const channelARN = describeSignalingChannelResponse.ChannelInfo.ChannelARN;
  console.log("[MASTER] Channel ARN: ", channelARN);
  // delete channel
  await kinesisVideoClient
    .deleteSignalingChannel({
      ChannelARN: `${channelARN}`
    })
    .promise();
};



We now have a complete way to start our sessions and a complete way to clean up our AWS environment.

STEP TWO

In our brief explanation above, both users have to generate ICE candidates, then make an offer and an answer, and all of these must be shared before the video exchange begins.

Here are the two functions that handle the duties of the Master and the Viewer.



// eslint-disable-next-line no-unused-vars
export const generateiceserversformaster = async (event, master) => {
  // 2. Get signaling channel ARN
  const describeSignalingChannelResponse = await kinesisVideoClient
    .describeSignalingChannel({
      ChannelName: `${event}`
    })
    .promise();
  const channelARN = describeSignalingChannelResponse.ChannelInfo.ChannelARN;
  console.log("[MASTER] Channel ARN: ", channelARN);

  // 3. Get signaling channel endpoints
  const getSignalingChannelEndpointResponse = await kinesisVideoClient
    .getSignalingChannelEndpoint({
      ChannelARN: channelARN,
      SingleMasterChannelEndpointConfiguration: {
        Protocols: ["WSS", "HTTPS"],
        Role: "MASTER"
      }
    })
    .promise();
  const endpointsByProtocol = getSignalingChannelEndpointResponse.ResourceEndpointList.reduce(
    (endpoints, endpoint) => {
      endpoints[endpoint.Protocol] = endpoint.ResourceEndpoint;
      return endpoints;
    },
    {}
  );
  console.log("[MASTER] Endpoints: ", endpointsByProtocol);

  // 4. Create Signaling Client
  //window.KVSWebRTC.SignalingClient
  master.signalingClient = new SignalingClient({
    channelARN,
    channelEndpoint: endpointsByProtocol.WSS,
    role: "MASTER",
    region: process.env.VUE_APP_MY_REGION,
    credentials: {
      accessKeyId: process.env.VUE_APP_ACCESS_KEY_ID,
      secretAccessKey: process.env.VUE_APP_SECRET_ACCESS_KEY
    },
    systemClockOffset: kinesisVideoClient.config.systemClockOffset
  });

  // Get ICE server configuration
  const kinesisVideoSignalingChannelsClient = new AWS.KinesisVideoSignalingChannels(
    {
      region: process.env.VUE_APP_MY_REGION,
      accessKeyId: process.env.VUE_APP_ACCESS_KEY_ID,
      secretAccessKey: process.env.VUE_APP_SECRET_ACCESS_KEY,
      endpoint: endpointsByProtocol.HTTPS,
      correctClockSkew: true
    }
  );
  const getIceServerConfigResponse = await kinesisVideoSignalingChannelsClient
    .getIceServerConfig({
      ChannelARN: channelARN
    })
    .promise();
  const iceServers = [];
  // Regional STUN server
  iceServers.push({
    urls: `stun:stun.kinesisvideo.${process.env.VUE_APP_MY_REGION}.amazonaws.com:443`
  });
  // TURN servers returned by GetIceServerConfig
  getIceServerConfigResponse.IceServerList.forEach(iceServer =>
    iceServers.push({
      urls: iceServer.Uris,
      username: iceServer.Username,
      credential: iceServer.Password
    })
  );
  console.log("[MASTER] ICE servers: ", iceServers);
  return iceServers;
};

export const generateiceserversforviewer = async (event, viewer) => {
  // Get signaling channel ARN
  const describeSignalingChannelResponse = await kinesisVideoClient
    .describeSignalingChannel({
      ChannelName: event // channel name
    })
    .promise();
  const channelARN = describeSignalingChannelResponse.ChannelInfo.ChannelARN;
  console.log("[VIEWER] Channel ARN: ", channelARN);

  // Get signaling channel endpoints
  const getSignalingChannelEndpointResponse = await kinesisVideoClient
    .getSignalingChannelEndpoint({
      ChannelARN: channelARN,
      SingleMasterChannelEndpointConfiguration: {
        Protocols: ["WSS", "HTTPS"],
        Role: "VIEWER"
      }
    })
    .promise();
  const endpointsByProtocol = getSignalingChannelEndpointResponse.ResourceEndpointList.reduce(
    (endpoints, endpoint) => {
      endpoints[endpoint.Protocol] = endpoint.ResourceEndpoint;
      return endpoints;
    },
    {}
  );
  console.log("[VIEWER] Endpoints: ", endpointsByProtocol);

  const kinesisVideoSignalingChannelsClient = new AWS.KinesisVideoSignalingChannels(
    {
      region: process.env.VUE_APP_MY_REGION,
      accessKeyId: process.env.VUE_APP_ACCESS_KEY_ID,
      secretAccessKey: process.env.VUE_APP_SECRET_ACCESS_KEY,
      endpoint: endpointsByProtocol.HTTPS,
      correctClockSkew: true
    }
  );

  // Get ICE server configuration
  const getIceServerConfigResponse = await kinesisVideoSignalingChannelsClient
    .getIceServerConfig({
      ChannelARN: channelARN
    })
    .promise();
  const iceServers = [];
  iceServers.push({
    urls: `stun:stun.kinesisvideo.${process.env.VUE_APP_MY_REGION}.amazonaws.com:443`
  });
  getIceServerConfigResponse.IceServerList.forEach(iceServer =>
    iceServers.push({
      urls: iceServer.Uris,
      username: iceServer.Username,
      credential: iceServer.Password
    })
  );
  console.log("[VIEWER] ICE servers: ", iceServers);

  // Create Signaling Client
  viewer.signalingClient = new SignalingClient({
    channelARN,
    channelEndpoint: endpointsByProtocol.WSS,
    clientId: viewer.connection_id,
    role: "VIEWER",
    region: process.env.VUE_APP_MY_REGION,
    credentials: {
      accessKeyId: process.env.VUE_APP_ACCESS_KEY_ID,
      secretAccessKey: process.env.VUE_APP_SECRET_ACCESS_KEY
    },
    systemClockOffset: kinesisVideoClient.config.systemClockOffset
  });
  return iceServers;
};



Finally, in our script.js file, we need a means of stopping each user's session.



// stop viewer
export const stopviewer = viewer => {
  console.log("[VIEWER] Stopping viewer connection");
  if (viewer.signalingClient) {
    viewer.signalingClient.close();
    viewer.signalingClient = null;
  }

  if (viewer.peerConnection) {
    viewer.peerConnection.close();
    viewer.peerConnection = null;
  }

  if (viewer.localStream) {
    viewer.localStream.getTracks().forEach(track => track.stop());
    viewer.localStream = null;
  }

  if (viewer.remoteStream) {
    viewer.remoteStream.getTracks().forEach(track => track.stop());
    viewer.remoteStream = null;
  }

  if (viewer.localView) {
    viewer.localView.srcObject = null;
  }

  if (viewer.remoteView) {
    viewer.remoteView.srcObject = null;
  }
};

// stop master
export const stopmaster = master => {
  console.log("[MASTER] Stopping master connection");
  if (master.signalingClient) {
    master.signalingClient.close();
    master.signalingClient = null;
  }

  if (master.peerConnection) {
    master.peerConnection.close();
    master.peerConnection = null;
  }

  if (master.localStream) {
    master.localStream.getTracks().forEach(track => track.stop());
    master.localStream = null;
  }

  if (master.remoteStream) {
    master.remoteStream.getTracks().forEach(track => track.stop());
    master.remoteStream = null;
  }

  if (master.localView) {
    master.localView.srcObject = null;
  }

  if (master.remoteView) {
    master.remoteView.srcObject = null;
  }
};



With that, our script.js file is complete. In the application we need to update the following components; here is the logic behind how everything flows.

How do a master and viewer start?

./views/options.vue

When a user starts a video call, we call the function in our script.js file to create a channel. Once a channel is created, the user is directed to the video component, where the browser asks them to enable the camera and microphone before they get to the meeting session.


The function below handles the actions required from the options component.



// go to video
async gotovideo() {
  this.loading = true;
  // 1. store user as master
  let user = "MASTER";
  this.$store.dispatch("savecurrentuser", user);
  // 2a. Generate the channel name
  let channel_name = randomize("Aa0", 10);
  await createsignal(channel_name);
  // 2b. Save channel name
  this.$store.dispatch("saveurl", channel_name);
  // set timeout before routing to the video component
  setTimeout(() => {
    this.loading = false;
    this.$router.push("/video");
  }, 1000);
}


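A side note on the channel name: `randomize("Aa0", 10)` above produces 10 random characters drawn from upper-case letters, lower-case letters, and digits. If you would rather not pull in that dependency, here is a dependency-free stand-in (`randomChannelName` is a hypothetical helper, not in the original code):

```javascript
// Dependency-free stand-in for randomize("Aa0", 10): builds a random
// channel name from upper-case letters, lower-case letters, and digits.
function randomChannelName(length = 10) {
  const chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  let name = "";
  for (let i = 0; i < length; i++) {
    name += chars[Math.floor(Math.random() * chars.length)];
  }
  return name;
}
```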

./views/home.vue

For a viewer, once they input the link and click Join, they are directed straight to the session component. We only save in our store that the current user is a VIEWER.

./views/session.vue

In our session component, we first want to determine whether the current user is the MASTER or a VIEWER. To share your ICE candidates, you can use the WebSocket API you created earlier or the Kinesis Signaling Client.



// get the current user and call the right action
async getthecurrentuser() {
  if (this.currentuser === "MASTER") {
    this.initializemaster();
  } else if (this.currentuser === "VIEWER") {
    this.initializeviewer();
  } else {
    return;
  }
},



From there, if the current user is the MASTER, they create an offer and make it available to any user who joins the channel. Once a new viewer joins the channel, they receive the offer, generate an answer, exchange ICE candidates, and thereafter exchange video.
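The `initializemaster()` body itself is not shown, so here is a hedged sketch of the master-side wiring using the SignalingClient events from the `amazon-kinesis-video-streams-webrtc` SDK. The `createPeerConnection` factory is an assumption made for testability; in the browser it would return an `RTCPeerConnection` configured with the ICE servers from STEP TWO, and the caller would invoke `signalingClient.open()` after wiring:

```javascript
// Sketch of the master side of initializemaster(): answer each
// viewer's SDP offer and keep one peer connection per viewer.
function wireMasterHandlers(signalingClient, createPeerConnection) {
  const peerConnections = {};

  signalingClient.on("open", () => {
    console.log("[MASTER] Connected to the signaling channel");
  });

  // Each viewer that joins sends an SDP offer; answer it.
  signalingClient.on("sdpOffer", async (offer, remoteClientId) => {
    const pc = createPeerConnection(remoteClientId);
    peerConnections[remoteClientId] = pc;
    await pc.setRemoteDescription(offer);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signalingClient.sendSdpAnswer(pc.localDescription, remoteClientId);
  });

  return peerConnections;
}
```

The viewer side mirrors this: it sends an SDP offer on `open` and listens for the `sdpAnswer` and `iceCandidate` events instead.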

That's the end of this tutorial. I will create a YouTube tutorial as a follow-up.
Here are some links for further reading about WebRTC and Kinesis Video Streams:

https://webrtc.org/
https://docs.aws.amazon.com/kinesisvideostreams-webrtc-dg/latest/devguide/what-is-kvswebrtc.html

I hope this was helpful.

Thank you
