Video Chat with WebRTC and Firebase

Hello everyone,
Today I am going to talk about WebRTC.

In this post, you will learn how to implement WebRTC in JavaScript, along with a simple explanation of the ideas behind this API.

Understanding WebRTC

WebRTC stands for Web Real-Time Communication. It works by establishing a peer-to-peer connection between browsers, with a signaling step that is handled by a server.

Key Terms to Understand in WebRTC

  • navigator.mediaDevices :- navigator.mediaDevices provides access to the media devices connected to your machine, such as microphones and cameras. It has a method called getUserMedia({video: true, audio: true}) which asks the user for permission to use those devices.
  • MediaStream() :- The MediaStream constructor creates an empty stream to which we can add the audio and video tracks that will come from our remote friend.
  • RTCPeerConnection() :- This is the core constructor that helps us make a connection between the local device and the remote device. It accepts a configuration object describing the ICE server URLs.
  • IceServers :- I won't go into too much depth on ICE servers, but there are mainly two kinds, TURN and STUN, which help the local and remote networks exchange one-time network information for connectivity. You can learn more about them in this post.
  • localDescription :- RTCPeerConnection.localDescription is a read-only property, initially null. It holds the local session description (our offer), which will be sent to the remote device.
  • Offer :- The offer is created by the createOffer() method of RTCPeerConnection. Calling this method creates an SDP (Session Description Protocol) description, which automatically includes the getUserMedia tracks if the user has granted permission. SDP basically describes the audio, video and host address information.
  • setRemoteDescription :- The remote user calls this method with the offer sent by the local user to register it as the remote description.
  • Answer :- After the remote description is set, we can create the answer with the createAnswer() method.
  • icecandidate :- icecandidate is the main hook event that lets the two connections know that something has changed. This event is fired on the RTCPeerConnection whenever the localDescription is set. An ICE candidate actually contains a node of your network, which we send to the remote device so that it knows which connection endpoints to connect to. For example, if you go to a new city, you don't know all its streets and directions; you only know the address you want to reach. So you take the help of Google Maps or something else to get to your destination. In the same way, the ICE candidate tells the current location, and the RTC connection then determines the (shortest) path to it. The minimal sketch after this list shows how these pieces fit together.
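
To tie these terms together, here is a minimal sketch of the offer/answer handshake. It is not part of the project code: it wires two RTCPeerConnection objects together in the same page with no signaling server, just to show the order of the calls, and the names handshakeSketch, localPeer and remotePeer are only for this illustration.

async function handshakeSketch() {
    const localPeer = new RTCPeerConnection();
    const remotePeer = new RTCPeerConnection();

    // In a real app these candidates travel through a signaling server (Firestore, later in this post).
    localPeer.addEventListener("icecandidate", e => e.candidate && remotePeer.addIceCandidate(e.candidate));
    remotePeer.addEventListener("icecandidate", e => e.candidate && localPeer.addIceCandidate(e.candidate));

    // Ask the user for camera/microphone access and attach the tracks to the local peer.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    stream.getTracks().forEach(track => localPeer.addTrack(track, stream));

    const offer = await localPeer.createOffer();      // SDP describing our media
    await localPeer.setLocalDescription(offer);       // icecandidate events start firing now
    await remotePeer.setRemoteDescription(offer);     // the remote side accepts the offer

    const answer = await remotePeer.createAnswer();   // the remote side's SDP
    await remotePeer.setLocalDescription(answer);
    await localPeer.setRemoteDescription(answer);     // handshake complete
}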

Code

The HTML should look like this:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>WebRtc</title>
</head>
<body>
    <div>
        <button id="open"  className="btn btn-open">Open Microphone and Camera</button>
        <button id="create"  className="btn btn-open"> Create Room</button>
        <button id="join"  className="btn btn-open">Join Room</button>
        <button id="hangup"  className="btn btn-open">Hang Up</button>
    </div>
    <div>
        <video id="localVideo" autoplay playsinline></video>
        <video id="remoteVideo" autoplay playsinline></video>
    </div>

    <!-- The core Firebase JS SDK is always required and must be listed first -->
    <script src="https://www.gstatic.com/firebasejs/8.0.1/firebase-app.js"></script>
    <script src="https://www.gstatic.com/firebasejs/8.0.1/firebase-firestore.js"></script>
    <script>
    // Your web app's Firebase configuration
    // For Firebase JS SDK v7.20.0 and later, measurementId is optional
    var firebaseConfig = {
        apiKey: "",
        authDomain: "",
        databaseURL: "",
        projectId: "",
        storageBucket: "",
        messagingSenderId: "",
        appId: "",
        measurementId: ""
    };
    // Initialize Firebase
    firebase.initializeApp(firebaseConfig);

    let firestore = firebase.firestore();
    </script>
    <script src="./index.js"></script>
    <script src="./functions/openButtonFunc.js"></script>
    <script src="./functions/createButtonFunc.js"></script>
    <script src="./functions/joinButtonFunc.js"></script>
    <script src="./functions/hangupButtonFunc.js"></script>
    <script src="./events.js"></script>
</body>
</html>

Create your Firebase project and replace firebaseConfig with your own project's configuration.

Okay, we will start with the index.js file.

let localStream;
let remoteStream;
let roomId;
let roomIdData = [];
let peerConnection;

let configuration = {
    'iceServers': [
      {'urls': 'stun:stun.services.mozilla.com'},
      {'urls': 'stun:stun.l.google.com:19302'},
    ]
  }

//Reference to the Buttons
let openButton = document.getElementById("open");
let createButton = document.getElementById("create");
let joinButton = document.getElementById("join");
let hangupButton = document.getElementById("hangup");

createButton.disabled = true;
joinButton.disabled = true;
hangupButton.disabled = true;

// Reference to the Video Tags
let localVideo = document.getElementById("localVideo");
let remoteVideo = document.getElementById("remoteVideo");

In this file, we have initialised some variables that we are going to need later in this project.

But one variable that I want to talk about is configuration. Remember, I said that RTCPeerConnection requires a configuration; that configuration is this variable.

Let's move on. Now I want you to create a new folder called functions and, in it, create a file called openButtonFunc.js.
Its content should be like this :-

const openButtonFunc = async () => {
    try {
        localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
        localVideo.srcObject = localStream;

        remoteStream = new MediaStream();
        remoteVideo.srcObject = remoteStream;

        openButton.disabled = true;
        createButton.disabled = false;
        joinButton.disabled = false;
        hangupButton.disabled = false;
    } catch (error) {
        console.log(error)
    }
}

In this file, when the user clicks on openButton, it asks the user for permissions and then displays the stream in the localVideo video tag. At the same time, I create a remoteStream with MediaStream(), which will be useful to us later.

Now, in the same functions folder, create a file called createButtonFunc.js.
The contents should be like this :-

const createButtonFunc = async () => {
    peerConnection = new RTCPeerConnection(configuration);

    localStream.getTracks().forEach(track => {
        peerConnection.addTrack(track, localStream)
    })

    // Code for collecting ICE candidates below
    const roomRef = firestore.collection("rooms").doc();
    const callerCandidatesCollection = roomRef.collection("callerCandidates");

    peerConnection.addEventListener("icecandidate", event => {
      if(!event.candidate){
       //  console.log("Got Final Candidate!");
        return;
      }
     //  console.log('Got candidate: ', event.candidate);
     callerCandidatesCollection.add(event.candidate.toJSON());
    })
    // Code for collecting ICE candidates above


     // Code for creating a room below
     const offer = await peerConnection.createOffer();
     await peerConnection.setLocalDescription(offer);

     const roomWithOffer = {
       'offer': {
         type: offer.type,
         sdp: offer.sdp,
       },
       roomId: roomRef.id
     };
     await roomRef.set(roomWithOffer);
     roomId = roomRef.id;
     console.log(roomId)
     // Code for creating a room above

     peerConnection.addEventListener("track", event => {
        // console.log('Got remote track:', event.streams[0]);
        event.streams[0].getTracks().forEach(track => {
          // console.log('Add a track to the remoteStream:', track);
          remoteStream.addTrack(track);
        })
       })

       // Listening for remote session description below
      let unsubscribe = roomRef.onSnapshot(async snapshot => {
        const data = snapshot.data();
        if(peerConnection.iceConnectionState !== "closed"){

          if(!peerConnection.currentRemoteDescription && data && data.answer){
            // console.log('Got remote description: ', data.answer);
          const rtcSessionDescription = new RTCSessionDescription(data.answer);
          await peerConnection.setRemoteDescription(rtcSessionDescription);
          }

        }
      })
       // Listening for remote session description above

       // Listen for remote ICE candidates below
       let unsubscribe2 = roomRef.collection('calleeCandidates').onSnapshot(snapshot => {
        snapshot.docChanges().forEach(async change => {
          if (change.type === 'added') {
            let data = change.doc.data();
            // console.log(`Got new remote ICE candidate: ${JSON.stringify(data)}`);
            await peerConnection.addIceCandidate(new RTCIceCandidate(data));
          }
        });
      });
      // Listen for remote ICE candidates above

      return () => {
          unsubscribe();
          unsubscribe2();
      }

}

In this file, first I create a new RTCPeerConnection with the configuration variable. Then I loop through localStream.getTracks() (remember, localStream is the stream we asked the user permission for), which returns an array of two tracks, i.e. audio and video, and pass each one to the peerConnection.addTrack() method. addTrack() accepts two parameters: a track and, optionally, the stream it belongs to. As the second parameter we pass localStream; if we didn't pass the stream object, the method would still work, but on the remote device's end we would have to group the incoming tracks into a new MediaStream() ourselves. So, to maintain consistency, it is recommended to pass the stream (see the sketch after this paragraph).
Next, we get a reference to a new document in the rooms collection, create a subcollection on it called callerCandidates, and add an icecandidate event listener, which only starts firing once setLocalDescription has been called on the peerConnection.
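
As an aside on that addTrack detail, here is how the remote side's track handler differs depending on whether the stream was passed. This is only a sketch that mirrors the track listener used in this post, not extra code you need to add:

// addTrack(track, localStream) was called with the stream, so the grouping
// is preserved and event.streams[0] already contains both tracks:
peerConnection.addEventListener("track", event => {
    event.streams[0].getTracks().forEach(track => remoteStream.addTrack(track));
});

// addTrack(track) was called without the stream, so event.streams is empty
// and we have to group the incoming tracks ourselves:
peerConnection.addEventListener("track", event => {
    remoteStream.addTrack(event.track);   // remoteStream = new MediaStream()
});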

Side Note: I keep writing the peerConnection variable, by which I mean the RTCPeerConnection that we created in the first place.

Moving ahead, in the icecandidate event listener, I add event.candidate.toJSON() to the callerCandidates collection; data is only added while event.candidate is not null (a null candidate means ICE gathering is finished, so we just return).

Next, I create the offer, call setLocalDescription with it, and also store the offer data in the roomRef document.

Next, I add another event listener for the peerConnection's track event, which only fires when a new track is added by the remote device's user; that track is then added to the remoteStream variable, which is referenced by remoteVideo.srcObject.

Moving further, I add an onSnapshot listener to roomRef, which fires every time something changes in the roomRef document. If roomRef contains the answer object, we set it as the remote description; once the connection is established and media starts flowing, the track listener fires and the remote tracks are added to remoteStream.

Okay, you might be thinking that with this we have got the remote user's stream, so there is no use for the last unsubscribe2 block. But that's not quite the case. The step above tells the peerConnection which answer and stream we got, but the peerConnection still needs to know where on the network to reach the remote peer. That is where the last block, listening for remote ICE candidates, comes in.

So, in the last step, we create a snapshot listener on calleeCandidates, a collection that is not created on our end; it only gets created when a user joins the room. For each candidate we receive, we call addIceCandidate so that the connection knows which remote endpoints it can connect to.

I know the above explanation might sound confusing to you. Even I didn't understand WebRTC when I first read its documentation. But once you move to the joinButtonFunc.js code, you will understand the links between them. You have only seen one side of the coin; now it's time for the other.

Now, create a new file called joinButtonFunc.js in the functions folder and its contents are as follows :-

const joinButtonFunc = async () => {
    roomId = prompt("Enter a Room Id");

    peerConnection = new RTCPeerConnection(configuration);

    const roomRef = firestore.collection("rooms").doc(roomId);
    const roomSnapshot = await roomRef.get();

    if(roomSnapshot.exists){
        localStream.getTracks().forEach(track => {
            peerConnection.addTrack(track, localStream)
        })

        // Code for collecting ICE candidates below
        const calleeCandidatesCollection = roomRef.collection("calleeCandidates");
        peerConnection.addEventListener("icecandidate", event => {
          if(!event.candidate){
            // console.log('Got final candidate!');
            return;
          }
          // console.log('Got candidate: ', event.candidate);
          calleeCandidatesCollection.add(event.candidate.toJSON());
        })
        // Code for collecting ICE candidates above

        peerConnection.addEventListener("track", event => {
            // console.log('Got remote track:', event.streams[0]);
            event.streams[0].getTracks().forEach(track => {
              // console.log('Add a track to the remoteStream:', track);
              remoteStream.addTrack(track);
            })
        })

        // Code for creating SDP answer below
        const offer = roomSnapshot.data().offer;
        // console.log('Got offer:', offer);
        await peerConnection.setRemoteDescription(new RTCSessionDescription(offer));
        const answer = await peerConnection.createAnswer();
        //   console.log('Created answer:', answer);
        await peerConnection.setLocalDescription(answer);

        const roomWithAnswer = {
          answer: {
            type: answer.type,
            sdp: answer.sdp,
          },
        };
        await roomRef.update(roomWithAnswer);
        // Code for creating SDP answer above

        // Listening for remote ICE candidates below
        let unsubscribe = roomRef.collection('callerCandidates').onSnapshot(snapshot => {
        snapshot.docChanges().forEach(async change => {
            if (change.type === 'added') {
            let data = change.doc.data();
            // console.log(`Got new remote ICE candidate: ${JSON.stringify(data)}`);
            await peerConnection.addIceCandidate(new RTCIceCandidate(data));
            }
        });
        });
        // Listening for remote ICE candidates above

        return () => unsubscribe()
    }
}

So, in this file we first prompt the user for a roomId to join the room, and then we create a new peerConnection with the same configuration.
Remember, in createButtonFunc.js we got a reference to the callerCandidates collection; in this case we get a reference to calleeCandidates, add the icecandidate event listener to it, and the same process follows.

The same process applies to the track event listener too.

Next, we get the offer that the calling user stored from roomSnapshot and call setRemoteDescription with it. Once the remote description is set from the offer, we can create the answer and update roomRef with it.

Then, in the final step, we listen to the callerCandidates collection and call addIceCandidate with each candidate it contains.

Remember, all of these events are interlinked, which is why the application functions properly.

Note that in createButtonFunc.js we add an icecandidate event listener, and in it the candidates are added to the callerCandidates collection.
And in joinButtonFunc.js, in the final unsubscribe listener step, we watch for changes in that callerCandidates collection and pass each new candidate to addIceCandidate.
The point I am trying to make is that all of these events are inter-connected.
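
To summarise how the signaling data flows through Firestore in this setup, the room document ends up looking roughly like this (the field and collection names are taken from the code above):

rooms/{roomId}
    offer:  { type: "offer",  sdp: "..." }     // written by the caller in createButtonFunc
    answer: { type: "answer", sdp: "..." }     // written by the callee in joinButtonFunc
    roomId: "<the document id>"
    callerCandidates/{autoId}                  // caller's ICE candidates, read by the callee
    calleeCandidates/{autoId}                  // callee's ICE candidates, read by the caller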

Okay, now on to our next file, for the hangupButton. Create it in the functions folder, call it hangupButtonFunc.js, and its contents are as follows :-

const hangupButtonFunc = async () => {
    const tracks = localVideo.srcObject.getTracks();
    tracks.forEach(track => track.stop());

    if(remoteStream){
        remoteStream.getTracks().forEach(track => track.stop())
    }

    if(peerConnection){
        peerConnection.close();
    }

    //Delete a room on hangup below
    if(roomId){
        const roomRef = firestore.collection("rooms").doc(roomId);
        const calleeCandidates = await roomRef.collection('calleeCandidates').get();
        calleeCandidates.forEach(async candidate => {
          await candidate.ref.delete();
        });
        const callerCandidates = await roomRef.collection('callerCandidates').get();
        callerCandidates.forEach(async candidate => {
          await candidate.ref.delete();
        });
        await roomRef.delete();
    }
    //Delete a room on hangup above

    openButton.disabled = false;
    createButton.disabled = true;
    joinButton.disabled = true;
    hangupButton.disabled = true;

    document.location.reload(true);
}

In this file, we are just grabbing the tracks from localStream and remoteStream and stopping them. We also close our peerConnection and delete the documents in Firebase that we created.

Now, there is only one final file remaining, called events.js; create this file outside of the functions folder.

openButton.addEventListener("click", openButtonFunc);
createButton.addEventListener("click", createButtonFunc);
joinButton.addEventListener("click", joinButtonFunc);
hangupButton.addEventListener("click", hangupButtonFunc);

Conclusion

I have shown you how to create an application with WebRTC and Firebase. To take the application to the next level, you can add screen-sharing functionality, which you can learn about from this.

This post is inspired by the WebRTC documentation.

Feel free to add any comment if you think I might have mentioned something wrong.

Thank You for your time to read this post.
Happy Coding:)
