Charalotte Yog

How To Build WebRTC Video Call with React Native?

React Native: React Native, often abbreviated as RN, is a widely used JavaScript-based mobile app framework. With React Native, developers can build natively rendered mobile apps for iOS and Android from a single codebase.
React Hooks: React 16.8 introduced Hooks as a way to use state and other React features, such as lifecycle behavior, without writing a class. With Hooks, developers can work with plain functions instead of constantly switching between functions, classes, higher-order components, and render props.

In this guide, we will be creating the following:

· A video calling application for Android + iOS
· WebRTC video chat implemented with the package “react-native-webrtc”
· WebSockets for signaling
· UI components from “react-native-paper”

Steps to Build WebRTC Video Call with React Native:

What is WebRTC?
WebRTC (Web Real-Time Communication) is an open-source project that enables the real-time transmission of audio, video, and data. It provides peer-to-peer communication between web browsers and mobile applications, letting users exchange media and arbitrary data directly without routing it through an intermediary server.
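Although media flows peer-to-peer, the two peers still need a signaling channel to exchange connection details before a call can start. In this guide, signaling happens over a plain WebSocket: the peers exchange small JSON messages whose type field (login, offer, answer, candidate, leave) drives the handlers you will see in the Call screen below. Roughly, the message shapes look like this (the offer/answer/candidate payloads are produced by WebRTC itself):

// Approximate shapes of the signaling messages used in this guide.
const exampleMessages = [
  {type: 'login', name: 'userA'},                                        // announce yourself to the server
  {type: 'offer', name: 'userB', offer: {type: 'offer', sdp: '...'}},    // call userB with an SDP offer
  {type: 'answer', name: 'userA', answer: {type: 'answer', sdp: '...'}}, // accept the call with an SDP answer
  {type: 'candidate', name: 'userA', candidate: {candidate: '...', sdpMid: '0', sdpMLineIndex: 0}}, // ICE route info
  {type: 'leave', name: 'userA'},                                        // hang up or reject
];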
Steps:

  1. First, set up a working development environment for React Native, i.e. install everything needed to build and run a React Native app. Follow the official guide: https://reactnative.dev/docs/environment-setup

  2. Once the demo application runs successfully, install some React Native libraries for UI and navigation.
    Add the package.json dependencies given below:

"dependencies": {
"@react-native-community/async-storage": "^1.10.0",
"@react-native-community/masked-view": "^0.1.10",
"@react-navigation/native": "^5.2.3",
"@react-navigation/stack": "^5.2.18",
"react": "16.11.0",
"react-native": "0.62.2",
"react-native-gesture-handler": "^1.6.1",
"react-native-incall-manager": "^3.2.7",
"react-native-paper": "^3.9.0",
"react-native-reanimated": "^1.8.0",
"react-native-safe-area-context": "^0.7.3",
"react-native-screens": "^2.7.0",
"react-native-vector-icons": "^6.6.0",
"react-native-webrtc": "^1.75.3" 
}
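After adding these dependencies, reinstall your node modules and rebuild the app. On iOS, remember to reinstall pods (cd ios && pod install) after adding native modules such as react-native-webrtc so that they are linked into the project.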

3. Build navigation using react-navigation, as listed in the project dependencies above.
App.js

import React from 'react';
import {NavigationContainer} from '@react-navigation/native';
import {createStackNavigator} from '@react-navigation/stack';

import LoginScreen from './screens/LoginScreen';
import CallScreen from './screens/CallScreen';
import {SafeAreaView} from 'react-native-safe-area-context';

const Stack = createStackNavigator();

const App = () => {
  return (
    <NavigationContainer>
    <Stack.Navigator>
        <Stack.Screen
        name="Login"
        component={LoginScreen}
        options={{headerShown: false}}
        />
        <Stack.Screen name="Call" component={CallScreen} />
    </Stack.Navigator>
    </NavigationContainer>
  );
};

export default App;


4. Import the Login and Call screens into the App component, then create the Login screen.
LoginScreen.js

import React, {useState} from 'react';
import {View, StyleSheet} from 'react-native';
import {Text} from 'react-native-paper';
import {TextInput} from 'react-native-paper';
import AsyncStorage from '@react-native-community/async-storage';
import {Button} from 'react-native-paper';

export default function LoginScreen(props) {
  const [userId, setUserId] = useState('');
  const [loading, setLoading] = useState(false);

  const onLogin = async () => {
    setLoading(true);
    try {
    await AsyncStorage.setItem('userId', userId);
    setLoading(false);
    props.navigation.push('Call');
    } catch (err) {
    console.log('Error', err);
    setLoading(false);
    }
  };

  return (
    <View style={styles.root}>
    <View style={styles.content}>
        <Text style={styles.heading}>Enter your id</Text>
        <TextInput
        label="Your  ID"
        onChangeText={text => setUserId(text)}
        mode="outlined"
        style={styles.input}
        />

        <Button
        mode="contained"
        onPress={onLogin}
        loading={loading}
        style={styles.btn}
        contentStyle={styles.btnContent}
        disabled={userId.length === 0}>
        Login
        </Button>
    </View>
    </View>
  );
}

const styles = StyleSheet.create({
  root: {
    backgroundColor: '#fff',
    flex: 1,
    // alignItems: 'center',
    justifyContent: 'center',
  },
  content: {
    // alignSelf: 'center',
    paddingHorizontal: 20,
    justifyContent: 'center',
  },
  heading: {
    fontSize: 18,
    marginBottom: 10,
    fontWeight: '600',
  },
  input: {
    height: 60,
    marginBottom: 10,
  },
  btn: {
    height: 60,
    alignItems: 'stretch',
    justifyContent: 'center',
    fontSize: 18,
  },
  btnContent: {
    alignItems: 'center',
    justifyContent: 'center',
    height: 60,
  },
});


In the above file, a unique user ID represents the current user and is what other connected users will dial to reach them. You may assign these IDs in whatever way suits your app at this stage; a simple generator is sketched below.
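For example, instead of asking the user to type an ID, you could generate and persist one on first launch. This helper is hypothetical (not part of the original tutorial) and reuses the same AsyncStorage package already listed in the dependencies:

// Hypothetical helper: create a short pseudo-random user ID on first launch
// and persist it so the device keeps the same ID across sessions.
import AsyncStorage from '@react-native-community/async-storage';

export async function getOrCreateUserId() {
  let id = await AsyncStorage.getItem('userId');
  if (!id) {
    id = Math.random().toString(36).slice(2, 8); // e.g. "k3x9q1" – fine for a demo, not for production
    await AsyncStorage.setItem('userId', id);
  }
  return id;
}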

THE MAIN CODE FOR IMPLEMENTING WEBRTC:

5. The Call Screen code (CallScreen.js):

import React, {useEffect, useState, useCallback} from 'react';
import {View, StyleSheet, Alert} from 'react-native';
import {Text} from 'react-native-paper';
import {Button} from 'react-native-paper';
import AsyncStorage from '@react-native-community/async-storage';
import {TextInput} from 'react-native-paper';

import {useFocusEffect} from '@react-navigation/native';

import InCallManager from 'react-native-incall-manager';

import {
  RTCPeerConnection,
  RTCIceCandidate,
  RTCSessionDescription,
  RTCView,
  MediaStream,
  MediaStreamTrack,
  mediaDevices,
  registerGlobals,
} from 'react-native-webrtc';

export default function CallScreen({navigation, ...props}) {
  let name;
  let connectedUser;
  const [userId, setUserId] = useState('');
  const [socketActive, setSocketActive] = useState(false);
  const [calling, setCalling] = useState(false);
  // Video streams
  const [localStream, setLocalStream] = useState({toURL: () => null});
  const [remoteStream, setRemoteStream] = useState({toURL: () => null});
  const [conn, setConn] = useState(new WebSocket('ws://3.20.188.26:8080'));
  const [yourConn, setYourConn] = useState(
    //change the config as you need
    new RTCPeerConnection({
    iceServers: [
        {
        urls: 'stun:stun.l.google.com:19302', 
        }, {
        urls: 'stun:stun1.l.google.com:19302',  
        }, {
        urls: 'stun:stun2.l.google.com:19302',  
        }

    ],
    }),
  );

  const [offer, setOffer] = useState(null);

  const [callToUsername, setCallToUsername] = useState(null);

  useFocusEffect(
    useCallback(() => {
    AsyncStorage.getItem('userId').then(id => {
        console.log(id);
        if (id) {
        setUserId(id);
        } else {
        setUserId('');
        navigation.push('Login');
        }
    });
    }, [userId]),
  );

  useEffect(() => {
    navigation.setOptions({
    title: 'Your ID - ' + userId,
    headerRight: () => (
        <Button mode="text" onPress={onLogout} style={{paddingRight: 10}}>
        Logout
        </Button>
    ),
    });
  }, [userId]);

  /**
   * Calling Stuff
   */

  useEffect(() => {
    if (socketActive && userId.length > 0) {
    try {
        InCallManager.start({media: 'audio'});
        InCallManager.setForceSpeakerphoneOn(true);
        InCallManager.setSpeakerphoneOn(true);
    } catch (err) {
        console.log('InApp Caller ---------------------->', err);
    }

    console.log(InCallManager);

    send({
        type: 'login',
        name: userId,
    });
    }
  }, [socketActive, userId]);

  const onLogin = () => {};

  useEffect(() => {
    /**
    *
    * Sockets Signalling
    */
    conn.onopen = () => {
    console.log('Connected to the signaling server');
    setSocketActive(true);
    };
    //when we got a message from a signaling server
    conn.onmessage = msg => {
    let data;
    if (msg.data === 'Hello world') {
        data = {};
    } else {
        data = JSON.parse(msg.data);
        console.log('Data --------------------->', data);
        switch (data.type) {
        case 'login':
            console.log('Login');
            break;
        //when somebody wants to call us
        case 'offer':
            handleOffer(data.offer, data.name);
            console.log('Offer');
            break;
        case 'answer':
            handleAnswer(data.answer);
            console.log('Answer');
            break;
        //when a remote peer sends an ice candidate to us
        case 'candidate':
            handleCandidate(data.candidate);
            console.log('Candidate');
            break;
        case 'leave':
            handleLeave();
            console.log('Leave');
            break;
        default:
            break;
        }
    }
    };
    conn.onerror = function(err) {
    console.log('Got error', err);
    };
    /**
    * Socket Signalling Ends
    */

    let isFront = false;
    mediaDevices.enumerateDevices().then(sourceInfos => {
    let videoSourceId;
    for (let i = 0; i < sourceInfos.length; i++) {
        const sourceInfo = sourceInfos[i];
        if (
        sourceInfo.kind == 'videoinput' &&
        sourceInfo.facing == (isFront ? 'front' : 'environment')
        ) {
        videoSourceId = sourceInfo.deviceId;
        }
    }
    mediaDevices
        .getUserMedia({
        audio: true,
          video: {
            mandatory: {
            minWidth: 500, // Provide your own width, height and frame rate here
            minHeight: 300,
            minFrameRate: 30,
            },
            facingMode: isFront ? 'user' : 'environment',
            optional: videoSourceId ? [{sourceId: videoSourceId}] : [],
        },
        })
        .then(stream => {
        // Got stream!
        setLocalStream(stream);

        // setup stream listening
        yourConn.addStream(stream);
        })
        .catch(error => {
        // Log error
        });
    });

    yourConn.onaddstream = event => {
    console.log('On Add Stream', event);
    setRemoteStream(event.stream);
    };

    // Setup ice handling
    yourConn.onicecandidate = event => {
    if (event.candidate) {
        send({
        type: 'candidate',
        candidate: event.candidate,
        });
    }
    };
  }, []);

  const send = message => {
    //attach the other peer username to our messages
    if (connectedUser) {
    message.name = connectedUser;
    console.log('Connected user in send ----------', message);
    }

    conn.send(JSON.stringify(message));
  };

  const onCall = () => {
    setCalling(true);

    connectedUser = callToUsername;
    console.log('Calling to', callToUsername);
    // create an offer

    yourConn.createOffer().then(offer => {
      yourConn.setLocalDescription(offer).then(() => {
        console.log('Sending Offer');
        console.log(offer);
        send({
        type: 'offer',
        offer: offer,
        });
        // Send pc.localDescription to peer
    });
    });
  };

  //when somebody sends us an offer
  const handleOffer = async (offer, name) => {
    console.log(name + ' is calling you.');

    console.log('Accepting Call===========>', offer);
    connectedUser = name;

    try {
    await yourConn.setRemoteDescription(new RTCSessionDescription(offer));

    const answer = await yourConn.createAnswer();

    await yourConn.setLocalDescription(answer);
    send({
        type: 'answer',
        answer: answer,
    });
    } catch (err) {
    console.log('Offer Error', err);
    }
  };

  //when we got an answer from a remote user
  const handleAnswer = answer => {
    yourConn.setRemoteDescription(new RTCSessionDescription(answer));
  };

  //when we got an ice candidate from a remote user
  const handleCandidate = candidate => {
    setCalling(false);
    console.log('Candidate ----------------->', candidate);
    yourConn.addIceCandidate(new RTCIceCandidate(candidate));
  };

  //hang up
  const hangUp = () => {
    send({
    type: 'leave',
    });

    handleLeave();
  };

  const handleLeave = () => {
    connectedUser = null;
    setRemoteStream({toURL: () => null});

    yourConn.close();
    // yourConn.onicecandidate = null;
    // yourConn.onaddstream = null;
  };

  const onLogout = () => {
    // hangUp();

    AsyncStorage.removeItem('userId').then(res => {
    navigation.push('Login');
    });
  };

  const acceptCall = async () => {
    console.log('Accepting Call===========>', offer);
    connectedUser = offer.name;

    try {
    await yourConn.setRemoteDescription(new RTCSessionDescription(offer));

    const answer = await yourConn.createAnswer();

    await yourConn.setLocalDescription(answer);

    send({
        type: 'answer',
        answer: answer,
    });
    } catch (err) {
    console.log('Offer Error', err);
    }
  };
  const rejectCall = async () => {
    send({
    type: 'leave',
    });
    setOffer(null);

    handleLeave();
  };

  /**
   * Calling Stuff Ends
   */

  return (
    <View style={styles.root}>
    <View style={styles.inputField}>
        <TextInput
        label="Enter Friends Id"
        mode="outlined"
        style={{marginBottom: 7}}
        onChangeText={text => setCallToUsername(text)}
        />
        <Button
        mode="contained"
        onPress={onCall}
        loading={calling}
        //   style={styles.btn}
        contentStyle={styles.btnContent}
        disabled={!(socketActive && userId.length > 0)}>
        Call
        </Button>
    </View>

    <View style={styles.videoContainer}>
        <View style={[styles.videos, styles.localVideos]}>
        <Text>Your Video</Text>
        <RTCView streamURL={localStream.toURL()} style={styles.localVideo} />
        </View>
        <View style={[styles.videos, styles.remoteVideos]}>
        <Text>Friends Video</Text>
        <RTCView
            streamURL={remoteStream.toURL()}
            style={styles.remoteVideo}
        />
        </View>
    </View>
    </View>
  );
}

const styles = StyleSheet.create({
  root: {
    backgroundColor: '#fff',
    flex: 1,
    padding: 20,
  },
  inputField: {
    marginBottom: 10,
    flexDirection: 'column',
  },
  videoContainer: {
    flex: 1,
    minHeight: 450,
  },
  videos: {
    width: '100%',
    flex: 1,
    position: 'relative',
    overflow: 'hidden',

    borderRadius: 6,
  },
  localVideos: {
    height: 100,
    marginBottom: 10,
  },
  remoteVideos: {
    height: 400,
  },
  localVideo: {
    backgroundColor: '#f2f2f2',
    height: '100%',
    width: '100%',
  },
  remoteVideo: {
    backgroundColor: '#f2f2f2',
    height: '100%',
    width: '100%',
  },
});


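One thing worth noting about the code above: handleOffer answers an incoming offer immediately, while acceptCall and rejectCall are defined but never wired up. If you would rather prompt the callee first, a possible adjustment (not shown here) is to store the incoming offer with setOffer({...data.offer, name: data.name}) in the 'offer' case instead of calling handleOffer, and to conditionally render two Buttons bound to acceptCall and rejectCall while offer is non-null.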

The Main Parts to Understand in the Above Code:

userId: the user who is currently logged in.
connectedUser: when you call another user by their ID, this variable is assigned that user's ID.
Video/Audio Streams:
localStream: the local user's video stream, captured from the device camera.
remoteStream: once a call is connected, the other user's video stream is assigned to this variable.
Please note that to access the camera and microphone you need to set some permissions in the Android and iOS projects (a runtime-permission sketch follows this list). Check the permission docs here:
Android — https://github.com/react-native-webrtc/react-native-webrtc/blob/master/Documentation/AndroidInstallation.md
iOS — https://github.com/react-native-webrtc/react-native-webrtc/blob/master/Documentation/iOSInstallation.md
conn: the WebSocket connection to the signaling server.
yourConn: the RTCPeerConnection, used for setting local/remote descriptions and creating offers and answers.
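Beyond the manifest and Info.plist entries described in the links above, Android 6+ also requires the camera and microphone permissions to be granted at runtime. This is not part of the original tutorial code, but a minimal sketch using React Native's built-in PermissionsAndroid API could look like this:

// Minimal runtime-permission request for Android. iOS prompts automatically
// once the Info.plist usage descriptions are in place.
import {PermissionsAndroid, Platform} from 'react-native';

export async function requestMediaPermissions() {
  if (Platform.OS !== 'android') {
    return true;
  }
  const result = await PermissionsAndroid.requestMultiple([
    PermissionsAndroid.PERMISSIONS.CAMERA,
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
  ]);
  return (
    result[PermissionsAndroid.PERMISSIONS.CAMERA] === PermissionsAndroid.RESULTS.GRANTED &&
    result[PermissionsAndroid.PERMISSIONS.RECORD_AUDIO] === PermissionsAndroid.RESULTS.GRANTED
  );
}

You could call this, for example, from the Call screen's mount effect before invoking mediaDevices.getUserMedia.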

Socket Events:

onmessage: fires when a message is received
Message Types & Handlers:
Offer: the first step of connecting two users. The caller creates an offer and sends it; the callee receives it and handleOffer processes it.
Candidate: an ICE candidate, i.e. a possible network route discovered by one peer. Each peer sends its candidates to the other, and handleCandidate adds the incoming ones to the RTCPeerConnection so the media path can be established.
Answer: on receiving an offer, the callee sets it as the remote description, creates an answer, sets that as its local description, and sends the answer back (it could also reject the offer or store it to handle later).
Handling Answer: when the answer arrives, the caller sets it as the remote description; the call is now accepted and the other peer's video becomes visible.
Leave: sent when hanging up a connected call or rejecting an incoming offer; handleLeave resets the call variables and closes the RTCPeerConnection.
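All of these messages are simply relayed between the two peers by the signaling server; the Call screen connects to ws://3.20.188.26:8080, but the server itself is not part of this guide. As an illustration only, a minimal relay built with the Node.js ws package (an assumption here — any server that forwards these JSON messages to the peer named in the name field will work) might look like this:

// Minimal WebSocket signaling relay (Node.js, using the `ws` package).
// It tracks connected users by name and forwards offer/answer/candidate/leave
// messages to the addressed peer. Illustration only: no authentication and
// only basic error handling.
const WebSocket = require('ws');

const wss = new WebSocket.Server({port: 8080});
const users = {}; // name -> socket

wss.on('connection', ws => {
  ws.on('message', raw => {
    let data;
    try {
      data = JSON.parse(raw);
    } catch (err) {
      return; // ignore malformed messages
    }
    if (data.type === 'login') {
      users[data.name] = ws;
      ws.userName = data.name;
      ws.send(JSON.stringify({type: 'login', success: true}));
      return;
    }
    // Relay everything else to the addressed peer, tagging it with the sender's name.
    const peer = users[data.name];
    if (peer) {
      peer.send(JSON.stringify({...data, name: ws.userName}));
    }
  });
  ws.on('close', () => {
    if (ws.userName) {
      delete users[ws.userName];
    }
  });
});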
The Process:
· Login: send a login message to the socket with the current user's ID.
· Local Stream: get the video stream from the local camera and add it to the RTCPeerConnection.
· Add an event listener for ICE candidate updates so that new candidates are sent to the other peer.
· The onaddstream listener (the stream-based counterpart of ontrack in the react-native-webrtc version used here) fires when the remote user's stream arrives; we then display it in an RTCView.
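Newer releases of react-native-webrtc drop the stream-based addStream/onaddstream API in favor of the standard track-based one. If you upgrade the library, the equivalent wiring would look roughly like the following (an assumption — check the documentation of the version you actually install):

// Track-based equivalent of addStream/onaddstream, assuming a newer
// react-native-webrtc that exposes addTrack/ontrack (check your version).
stream.getTracks().forEach(track => {
  yourConn.addTrack(track, stream); // replaces yourConn.addStream(stream)
});

yourConn.ontrack = event => {
  // replaces yourConn.onaddstream; event.streams[0] is the remote MediaStream
  setRemoteStream(event.streams[0]);
};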

Takeaway:

Now that you know how to build a WebRTC video chat feature with React Native, you can set the ball rolling! However, if you find it hard, a simpler route is to opt for MirrorFly's programmable video call APIs. With MirrorFly's API implementation, developers can concentrate on other parts of app development rather than building everything from scratch, not just the React Native or WebRTC pieces. To speak with an expert or get a live demo, follow this link: https://www.mirrorfly.com/webrtc-video-chat.php
