In a multi-participant video call, it can be challenging for participants to engage in meaningful discussions without interrupting each other, and it can be equally difficult for developers to enhance their applications with real-time messaging that lets participants communicate during a call. In the push to create more immersive and engaging user experiences, features like chat have emerged as key components.
Chat allows users to send messages, share files, and exchange information without interrupting the audio or video feed. This is particularly useful when participants need to ask questions, provide feedback, or share relevant resources during a video call. This article aims to guide you through the process of integrating a chat feature into a React Native video call app using VideoSDK.
Goals
By the end of this article, we'll:
- Create a VideoSDK account and generate your VideoSDK auth token.
- Integrate the VideoSDK library and dependencies into your project.
- Implement core functionalities for video calls using VideoSDK.
- Enable the Chat feature in your app.
Getting Started with VideoSDK
To take advantage of the chat functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.
Create a VideoSDK Account
Go to your VideoSDK dashboard and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.
Generate your Auth Token
Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.
For a more visual understanding of the account creation and token generation process, consider referring to the provided tutorial.
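If you prefer to generate the auth token from your own backend instead of copying a temporary token from the dashboard, the snippet below is a minimal sketch assuming the jsonwebtoken npm package and illustrative environment variable names; confirm the exact payload fields against the VideoSDK authentication documentation.
// token-server.js (sketch) - generate a VideoSDK auth token on your backend.
// Assumes `npm install jsonwebtoken` and that VIDEOSDK_API_KEY / VIDEOSDK_SECRET_KEY
// (illustrative names) hold the key and secret from your dashboard.
const jwt = require("jsonwebtoken");

const generateVideoSDKToken = () => {
  const payload = {
    apikey: process.env.VIDEOSDK_API_KEY, // API key from the VideoSDK dashboard
    permissions: ["allow_join"], // permission to join meetings
  };

  // Sign with your secret key; the expiry window is up to you
  return jwt.sign(payload, process.env.VIDEOSDK_SECRET_KEY, {
    algorithm: "HS256",
    expiresIn: "24h",
  });
};

module.exports = { generateVideoSDKToken };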
Prerequisites and Setup
Make sure your development environment meets the following requirements:
- Node.js v12+
- NPM v6+ (comes installed with newer Node versions)
- Android Studio or Xcode installed
Install VideoSDK
It is necessary to set up VideoSDK within your project before going into the details of integrating the chat feature. You can install VideoSDK using NPM or Yarn, depending on your project's package manager.
- For NPM
npm install "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"
- For Yarn
yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"
Project Configuration
Before integrating the chat functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application.
Android Setup
- Add the required permissions in the AndroidManifest.xml file.
- Update your colors.xml file for internal dependencies.
- Link the necessary VideoSDK dependencies.
- Include the following line in your proguard-rules.pro file (optional: only if you are using Proguard).
- In your build.gradle file, update the minimum OS/SDK version to 23.
The corresponding configuration snippets are shown below, one per file.
<!-- android/app/src/main/AndroidManifest.xml -->
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.cool.app"
>
<!-- Give all the required permissions to app -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) -->
<uses-permission
android:name="android.permission.BLUETOOTH"
android:maxSdkVersion="30" />
<uses-permission
android:name="android.permission.BLUETOOTH_ADMIN"
android:maxSdkVersion="30" />
<!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)-->
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<uses-permission android:name="android.permission.WAKE_LOCK" />
<application>
<meta-data
android:name="live.videosdk.rnfgservice.notification_channel_name"
android:value="Meeting Notification"
/>
<meta-data
android:name="live.videosdk.rnfgservice.notification_channel_description"
android:value="Whenever meeting started notification will appear."
/>
<meta-data
android:name="live.videosdk.rnfgservice.notification_color"
android:resource="@color/red"
/>
<service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"></service>
<service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"></service>
</application>
</manifest>
<!-- android/app/src/main/res/values/colors.xml -->
<resources>
<item name="red" type="color">
#FC0303
</item>
<integer-array name="androidcolors">
<item>@color/red</item>
</integer-array>
</resources>
// android/app/build.gradle
dependencies {
implementation project(':rnwebrtc')
implementation project(':rnfgservice')
}
// android/settings.gradle
include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')
include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')
// android/app/src/main/java/<your-package>/MainApplication.java
import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  private final ReactNativeHost mReactNativeHost = new ReactNativeHost(this) {
    // ... rest of your existing ReactNativeHost implementation

    @Override
    protected List<ReactPackage> getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List<ReactPackage> packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:
      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());
      return packages;
    }
  };
}
# android/gradle.properties
# This one fixes a weird WebRTC runtime problem on some devices.
android.enableDexingArtifactTransform.desugaring=false
# android/app/proguard-rules.pro (optional: only if you are using Proguard)
-keep class org.webrtc.** { *; }
// android/build.gradle
buildscript {
ext {
minSdkVersion = 23
}
}
iOS Setup
IMPORTANT: Ensure that you are using CocoaPods version 1.10 or later.
1. Update CocoaPods by reinstalling the gem with the following command:
$ sudo gem install cocoapods
2. Manually link react-native-incall-manager (if it is not linked automatically).
Select Your_Xcode_Project/TARGETS/BuildSettings, and in Header Search Paths, add "$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager".
3. Change the path of react-native-webrtc in your Podfile:
pod 'react-native-webrtc', :path => '../node_modules/@videosdk.live/react-native-webrtc'
4. Change the version of your platform.
You need to change the platform field in the Podfile to 12.0 or above because react-native-webrtc doesn't support iOS versions earlier than 12.0. Update the line: platform :ios, '12.0'.
5. Install pods.
After updating the version, you need to install the pods by running the following command:
pod install
6. Add “libreact-native-webrtc.a” binary.
Add the "libreact-native-webrtc.a" binary to the "Link Binary With Libraries" section in the target of your main project folder.
7. Declare permissions in Info.plist:
Add the following lines to your Info.plist file, located at project folder/ios/projectname/Info.plist:
<key>NSCameraUsageDescription</key>
<string>Camera permission description</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone permission description</string>
Register Service
Register the VideoSDK service in your root index.js file to initialize the service.
import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";
register();
AppRegistry.registerComponent(appName, () => App);
Essential Steps to Implement the Video Calling Functionality
Step 1: Get started with api.js
Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the videosdk-rtc-api-server-examples or directly from the VideoSDK Dashboard for developers.
export const token = "<Generated-from-dashbaord>";
// API call to create meeting
export const createMeeting = async ({ token }) => {
const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
method: "POST",
headers: {
authorization: `${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify({}),
});
const { roomId } = await res.json();
return roomId;
};
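If you also want to verify a user-entered meeting ID before joining, you can add an optional validation helper alongside createMeeting. The sketch below assumes VideoSDK's room validation endpoint (v2/rooms/validate/:roomId); double-check the route and response shape in the VideoSDK API reference.
// Optional sketch: validate an existing meeting ID before joining.
export const validateMeeting = async ({ roomId, token }) => {
  const res = await fetch(`https://api.videosdk.live/v2/rooms/validate/${roomId}`, {
    method: "GET",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
  });
  // A successful response means the meeting ID exists and can be joined
  return res.ok;
};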
Step 2: Wireframe App.js with all the components
To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.
First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.
- MeetingProvider: This is the Context Provider. It accepts config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
- MeetingConsumer: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider's value prop changes.
- useMeeting: This is the meeting hook API. It includes all the information related to the meeting, such as join, leave, enable/disable mic or webcam, etc.
- useParticipant: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.
The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.
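To make this concrete, here is a minimal, standalone sketch (not part of the app you will build below) showing how event callbacks can be passed to the useMeeting hook to react to such changes; the log messages are purely illustrative.
// Sketch: listening to meeting events through useMeeting callbacks.
function MeetingEventLogger() {
  useMeeting({
    onMeetingJoined: () => {
      console.log("Local participant joined the meeting");
    },
    onMeetingLeft: () => {
      console.log("Local participant left the meeting");
    },
    onParticipantJoined: (participant) => {
      console.log("Participant joined:", participant.displayName);
    },
    onParticipantLeft: (participant) => {
      console.log("Participant left:", participant.displayName);
    },
  });
  // This component only listens for events, so it renders nothing
  return null;
}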
Begin by making a few changes to the code in the App.js file.
import React, { useState } from "react";
import {
SafeAreaView,
TouchableOpacity,
Text,
TextInput,
View,
FlatList,
} from "react-native";
import {
MeetingProvider,
useMeeting,
useParticipant,
MediaStream,
RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";
function JoinScreen(props) {
return null;
}
function ControlsContainer() {
return null;
}
function MeetingView() {
return null;
}
export default function App() {
const [meetingId, setMeetingId] = useState(null);
const getMeetingId = async (id) => {
const meetingId = id == null ? await createMeeting({ token }) : id;
setMeetingId(meetingId);
};
return meetingId ? (
<SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}>
<MeetingProvider
config={{
meetingId,
micEnabled: false,
webcamEnabled: true,
name: "Test User",
}}
token={token}
>
<MeetingView />
</MeetingProvider>
</SafeAreaView>
) : (
<JoinScreen getMeetingId={getMeetingId} />
);
}
Step 3: Implement Join Screen
The join screen will serve as a medium to either schedule a new meeting or join an existing one.
function JoinScreen(props) {
const [meetingVal, setMeetingVal] = useState("");
return (
<SafeAreaView
style={{
flex: 1,
backgroundColor: "#F6F6FF",
justifyContent: "center",
paddingHorizontal: 6 * 10,
}}
>
<TouchableOpacity
onPress={() => {
props.getMeetingId();
}}
style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
>
<Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
Create Meeting
</Text>
</TouchableOpacity>
<Text
style={{
alignSelf: "center",
fontSize: 22,
marginVertical: 16,
fontStyle: "italic",
color: "grey",
}}
>
---------- OR ----------
</Text>
<TextInput
value={meetingVal}
onChangeText={setMeetingVal}
placeholder={"XXXX-XXXX-XXXX"}
style={{
padding: 12,
borderWidth: 1,
borderRadius: 6,
fontStyle: "italic",
}}
/>
<TouchableOpacity
style={{
backgroundColor: "#1178F8",
padding: 12,
marginTop: 14,
borderRadius: 6,
}}
onPress={() => {
props.getMeetingId(meetingVal);
}}
>
<Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
Join Meeting
</Text>
</TouchableOpacity>
</SafeAreaView>
);
}
Step 4: Implement Controls
The next step is to create a ControlsContainer component to manage features such as joining or leaving a meeting and enabling or disabling the webcam/mic.
In this step, the useMeeting hook is utilized to acquire all the required methods, such as join(), leave(), toggleWebcam and toggleMic.
const Button = ({ onPress, buttonText, backgroundColor }) => {
return (
<TouchableOpacity
onPress={onPress}
style={{
backgroundColor: backgroundColor,
justifyContent: "center",
alignItems: "center",
padding: 12,
borderRadius: 4,
}}
>
<Text style={{ color: "white", fontSize: 12 }}>{buttonText}</Text>
</TouchableOpacity>
);
};
function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
return (
<View
style={{
padding: 24,
flexDirection: "row",
justifyContent: "space-between",
}}
>
<Button
onPress={() => {
join();
}}
buttonText={"Join"}
backgroundColor={"#1178F8"}
/>
<Button
onPress={() => {
toggleWebcam();
}}
buttonText={"Toggle Webcam"}
backgroundColor={"#1178F8"}
/>
<Button
onPress={() => {
toggleMic();
}}
buttonText={"Toggle Mic"}
backgroundColor={"#1178F8"}
/>
<Button
onPress={() => {
leave();
}}
buttonText={"Leave"}
backgroundColor={"#FF0000"}
/>
</View>
);
}
function ParticipantList() {
return null;
}
function MeetingView() {
const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});
return (
<View style={{ flex: 1 }}>
{meetingId ? (
<Text style={{ fontSize: 18, padding: 12 }}>
Meeting Id: {meetingId}
</Text>
) : null}
<ParticipantList />
<ControlsContainer
join={join}
leave={leave}
toggleWebcam={toggleWebcam}
toggleMic={toggleMic}
/>
</View>
);
}
Step 5: Render Participant List
After implementing the controls, the next step is to render the joined participants.
You can get all the joined participants from the useMeeting hook.
function ParticipantView() {
return null;
}
function ParticipantList({ participants }) {
return participants.length > 0 ? (
<FlatList
data={participants}
renderItem={({ item }) => {
return <ParticipantView participantId={item} />;
}}
/>
) : (
<View
style={{
flex: 1,
backgroundColor: "#F6F6FF",
justifyContent: "center",
alignItems: "center",
}}
>
<Text style={{ fontSize: 20 }}>Press Join button to enter meeting.</Text>
</View>
);
}
function MeetingView() {
// Get `participants` from useMeeting Hook
const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
const participantsArrId = [...participants.keys()];
return (
<View style={{ flex: 1 }}>
<ParticipantList participants={participantsArrId} />
<ControlsContainer
join={join}
leave={leave}
toggleWebcam={toggleWebcam}
toggleMic={toggleMic}
/>
</View>
);
}
Step 6: Handling Participant's Media
Before handling the participant's media, you need to understand a couple of concepts.
1. useParticipant Hook
The useParticipant hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It takes participantId as an argument.
const { webcamStream, webcamOn, displayName } = useParticipant(participantId);
2. MediaStream API
The MediaStream API is beneficial for adding a MediaTrack to the RTCView component, enabling the playback of audio or video.
<RTCView
streamURL={new MediaStream([webcamStream.track]).toURL()}
objectFit={"cover"}
style={{
height: 300,
marginVertical: 8,
marginHorizontal: 8,
}}
/>
Rendering Participant Media
function ParticipantView({ participantId }) {
const { webcamStream, webcamOn } = useParticipant(participantId);
return webcamOn && webcamStream ? (
<RTCView
streamURL={new MediaStream([webcamStream.track]).toURL()}
objectFit={"cover"}
style={{
height: 300,
marginVertical: 8,
marginHorizontal: 8,
}}
/>
) : (
<View
style={{
backgroundColor: "grey",
height: 300,
justifyContent: "center",
alignItems: "center",
}}
>
<Text style={{ fontSize: 16 }}>NO MEDIA</Text>
</View>
);
}
Congratulations! By following these steps, you have the core video calling functionality working in your application. Now, let's move forward and integrate the chat feature that builds a more immersive, engaging experience for your users!
Integrate Chat Feature
For communication or any kind of messaging between participants, VideoSDK provides the usePubSub hook, which utilizes the Publish-Subscribe mechanism. It can be employed to develop a wide variety of functionalities. For example, participants could use it to send chat messages to each other, share files or other media, or even trigger actions like muting or unmuting audio or video.
This guide focuses on using PubSub to implement chat functionality. If you are not familiar with the PubSub mechanism and the usePubSub hook, you can follow this guide.
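Before wiring PubSub into the app's UI, here is a minimal standalone sketch of the mechanism using a hypothetical NOTIFY topic: publish() sends a message to every subscriber of the topic, messages holds the message history, and onMessageReceived fires for each incoming message.
// Sketch: basic publish/subscribe on a hypothetical "NOTIFY" topic.
function NotifyButton() {
  const { publish, messages } = usePubSub("NOTIFY", {
    onMessageReceived: (message) => {
      console.log(`${message.senderName} says: ${message.message}`);
    },
  });

  return (
    <TouchableOpacity onPress={() => publish("Hello everyone!", { persist: false })}>
      <Text>Notify all participants ({messages.length} received so far)</Text>
    </TouchableOpacity>
  );
}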
Implementing Chat
The initial step in setting up a group chat involves selecting a topic to which all participants will publish and subscribe, facilitating the exchange of messages. In the following example, CHAT is used as the topic. Next, obtain the publish() method and the messages array from the usePubSub hook.
Step 1: Add another button in ControlsContainer to enable chat functionality and open the Chat Modal.
// MeetingView component to manage the meeting and chat functionality
function MeetingView() {
const [modalVisible, setModalVisible] = useState(false);
// Function to toggle the visibility of the modal
const toggleModal = () => {
setModalVisible(!modalVisible);
};
return (
<View style={{ flex: 1 }}>
{/* Other components for the meeting view */}
<ChatView modalVisible={modalVisible} toggleModal={toggleModal} />
<ControlsContainer
// other props, join, leave etc.
enableChat={() => {
toggleModal();
}}
/>
</View>
);
}
function ControlsContainer({ enableChat }) {
return (
// Container for control buttons
<View
style={{
padding: 24,
flexDirection: "row",
justifyContent: "space-between",
}}
>
{/* Button to enable chat */}
<Button
onPress={() => {
enableChat();
}}
buttonText={"Chat"}
backgroundColor={"#1178F8"}
/>
</View>
);
}
Step 2: Add a React Native Modal component to handle the chat functionality.
import {
TextInput,
Modal,
Pressable,
} from "react-native";
import {
usePubSub,
} from "@videosdk.live/react-native-sdk";
// ChatView component for displaying chat messages and input
function ChatView({ modalVisible, toggleModal }) {
// Destructure publish method from usePubSub hook
const { publish, messages } = usePubSub("CHAT");
// State to store the user typed message
const [message, setMessage] = useState("");
// Function to handle sending messages
const handleSendMessage = () => {
// Publish the message using the publish method
publish(message, { persist: true });
// Clear the message input after sending
setMessage("");
};
return (
<View
style={{
flex: 1,
justifyContent: "center",
alignItems: "center",
marginTop: 22,
}}
>
<Modal animationType="slide" visible={modalVisible}>
<SafeAreaView
style={{
flex: 1,
backgroundColor: "#050A0E",
justifyContent: "space-between",
}}
>
<Pressable
style={{
height: 40,
aspectRatio: 1,
backgroundColor: "#5568FE",
justifyContent: "center",
alignItems: "center",
borderRadius: 24,
marginTop: 12,
marginLeft: 12,
}}
onPress={toggleModal}
>
<Text style={{ fontWeight: "bold", fontSize: 24 }}>X</Text>
</Pressable>
<View>
{/* Render chat messages */}
{messages.map((message, index) => {
return (
<Text
key={index}
style={{
fontSize: 12,
color: "#FFFFFF",
marginVertical: 8,
marginHorizontal: 12,
}}
>
{message.senderName} says {message.message}
</Text>
);
})}
<View
style={{
paddingHorizontal: 12,
}}
>
{/* Render text input container */}
<TextInputContainer
message={message}
setMessage={setMessage}
sendMessage={handleSendMessage}
/>
</View>
</View>
</SafeAreaView>
</Modal>
</View>
);
}
Step 3: Implement the TextInputContainer component for inputting and sending messages.
// TextInputContainer component for inputting and sending messages
function TextInputContainer({ sendMessage, setMessage, message }) {
// Function to render the text input UI
const textInput = () => {
return (
<View
style={{
height: 40,
marginBottom: 14,
flexDirection: "row",
borderRadius: 10,
backgroundColor: "#404B53",
}}
>
<View
style={{
flexDirection: "row",
flex: 2,
justifyContent: "center",
alignItems: "center",
}}
>
{/* TextInput for typing messages */}
<TextInput
multiline
value={message}
placeholder={"Write your message"}
style={{
flex: 1,
color: "white",
marginLeft: 12,
margin: 4,
padding: 4,
}}
numberOfLines={2}
onChangeText={setMessage}
selectionColor={"white"}
placeholderTextColor={"#9FA0A7"}
/>
</View>
<View
style={{
justifyContent: "center",
alignItems: "center",
backgroundColor: message.length > 0 ? "#5568FE" : "transparent",
margin: 4,
padding: 4,
borderRadius: 8,
}}
>
{/* Button to send the message */}
<TouchableOpacity
onPress={sendMessage}
style={{
height: 40,
aspectRatio: 1,
justifyContent: "center",
alignItems: "center",
paddingVertical: 4,
}}
>
<Text>Send</Text>
</TouchableOpacity>
</View>
</View>
);
};
return <>{textInput()}</>;
}
Private Chat
In the following example, to convert the chat into a private conversation between two participants, you can set the sendOnly property. This property ensures that messages are only sent to the intended recipient, making the chat interaction exclusive and private between the two users involved.
import { SafeAreaView, TouchableOpacity, TextInput, Text } from "react-native";
function ChatView() {
// destructure publish method from usePubSub hook
const { publish, messages } = usePubSub("CHAT");
// State to store the user typed message
const [message, setMessage] = useState("");
const handleSendMessage = () => {
// Sending the Message using the publish method
// Pass the participantId of the participant to whom you want to send the message.
publish(message, { persist: true, sendOnly: ['XYZ'] });
// Clearing the message input
setMessage("");
};
//...
}
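Here, 'XYZ' is a placeholder for the recipient's participantId. As a small sketch (a hypothetical helper, not part of the SDK), you can derive the remote participant IDs from the participants map returned by the useMeeting hook and pass one of them to sendOnly:
// Sketch: collect remote participant IDs to use with `sendOnly`.
// Must be called inside a component rendered under MeetingProvider.
function useRemoteParticipantIds() {
  const { participants, localParticipant } = useMeeting({});

  // `participants` is a Map keyed by participantId; exclude the local participant if present
  return [...participants.keys()].filter((id) => id !== localParticipant?.id);
}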
Downloading Chat Messages
All the messages from PubSub published with persist: true can be downloaded as a .csv file. This file will be accessible in the VideoSDK dashboard and through the Sessions API.
Conclusion
Congratulations! You have successfully integrated the chat feature and unlocked the full potential of real-time communication, enabling users to engage in seamless conversations during video calls. Integrating a chat feature into a React Native video call app using VideoSDK can significantly enhance the user experience.
To unlock the full potential of VideoSDK and create easy-to-use video experiences, developers are encouraged to sign up with VideoSDK today and get 10,000 free minutes to take their video app to the next level!