Introduction
Integrating image capture into a React JS video call app enhances user experience and functionality. With the Image Capturer API from VideoSDK, you can effortlessly empower your app users to capture high-quality images during video calls, adding a dynamic dimension to their interactions.
Implementing this feature is seamless within your React JS application. By following the provided documentation and integrating the Image Capturer component, users can easily capture snapshots during their video conversations with just a click. Whether it's for preserving memorable moments or sharing essential information visually, this functionality enriches the overall user experience.
Benefits of Image Capture:
- Enhanced Communication: Image Capture enables users to express themselves more vividly during video calls, fostering richer communication.
- Memorable Moments: Users can capture memorable moments during video conversations, preserving them as images for future reference or sharing.
- Visual Information Sharing: Image snapshots allow users to convey complex information visually, making it easier to share ideas, documents, or diagrams.
- Increased Engagement: The ability to capture images adds interactivity to video calls, keeping participants engaged and attentive.
- Convenience: With a simple click, users can capture images without leaving the video call interface, ensuring a seamless experience.
Use Cases of Image Capture:
- Education: Students can capture whiteboard content or diagrams shared during online classes for later review.
- Business Meetings: Participants can capture key points discussed in meetings or presentations, ensuring clarity and accountability.
- Remote Collaboration: Teams working remotely can capture design mockups, charts, or code snippets for collaborative brainstorming sessions.
- Personal Communication: Friends and family members can capture fun moments or important information shared during video calls, preserving memories.
Getting Started with VideoSDK
To take advantage of the image capture functionality, you must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.
Create a VideoSDK Account
Go to your VideoSDK dashboard and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.
Generate your Auth Token
Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.
For a more visual understanding of the account creation and token generation process, consider referring to the provided tutorial.
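If you prefer to generate temporary tokens from your own backend instead of copying one from the dashboard, the following is a minimal sketch. It assumes a Node.js server with the jsonwebtoken package and the API key and secret from your VideoSDK dashboard; the payload fields follow VideoSDK's documented token format, but verify them against the current docs before relying on this.
// token-server.js (illustrative sketch, not part of the quickstart)
const jwt = require("jsonwebtoken");
// Assumed environment variables holding your dashboard credentials
const API_KEY = process.env.VIDEOSDK_API_KEY;
const API_SECRET = process.env.VIDEOSDK_API_SECRET; // keep this server-side only
function generateAuthToken() {
  // Payload fields as described in VideoSDK's token documentation
  const payload = {
    apikey: API_KEY,
    permissions: ["allow_join"], // e.g. ["allow_join", "allow_mod"]
  };
  return jwt.sign(payload, API_SECRET, {
    algorithm: "HS256",
    expiresIn: "2h",
  });
}
console.log(generateAuthToken());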
Prerequisites and Setup
Before proceeding, ensure that your development environment meets the following requirements:
- VideoSDK Developer Account (Don't have one? Follow the VideoSDK Dashboard to create it.)
- Basic understanding of React.
- React VideoSDK
- Make sure Node and NPM are installed on your device.
- Basic understanding of Hooks (useState, useRef, useEffect)
- React Context API (optional)
Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for the Quickstart here.
Create a new React App using the below command.
$ npx create-react-app videosdk-rtc-react-app
Install VideoSDK
It is necessary to set up VideoSDK within your project before going into the details of integrating the Image Capture feature. Install VideoSDK using NPM or Yarn, depending on the needs of your project.
- For NPM
$ npm install "@videosdk.live/react-sdk"
//For the Participants Video
$ npm install "react-player"
- For Yarn
$ yarn add "@videosdk.live/react-sdk"
//For the Participants Video
$ yarn add "react-player"
You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.
App Architecture
The App will contain a MeetingView component, which includes a ParticipantView component that renders the participant's name, video, audio, etc. It will also have a Controls component that allows the user to perform operations like leave and toggle media.
You will be working on the following files:
- API.js: Responsible for handling API calls such as generating unique meetingId and token
- App.js: Responsible for rendering MeetingView and joining the meeting.
Essential Steps to Implement Video Calling Functionality
To add video capability to your React application, you must first complete a sequence of prerequisites.
Step 1: Get started with API.js
Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the videosdk-rtc-api-server-examples or directly from the VideoSDK Dashboard for developers.
//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "<Generated-from-dashboard>";
// API call to create a meeting
export const createMeeting = async ({ token }) => {
const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
method: "POST",
headers: {
authorization: `${authToken}`,
"Content-Type": "application/json",
},
body: JSON.stringify({}),
});
//Destructuring the roomId from the response
const { roomId } = await res.json();
return roomId;
};
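Alongside createMeeting, it can be handy to validate a user-supplied meeting id before joining. The sketch below assumes the rooms validate endpoint used in VideoSDK's quickstart samples (GET /v2/rooms/validate/:roomId); treat the exact route and response shape as an assumption and confirm it against the API reference.
// Optional helper (assumption: validate endpoint as used in VideoSDK's quickstart samples)
export const validateMeeting = async ({ roomId, token }) => {
  const res = await fetch(`https://api.videosdk.live/v2/rooms/validate/${roomId}`, {
    method: "GET",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
  });
  // If the room exists, the response is expected to echo back its roomId
  if (res.status === 200) {
    const { roomId: validatedRoomId } = await res.json();
    return validatedRoomId === roomId;
  }
  return false;
};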
Step 2: Wireframe App.js with all the components
To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.
First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.
- MeetingProvider: This is the Context Provider. It accepts value config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
- MeetingConsumer: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider's value prop changes.
- useMeeting: This is the meeting hook API. It includes all the information related to meetings, such as join/leave, enable/disable the mic or webcam, etc.
- useParticipant: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.
The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.
Begin by making a few changes to the code in the App.js file.
import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
MeetingProvider,
MeetingConsumer,
useMeeting,
useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";
function JoinScreen({ getMeetingAndToken }) {
return null;
}
function ParticipantView(props) {
return null;
}
function Controls(props) {
return null;
}
function MeetingView(props) {
return null;
}
function App() {
const [meetingId, setMeetingId] = useState(null);
//Getting the meeting id by calling the api we just wrote
const getMeetingAndToken = async (id) => {
const meetingId =
id == null ? await createMeeting({ token: authToken }) : id;
setMeetingId(meetingId);
};
//This will set Meeting Id to null when meeting is left or ended
const onMeetingLeave = () => {
setMeetingId(null);
};
return authToken && meetingId ? (
<MeetingProvider
config={{
meetingId,
micEnabled: true,
webcamEnabled: true,
name: "C.V. Raman",
}}
token={authToken}
>
<MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} />
</MeetingProvider>
) : (
<JoinScreen getMeetingAndToken={getMeetingAndToken} />
);
}
export default App;
Step 3: Implement Join Screen
The join screen will serve as a medium to either schedule a new meeting or join an existing one.
function JoinScreen({ getMeetingAndToken }) {
const [meetingId, setMeetingId] = useState(null);
const onClick = async () => {
await getMeetingAndToken(meetingId);
};
return (
<div>
<input
type="text"
placeholder="Enter Meeting Id"
onChange={(e) => {
setMeetingId(e.target.value);
}}
/>
<button onClick={onClick}>Join</button>
{" or "}
<button onClick={onClick}>Create Meeting</button>
</div>
);
}
Output
Step 4: Implement MeetingView and Controls
The next step is to create the MeetingView and Controls components to manage features such as join, leave, mute, and unmute.
function MeetingView(props) {
const [joined, setJoined] = useState(null);
//Get the method which will be used to join the meeting.
//We will also get the participants list to display all participants
const { join, participants } = useMeeting({
//callback for when meeting is joined successfully
onMeetingJoined: () => {
setJoined("JOINED");
},
//callback for when meeting is left
onMeetingLeft: () => {
props.onMeetingLeave();
},
});
const joinMeeting = () => {
setJoined("JOINING");
join();
};
return (
<div className="container">
<h3>Meeting Id: {props.meetingId}</h3>
{joined && joined == "JOINED" ? (
<div>
<Controls />
{/* For rendering all the participants in the meeting */}
{[...participants.keys()].map((participantId) => (
<ParticipantView
participantId={participantId}
key={participantId}
/>
))}
</div>
) : joined && joined == "JOINING" ? (
<p>Joining the meeting...</p>
) : (
<button onClick={joinMeeting}>Join</button>
)}
</div>
);
}
function Controls() {
const { leave, toggleMic, toggleWebcam } = useMeeting();
return (
<div>
<button onClick={() => leave()}>Leave</button>
<button onClick={() => toggleMic()}>toggleMic</button>
<button onClick={() => toggleWebcam()}>toggleWebcam</button>
</div>
);
}
Output of Controls Component
Step 5: Implement Participant View
Before implementing the participant view, you need to understand a couple of concepts.
5.1 Forwarding Ref for mic and camera
The useRef hook is responsible for referencing the audio and video components. It will be used to play and stop the audio and video of the participant.
const webcamRef = useRef(null);
const micRef = useRef(null);
5.2 useParticipant Hook
The useParticipant hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It takes participantId as an argument.
const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
props.participantId
);
5.3 MediaStream API
The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.
const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);
webcamRef.current.srcObject = mediaStream;
webcamRef.current
.play()
.catch((error) => console.error("videoElem.current.play() failed", error));
5.4 Implement ParticipantView
Now you can use both of the hooks and the MediaStream API to create the ParticipantView component.
function ParticipantView(props) {
const micRef = useRef(null);
const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
useParticipant(props.participantId);
const videoStream = useMemo(() => {
if (webcamOn && webcamStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);
return mediaStream;
}
}, [webcamStream, webcamOn]);
useEffect(() => {
if (micRef.current) {
if (micOn && micStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(micStream.track);
micRef.current.srcObject = mediaStream;
micRef.current
.play()
.catch((error) =>
console.error("videoElem.current.play() failed", error)
);
} else {
micRef.current.srcObject = null;
}
}
}, [micStream, micOn]);
return (
<div>
<p>
Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
{micOn ? "ON" : "OFF"}
</p>
<audio ref={micRef} autoPlay playsInline muted={isLocal} />
{webcamOn && (
<ReactPlayer
//
playsinline // extremely crucial prop
pip={false}
light={false}
controls={false}
muted={true}
playing={true}
//
url={videoStream}
//
height={"300px"}
width={"300px"}
onError={(err) => {
console.log(err, "participant video error");
}}
/>
)}
</div>
);
}
Integrate Image Capture Feature
This capability proves particularly valuable in Video KYC scenarios, where users can hold up their identity documents on camera and an image can be captured for verification.
NOTE: The captureImage() function is supported from version 0.0.79 onward. The height and width parameters of captureImage() are optional from version 0.0.81 onward.
captureImage()
- By using the captureImage() function of the useParticipant hook, you can capture an image of a local participant from their video stream.
- You have the option to specify the desired height and width in the captureImage() function; however, these parameters are optional. If not provided, the VideoSDK will automatically use the dimensions of the local participant's webcamStream.
- The captureImage() function returns the image in the form of a base64 string.
import { useMeeting, useParticipant } from "@videosdk.live/react-sdk";
const { localParticipant } = useMeeting();
const { webcamStream, webcamOn, captureImage } = useParticipant(
localParticipant.id
);
async function imageCapture() {
if (webcamOn && webcamStream) {
const base64 = await captureImage({ height: 400, width: 400 }); // captureImage will return base64 string
console.log("base64", base64);
} else {
console.error("Camera must be on to capture an image");
}
}
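To make the snippet above usable from the UI, you could wire imageCapture() to a button and, for example, download the result as a JPEG. The sketch below uses only standard browser APIs alongside the VideoSDK hooks already shown; the CaptureButton component and downloadBase64Image helper names are illustrative, not part of the VideoSDK API.
// CaptureButton.js (illustrative sketch)
import { useMeeting, useParticipant } from "@videosdk.live/react-sdk";
// Turn the base64 string returned by captureImage() into a downloadable JPEG
function downloadBase64Image(base64, fileName = "capture.jpeg") {
  const link = document.createElement("a");
  link.href = `data:image/jpeg;base64,${base64}`;
  link.download = fileName;
  link.click();
}
function CaptureButton() {
  const { localParticipant } = useMeeting();
  const { webcamStream, webcamOn, captureImage } = useParticipant(
    localParticipant.id
  );
  async function handleCapture() {
    if (webcamOn && webcamStream) {
      const base64 = await captureImage({ height: 400, width: 400 });
      downloadBase64Image(base64);
    } else {
      console.error("Camera must be on to capture an image");
    }
  }
  return <button onClick={handleCapture}>Capture Image</button>;
}
export default CaptureButton;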
NOTE: You can only capture an image of the local participant. If you call the captureImage() function on a remote participant, you will receive an error. To capture an image of a remote participant, refer to the steps below.
How to capture an image of a remote participant?
- Before proceeding, it's crucial to understand VideoSDK's temporary file storage system and the underlying pubsub mechanism.
- Here's a breakdown of the steps, using the names Participant A and Participant B for clarity:
Step 1: Initiate Image Capture Request
- In this step, you first have to send a request to Participant B, whose image you want to capture, using PubSub.
- To do that, create a PubSub topic called IMAGE_CAPTURE in the ParticipantView component.
- Here, you will be using the sendOnly property of the publish() method, so the request will be sent to that participant only.
import {usePubSub,useParticipant} from '@videosdk.live/react-sdk';
function ParticipantView({ participantId }) {
// create pubsub topic to send Request
const { publish } = usePubSub('IMAGE_CAPTURE');
const { isLocal } = useParticipant(participantId);
// send Request to participant
function sendRequest() {
// Pass the participantId of the participant whose image you want to capture
// Here, it will be Participant B's id, as you want to capture the image of Participant B
publish("Sending request to capture image", { persist: false, sendOnly: [participantId] });
};
return (
<>
{/* other components */}
<button
style={{
position: 'absolute', backgroundColor: "#00000066", top: 10 , left:10
}}
onClick={async () => {
if (!isLocal) {
sendRequest();
}
}}
>
Capture Image
</button>
</>
);
}
ParticipantView.js
Step 2: Capture and Upload File
- To capture an image from the remote participant [Participant B], you have to create the CaptureImageListner component. When a participant receives an image capture request, this component uses the captureImage() function of the useParticipant hook to capture the image.
import { useFile, useParticipant, usePubSub } from '@videosdk.live/react-sdk';
const CaptureImageListner = ({ localParticipantId }) => {
const { captureImage } = useParticipant(localParticipantId);
// subscribe to receive request
usePubSub('IMAGE_CAPTURE', {
onMessageReceived: (message) => {
_handleOnImageCaptureMessageReceived(message);
},
});
const _handleOnImageCaptureMessageReceived = (message) => {
try {
if (message.senderId !== localParticipantId) {
// capture and store image when message received
captureAndStoreImage({ senderId: message.senderId });
}
} catch (err) {
console.log("error on image capture", err);
}
};
async function captureAndStoreImage({ senderId }) {
// capture image
const base64Data = await captureImage({height:400,width:400});
console.log('base64Data',base64Data);
}
return <></>;
};
export default CaptureImageListner;
CaptureImageListner.js
- The captured image is then stored in VideoSDK's temporary file storage system using the uploadBase64File() function of the useFile hook. This operation returns a unique fileUrl of the stored image.
const CaptureImageListner = ({ localParticipantId }) => {
const { uploadBase64File } = useFile();
async function captureAndStoreImage({ senderId }) {
// capture image
const base64Data = await captureImage({ height: 400, width: 400 });
const token = "<VIDEOSDK_TOKEN>";
const fileName = "myCapture.jpeg"; // specify a name for image file with extension
// upload image to videosdk storage system
const fileUrl = await uploadBase64File({ base64Data, token, fileName });
console.log("fileUrl", fileUrl);
}
//...
};
CaptureImageListner.js
- Next, the fileUrl is sent back to the participant who initiated the request, using the IMAGE_TRANSFER topic.
const CaptureImageListner = ({ localParticipantId }) => {
//...
// publish image Transfer
const { publish: imageTransferPublish } = usePubSub("IMAGE_TRANSFER");
async function captureAndStoreImage({ senderId }) {
//...
const fileUrl = await uploadBase64File({ base64Data, token, fileName });
imageTransferPublish(fileUrl, { persist: false, sendOnly: [senderId] });
}
//...
};
- Then the CaptureImageListner component has to be rendered within the MeetingView component, as shown in the MeetingView.js snippet at the end of Step 3.
Step 3: Fetch and Display Image
- To display a captured image, the ShowImage component is used. Here's how it works:
- Within ShowImage, you need to subscribe to the IMAGE_TRANSFER topic to receive the fileUrl associated with the captured image. Once obtained, leverage the fetchBase64File() function from the useFile hook to retrieve the file in base64 format from VideoSDK's temporary storage.
import {
usePubSub,
useMeeting,
useFile
} from '@videosdk.live/react-sdk';
import { useState } from "react";
function ShowImage() {
const mMeeting = useMeeting();
const { fetchBase64File } = useFile();
const topicTransfer = "IMAGE_TRANSFER";
const [imageSrc, setImageSrc] = useState(null);
const [open, setOpen] = useState(false);
usePubSub(topicTransfer, {
onMessageReceived: (message) => {
if (message.senderId !== mMeeting.localParticipant.id) {
fetchFile({ url: message.message }); // pass fileUrl to fetch the file
}
}
});
async function fetchFile({ url }) {
const token = "<VIDEOSDK_TOKEN>";
const base64 = await fetchBase64File({ url, token });
console.log("base64",base64); // here is your image in the form of base64
setImageSrc(base64);
setOpen(true);
}
}
ShowImage.js
- With the base64 data in hand, you can now display the image in a modal. This image preview is integrated into the MeetingView component.
import { Dialog, Transition } from "@headlessui/react";
import { Fragment } from "react";
function ShowImage() {
//...
return (
<>
{imageSrc && (
<Transition appear show={open} as={Fragment}>
<Dialog as="div" className="relative z-10" onClose={() => {}}>
<Transition.Child
as={Fragment}
enter="ease-out duration-300"
enterFrom="opacity-0"
enterTo="opacity-100"
leave="ease-in duration-200"
leaveFrom="opacity-100"
leaveTo="opacity-0"
>
<div className="fixed inset-0 bg-black/25" />
</Transition.Child>
<div className="fixed inset-0 overflow-y-auto">
<div className="flex min-h-full items-center justify-center p-4 text-center">
<Transition.Child
as={Fragment}
enter="ease-out duration-300"
enterFrom="opacity-0 scale-95"
enterTo="opacity-100 scale-100"
leave="ease-in duration-200"
leaveFrom="opacity-100 scale-100"
leaveTo="opacity-0 scale-95"
>
<Dialog.Panel className="w-full max-w-md transform overflow-hidden rounded-2xl bg-gray-750 p-4 text-left align-middle shadow-xl transition-all">
<Dialog.Title
as="h3"
className="text-lg font-medium leading-6 text-center text-gray-900"
>
Image Preview
</Dialog.Title>
<div className="mt-8 flex flex-col items-center justify-center">
{imageSrc ? (
<img
src={`data:image/jpeg;base64,${imageSrc}`}
width={300}
height={300}
/>
) : (
<div style={{ width: 300, height: 300 }}>
<p className=" text-white text-center">
Loading Image...
</p>
</div>
)}
<div className="mt-4 ">
<button
type="button"
className="rounded border border-white bg-transparent px-4 py-2 text-sm font-medium text-white hover:bg-gray-700"
onClick={() => {
setOpen(false);
}}
>
Okay
</button>
</div>
</div>
</Dialog.Panel>
</Transition.Child>
</div>
</div>
</Dialog>
</Transition>
)}
</>
);
}
ShowImage.js
function MeetingView() {
// get the local participant from the meeting to pass its id to the listener
const { localParticipant } = useMeeting();
// ...
return (
<div>
{/* other components */}
<CaptureImageListner localParticipantId={localParticipant?.id} />
<ShowImage />
</div>
);
}
MeetingView.js
Conclusion
In conclusion, integrating image capture functionality into a React JS video call app is a powerful enhancement that enriches user interaction and collaboration. This feature not only enhances the app's versatility but also fosters engagement and productivity among users. With intuitive UI components enabling easy photo capture and sharing, the app becomes more comprehensive and user-friendly.
It's important to keep in mind that VideoSDK offers a comprehensive and intuitive solution for image capture within React JS. Its wide array of features and straightforward integration process enables you to effortlessly incorporate this functionality into your application, propelling it towards greater heights of functionality and user engagement.
To unlock the full potential of VideoSDK and create easy-to-use video experiences, sign up with VideoSDK today and get 10,000 free minutes to take your video app to the next level!