WebRTC is an open-source project that makes it possible to add real-time communication features (e.g. live video calls) directly into browser applications and websites. Widely used across industries such as banking and finance, healthcare, and education, it exposes a set of JavaScript APIs that are easy to integrate without requiring users to install downloads or plugins. However, a few capabilities, such as recording, are not natively available in the WebRTC stack. To add recording, developers typically use one of the following methods, depending on the technology stack hosting the WebRTC application and the skillsets of the team:
- Full-stack developers building applications on the native browser APIs tend to wrap one of the many browser-side screen-recording plugins into their application (essentially what the sketch after this list does). This works for prototyping, but at best it circumvents the problem rather than providing a clean, robust solution that is tightly integrated into the end-user application.
- Mobile developers building native applications on top of the WebRTC stack have few options for recording a session. They often resort to running a separate recording application in the background. Such quick fixes and workarounds are prone to security breaches and numerous usability issues.
- Developers can also leverage WebRTC platform or CPaaS providers. However, only a few such providers offer recording APIs, and for those that do, the features and recording architecture differ. In general, there are two broad mechanisms: server-side recording and client-side recording. Let's look at them in more detail.
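Before that, for context: the first (in-browser) approach above usually boils down to the standard MediaRecorder API, which most screen-recording plugins wrap. Below is a minimal, hedged sketch of that idea in TypeScript; it assumes a browser context and simply captures the screen with getDisplayMedia(), though the stream could just as well come from getUserMedia() or a remote track of an RTCPeerConnection.

```typescript
// Minimal in-browser recording sketch using the standard MediaRecorder API.
// Assumes a browser context; "video/webm" is not supported by every browser.
async function recordForSeconds(seconds: number): Promise<Blob> {
  // Capture the screen (roughly what screen-recording plugins do under the hood).
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true,
  });

  const chunks: Blob[] = [];
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };

  return new Promise<Blob>((resolve) => {
    recorder.onstop = () => {
      // Stop capture and hand back a single playable file.
      stream.getTracks().forEach((t) => t.stop());
      resolve(new Blob(chunks, { type: "video/webm" }));
    };
    recorder.start(1000); // emit a data chunk every second
    setTimeout(() => recorder.stop(), seconds * 1000);
  });
}
```

Note that this records only what this one endpoint sees, which is precisely why it remains a workaround rather than a recording feature integrated into the application.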
Server-side or Client-side recording
Server-side recording
For server-side recording, the media is routed through a media server instead of flowing directly between the browsers. The WebRTC session from each endpoint is terminated on the server, which routes the media to the receiving end. The decoded media is simultaneously sent for post-processing and recording. Service providers with server-side recording APIs allow developers to do the following:
- Recording the video/audio stream of each participant in the WebRTC session, whether it is one-to-one or multi-party.
- Mixing and transcoding all participants’ streams into a single composite video file.
- Providing a layout API to control how participants are arranged in the recorded content.
- Offering extra features such as chat capture alongside the recording, watermarking, etc., which are often required to recreate the session as it happened and to protect copyright (a sketch of such an API call follows this list).
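To make this concrete, here is a hedged sketch of what starting and stopping such a server-side recording could look like from your backend. The base URL, endpoint paths, parameters (mode, layout, watermark), and API-key header are hypothetical placeholders, not any specific provider's API; real CPaaS recording APIs differ in naming and capabilities.

```typescript
// Hypothetical server-side recording API sketch; endpoints, parameters,
// and authentication below are placeholders, not a specific provider's API.
const API_BASE = "https://api.example-cpaas.com/v1"; // hypothetical base URL
const API_KEY = process.env.CPAAS_API_KEY ?? "";

// Start a composed (mixed) recording of an ongoing WebRTC session.
async function startRecording(sessionId: string): Promise<string> {
  const res = await fetch(`${API_BASE}/sessions/${sessionId}/recordings`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Api-Key": API_KEY },
    body: JSON.stringify({
      mode: "composed",           // mix all participants into one file
      layout: "grid",             // hypothetical layout option
      watermark: { text: "ACME" } // hypothetical watermarking option
    }),
  });
  if (!res.ok) throw new Error(`startRecording failed: ${res.status}`);
  const { recordingId } = await res.json();
  return recordingId;
}

// Stop the recording; the provider typically post-processes it and makes
// the file available for download afterwards.
async function stopRecording(sessionId: string, recordingId: string): Promise<void> {
  const res = await fetch(
    `${API_BASE}/sessions/${sessionId}/recordings/${recordingId}/stop`,
    { method: "POST", headers: { "X-Api-Key": API_KEY } }
  );
  if (!res.ok) throw new Error(`stopRecording failed: ${res.status}`);
}
```

The key point is that the heavy lifting (decoding, mixing, layout, storage) happens on the provider's media servers, not in your application or on the end user's device.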
Client-side recording
For client-side recording, the media is recorded locally and then processed before being uploaded to the servers. In this case, an additional client endpoint joins the WebRTC session to do the recording; it runs a software binary containing the recording software, typically delivered as a Docker image or a simple ISO. One distinct disadvantage is that you cannot control this endpoint, and its performance varies greatly with its hardware specifications: large-scale sessions require a high-end machine with fast disk I/O and a fast CPU.
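Whichever client-side variant you use (a dedicated recording endpoint or the simpler in-browser approach sketched earlier), the finished file eventually has to be shipped to your backend for processing and storage. Here is a minimal sketch of that upload step, assuming a hypothetical /recordings/upload endpoint on your own application server:

```typescript
// Upload a locally recorded file to the application backend.
// The /recordings/upload endpoint and the sessionId field are hypothetical.
async function uploadRecording(recording: Blob, sessionId: string): Promise<void> {
  const form = new FormData();
  form.append("sessionId", sessionId);
  form.append("file", recording, `${sessionId}.webm`);

  const res = await fetch("https://app.example.com/recordings/upload", {
    method: "POST",
    body: form, // the runtime sets the multipart boundary automatically
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}

// Usage, combined with the earlier MediaRecorder sketch:
// const blob = await recordForSeconds(60);
// await uploadRecording(blob, "session-123");
```

Uploading after the session ends avoids competing for bandwidth during the call, at the cost of delaying when the file becomes available.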
So, what’s best?
When deciding between application-level recording and the recording capability offered by a WebRTC platform provider (i.e. server-side or client-side recording), several factors need to be considered.
Supporting Endpoints
- Browsers: You need to find a screen-recording plugin suitable for each specific browser. If you need to support all major browsers, which is usually the case, you end up maintaining a separate plugin per browser (the feature-detection sketch after this list illustrates one source of that divergence).
- Mobile apps: If you are developing native mobile or hybrid applications (e.g. React Native), it is recommended to go with a platform provider.
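The per-browser divergence mentioned above shows up even if you stay with the standard APIs: the container and codec combinations that MediaRecorder accepts differ between browsers, so a browser-side recording layer typically has to probe for what is supported. A small sketch of that probing follows; the candidate MIME types are just common examples.

```typescript
// Probe which recording formats the current browser's MediaRecorder supports.
// The candidate list is illustrative; support varies across Chrome, Firefox, and Safari.
const CANDIDATE_MIME_TYPES = [
  "video/webm;codecs=vp9,opus",
  "video/webm;codecs=vp8,opus",
  "video/webm",
  "video/mp4",
];

function pickRecordingMimeType(): string | undefined {
  if (typeof MediaRecorder === "undefined") {
    return undefined; // recording is simply unavailable in this browser
  }
  return CANDIDATE_MIME_TYPES.find((t) => MediaRecorder.isTypeSupported(t));
}

// Usage:
// const mimeType = pickRecordingMimeType();
// const recorder = mimeType ? new MediaRecorder(stream, { mimeType }) : undefined;
```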
Size of the Session
- Multiparty Sessions
In a multiparty session, there is a high probability that some users will not have enough bandwidth to receive all of the participants' video streams. In realistic conditions, some participants' video quality will be degraded, and a recording made solely at the client end will inherit that lower quality (the getStats() sketch after this section shows one way to measure it). For such cases, it is recommended to go with a service provider offering server-side recording.
- One-to-one Sessions
Since bandwidth is rarely a challenge here, you can choose either client-side or server-side recording, depending on whether you are willing to run the recording endpoint yourself and deal with all of the recording management issues.
- Concurrent Sessions
For multiple concurrent sessions, load balancing is required. In such cases, it is recommended to go for server-side recording to avoid dealing with the complexity of load balancing and recording management in your application layer.
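One way to ground the multiparty bandwidth concern above is to measure what a client is actually receiving before relying on it for recording. The sketch below uses the standard RTCPeerConnection.getStats() API to estimate the incoming video bitrate; the one-second sampling window and any threshold you compare against are arbitrary examples, not recommendations.

```typescript
// Estimate incoming video bitrate on an existing RTCPeerConnection by
// sampling the standard 'inbound-rtp' stats twice, one second apart.
async function inboundVideoBitrateKbps(pc: RTCPeerConnection): Promise<number> {
  const sampleBytes = async (): Promise<number> => {
    let bytes = 0;
    const report = await pc.getStats();
    report.forEach((stat: any) => {
      if (stat.type === "inbound-rtp" && stat.kind === "video") {
        bytes += stat.bytesReceived ?? 0;
      }
    });
    return bytes;
  };

  const first = await sampleBytes();
  await new Promise((resolve) => setTimeout(resolve, 1000));
  const second = await sampleBytes();
  return ((second - first) * 8) / 1000; // kilobits per second over the 1s window
}

// Usage (the 1000 kbps threshold is an arbitrary example):
// const kbps = await inboundVideoBitrateKbps(pc);
// if (kbps < 1000) console.warn("Low received bitrate; a client-side recording would inherit this quality.");
```

If the measured bitrate is consistently low for some participants, a recording made at that client will be no better, which is the core argument for recording on the server side in multiparty scenarios.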
Session Recording is a “Must-Have” Requirement
If sessions must be recorded to satisfy legal or business-process rules, server-side recording is recommended, as it is the least prone to failure.
In summary, carefully weigh all of the above factors before settling on a final solution when adding recording capability to your WebRTC-enabled application.
Try our APIs Now!