Since the move from Flash to HTML a few years back, there is one discussion that tends to appear in each and every player and analytics implementation project: the strange event flow you get from an HTML video player.
While it is of course great to have it standardised, and you can more or less base an entire MSE player tracking implementation on these events, without the need for player-engine-specific APIs (which may rather enhance on top of them), quite a few questions and issues appear:
- Why is there no `buffering` event?
- Why can pause events be triggered during a seek?
- How do you tell a resume after a pause apart from an initial play event?
The events that are triggered may make sense from a purely technical perspective, but the analytics you sample should most often capture the experience the end user has and the behaviour they exhibit.
Solution
The solution as such has always been fairly simple, though repetitive: you have to filter the events, putting them into the context of the player's current state.
To avoid this repetitive task, we've built a small JavaScript library, Video Event Filter, which applies this kind of filtering on top of the video element events.
So instead of the technical approach that the standard defines, as mentioned earlier, we now filter and expose events according to what, in our experience, you want to feed into your insights, as sketched below.
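To illustrate the idea, here is a minimal sketch of this kind of stateful filtering (our own illustration, not the library's actual code), distinguishing an initial play from a resume and ignoring pause events fired as part of a seek:

```js
// Minimal sketch of stateful event filtering (not the library's actual code).
const video = document.querySelector("video");

let started = false; // has playback ever started?
let pausedByUser = false; // was the last pause a "real" pause?

video.addEventListener("pause", () => {
  // Some player engines fire pause as part of a seek; ignore those.
  if (!video.seeking) {
    pausedByUser = true;
    emit("pause");
  }
});

video.addEventListener("playing", () => {
  if (!started) {
    started = true;
    emit("play"); // first start
  } else if (pausedByUser) {
    pausedByUser = false;
    emit("resume"); // continuing after a user pause
  }
});

function emit(name) {
  console.log("FILTERED EVENT:", name);
}
```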
The events that are now exposed are instead these:
- `loading`
- `loaded`, the video has loaded but not started
- `play`, the video has started to play
- `pause`
- `resume`, the video has started to play again after a pause
- `seeking`
- `seeked`, the video is done seeking and continues in the state that existed before
- `buffering`
- `buffered`, the video is done buffering and continues in the state that existed before
- `timeupdate`
- `ended`
- `error`
As you can see, and as mentioned, we now have proper terminology for when the player is `loaded`, and we trigger `resume` rather than `play` when the user continues after a pause. We of course also have proper `buffering` and `buffered` events.
Implementation
Since we only filter events on the video element, the implementation is as simple as listening to the video element directly:
```js
import { VideoEventFilter } from "@eyevinn/video-event-filter";

const videoElement = document.querySelector("video");
const videoEventFilter = new VideoEventFilter(videoElement);
videoEventFilter.addEventListener("*", (event, data) => {
  console.log("EVENT:", event);
});
```
The available events, as listed above, are also exposed in a `PlayerEvents` variable for simpler implementation.
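For example, to listen for a single event type (a sketch assuming the `PlayerEvents` members mirror the event names listed above, e.g. a `Resume` member for the `resume` event; check the library's typings for the exact naming):

```js
import {
  VideoEventFilter,
  PlayerEvents,
} from "@eyevinn/video-event-filter";

const videoEventFilter = new VideoEventFilter(
  document.querySelector("video")
);

// Assumption: PlayerEvents.Resume maps to the "resume" event above.
videoEventFilter.addEventListener(PlayerEvents.Resume, () => {
  console.log("User resumed playback after a pause");
});
```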
Conclusion
This means that we can now track a playback session in a way that reflects the experience from the end user's perspective (when it comes to buffering ratio etc.) as well as the actual behaviour of the user, rather than what the player engine technically does (which of course may be valuable in other contexts).
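A buffering ratio, for instance, can be derived straight from the filtered `buffering`/`buffered` pairs. A minimal sketch, reusing the `videoEventFilter` instance from the implementation above (the accounting itself is our own illustration, not part of the library):

```js
// Sketch: accumulate buffering time from the filtered events
// and relate it to the total session time.
const sessionStartedAt = performance.now();
let bufferingStartedAt = 0;
let totalBufferingMs = 0;

videoEventFilter.addEventListener("buffering", () => {
  bufferingStartedAt = performance.now();
});
videoEventFilter.addEventListener("buffered", () => {
  totalBufferingMs += performance.now() - bufferingStartedAt;
});

function bufferingRatio() {
  return totalBufferingMs / (performance.now() - sessionStartedAt);
}
```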
As mentioned, this is quite a small library solving a fairly simple, though repetitive, task. Abstracting it into a library means that we can focus on building value, point at the issues that exist with out-of-the-box analytics, and not least iterate our opinionated filtering centrally over time and evolve how a playback session should be measured.