Esther

Motion Detection In OpenCV Explained In-Depth

TLDR:

  • The background is the set of pixels in a frame that remain static over time.
  • The foreground is the set of pixels that keep changing.
  • For each pixel, a history of its values over past frames is kept.
  • To detect motion, we compare each pixel in the current frame with its history.
  • If there is a large change in intensity, we can classify that pixel as motion.

Background subtraction is a technique used in computer vision to identify moving objects in a video by separating them from the background: in effect, the static background is subtracted from each frame so that the moving objects can be tracked independently.

As you probably know, frames are individual pictures or images in a video. A video is made up of many frames shown quickly, one after the other, to create the illusion of movement. Think of frames like pages in a flipbook. When you flip through them fast, they make an animated story.

In background subtraction, each frame of the video is compared to a background model (a reference image of the static parts of the scene, built up from frames at different points in time). Any significant difference between the current frame and the background model is treated as foreground, indicating motion or change.
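
As a minimal sketch of this idea, you can difference each frame against a fixed reference image of the empty scene. The file names and the threshold value here are placeholders, and this naive version assumes the lighting never changes, which is exactly what the adaptive subtractors described next improve on:

```python
import cv2

# Assumed files: a snapshot of the empty scene and the video to analyze
# (the reference image must match the video's frame size).
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture("traffic.mp4")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that differ strongly from the reference image become foreground.
    diff = cv2.absdiff(gray, background)
    _, foreground = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```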

To achieve this, one of the subtraction approaches available in OpenCV is K-nearest neighbors (KNN). This approach classifies each pixel as background or foreground by comparing its current color value to its K nearest neighbors among the samples stored over a certain time window (the history). If the distances to those nearest neighbors are below a certain threshold (which represents the "closeness" you accept), the pixel is considered similar to its historical values and is classified as background.
If the distances are large, the pixel is classified as foreground.

For example, if a pixel has been black (0) for the last 399 frames and suddenly turns white (255) in the current frame:

  1. The algorithm checks the nearest neighbors (the number of nearest neighbors is decided internally by the algorithm) from the 400-frame history.
  2. If all the nearest neighbors are black, the current white pixel will likely be classified as foreground, because it is too different from the background model; that change in pixel intensity is what gets detected as motion.
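
A rough numeric sketch of that comparison, using made-up values for the neighbor count and the threshold (the real algorithm works on color samples and maintains its history differently), might look like this:

```python
import numpy as np

history = np.zeros(400)       # the pixel has been black (0) for 400 frames
current = 255                 # it is white in the current frame
dist2_threshold = 400.0       # squared-distance cutoff, like dist2Threshold
k = 3                         # assumed number of neighbors that must be close

# Squared distance from the current value to every stored sample.
distances = (history - current) ** 2

# Count the stored samples that sit within the threshold of the current value.
close_samples = np.sum(distances < dist2_threshold)

# Too few close neighbors: the pixel does not match its history -> foreground.
is_foreground = close_samples < k
print(is_foreground)  # True, because 255 is far from every historical 0
```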

The OpenCV function looks like this:

retval = cv2.createBackgroundSubtractorKNN([, history[, dist2Threshold[, detectShadows]]])
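
A minimal usage sketch, with a placeholder video path and illustrative parameter values, could look like the following: history is how many frames the model remembers, dist2Threshold is the squared distance that decides whether a stored sample counts as "close", and detectShadows marks shadow pixels separately in the output mask.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # assumed input video

# 400-frame history, squared-distance threshold of 400, shadow detection on.
subtractor = cv2.createBackgroundSubtractorKNN(
    history=400, dist2Threshold=400.0, detectShadows=True
)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # apply() updates the background model and returns the foreground mask:
    # 255 = foreground, 127 = shadow (when detectShadows is on), 0 = background.
    fg_mask = subtractor.apply(frame)

    cv2.imshow("frame", frame)
    cv2.imshow("foreground mask", fg_mask)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```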

How KNN Works for Background Subtraction:

  1. Keeping a history: For every pixel in the frame, KNN maintains a history of its previous values. Imagine each pixel has an array of historical values from past frames. This history acts as a model of what that pixel's intensity should look like if it were part of the background.

  2. Comparing current frame with history: When a new frame is captured, the KNN algorithm checks the intensity value of each pixel and compares it with the stored history of that pixel.
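
To turn the foreground mask into something closer to "motion detection", a common follow-up step (not part of the subtractor itself) is to clean the mask and draw boxes around the regions that remain. The kernel size and area cutoff below are arbitrary choices, and the findContours call uses the OpenCV 4.x return signature:

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # assumed input video
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    fg_mask = subtractor.apply(frame)

    # Drop shadow pixels (127) and keep only confident foreground (255).
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)

    # Remove small speckles before looking for moving objects.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)

    # Outline each sufficiently large foreground region as detected motion.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:  # ignore tiny regions (arbitrary cutoff)
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("motion", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```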

In summary, background subtraction is a simple and effective way to identify moving objects in a scene by comparing each frame to a model of the static background.
