Srijeyanthan
Sending M-JPEG streaming over HTTP

M-JPEG, short for Motion JPEG, is a video compression format in which each video frame is compressed separately as a JPEG image. M-JPEG is widely supported by video-capture devices such as digital cameras, IP cameras, and webcams. On the client side, browsers and players such as VLC and QuickTime can play it.

Sending M-JPEG over the network has higher bandwidth overhead than inter-frame codecs such as H.264, because every frame is encoded independently. H.264 compresses across consecutive frames, so the client must wait until it has received all the frames a picture depends on before it can reconstruct the video. That is why M-JPEG can deliver frames with lower latency, provided you have plenty of network bandwidth.

This article explains how developers can send M-JPEG over HTTP after processing the frames (for example, after some image processing). Plain HTTP has no dedicated method for streaming data, so we have to use a multipart response (multipart/x-mixed-replace) to stream the video.
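Concretely, the server first answers with an ordinary HTTP response whose Content-Type is multipart/x-mixed-replace with a boundary string, then keeps writing one "part" per frame on the same connection. A minimal sketch of the per-frame part header (the BuildPartHeader helper name and the "frame" boundary are assumptions for illustration; the boundary just has to match the one announced in the initial response headers):

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper: builds the headers that precede one JPEG part in a
// multipart/x-mixed-replace stream. Each part carries its own Content-Type
// and Content-Length, followed by a blank line and the JPEG bytes.
std::string BuildPartHeader(size_t jpeg_size)
{
    char buf[128];
    int n = std::snprintf(buf, sizeof(buf),
                          "--frame\r\n"
                          "Content-Type: image/jpeg\r\n"
                          "Content-Length: %zu\r\n\r\n",
                          jpeg_size);
    return std::string(buf, n);
}
```

The client replaces the previously displayed part with each new one it receives, which is what makes the stream appear as video.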

Let's say we need to stream webcam video through our own web server.

Note: A real web server needs a non-blocking I/O implementation to handle multiple clients.

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

#include <iostream>
#include <sstream>

using namespace cv;
using namespace std;


#define PROT_CRLF   "\x0D\x0A"

int main(int, char**)
{
    Mat frame;

    VideoCapture cap;

    int deviceID = 0;             
    int apiID = cv::CAP_ANY;     

    cap.open(deviceID, apiID);
    // check if we succeeded
    if (!cap.isOpened()) {
        cerr << "ERROR! Unable to open camera\n";
        return -1;
    }

    cout << "Start grabbing" << endl
        << "Press any key to terminate" << endl;
    int iFrameCount=0;
    for (;;)
    {
        ++iFrameCount;
        cap.read(frame);

        if (frame.empty()) {
            cerr << "ERROR! blank frame grabbed\n";
            break;
        }

        imshow("Live", frame);

        // This is where any per-frame processing goes (object tracking,
        // resizing, and so on). As a simple example, stamp the frame
        // number onto the image before streaming it.
        std::stringstream ssFrameText;
        ssFrameText << iFrameCount;
        cv::putText(frame,                          // target image
                ssFrameText.str(),                  // text
                cv::Point(20, frame.rows - 20),     // bottom-left position
                cv::FONT_HERSHEY_COMPLEX_SMALL,
                0.8,
                CV_RGB(118, 185, 0),                // font color
                2);                                 // thickness

        // Now we are ready to send the processed frame,
        // e.g. SendOverHTTP(frame); (defined below).

        if (waitKey(5) >= 0)
            break;
    }

    return 0;
}

Now we will send the processed frame over HTTP. The process is as follows: first, we encode the OpenCV image using cv::imencode, which produces the raw JPEG bytes in a buffer.

 void SendOverHTTP(Mat &frame)
 {
   // Encode the frame as JPEG; m_iImageQuality (0-100) controls the
   // size/quality trade-off.
   std::vector<int> encode_params;
   encode_params.push_back(cv::IMWRITE_JPEG_QUALITY);
   encode_params.push_back(m_iImageQuality);

   std::vector<uchar> encoded_buffer;
   cv::imencode(".jpg", frame, encoded_buffer, encode_params);

   char zHeaderBuffer[256];
   int iWriteLen = 0;

   if (bIsFirstFrame)
   {
       // The initial response announces the multipart stream; every
       // subsequent part replaces the previous one in the client.
       iWriteLen = snprintf(zHeaderBuffer, sizeof(zHeaderBuffer),
               "HTTP/1.1 200 OK" PROT_CRLF
               "Content-Type: multipart/x-mixed-replace; boundary=frame"
               PROT_CRLF PROT_CRLF);
       // write zHeaderBuffer (iWriteLen bytes) to the client socket
       bIsFirstFrame = false;
   }

   // Each frame is one part: boundary, its own headers, then the JPEG data.
   iWriteLen = snprintf(zHeaderBuffer, sizeof(zHeaderBuffer),
           "--frame" PROT_CRLF
           "Content-Type: image/jpeg" PROT_CRLF
           "Content-Length: %zu" PROT_CRLF PROT_CRLF,
           encoded_buffer.size());
   // write zHeaderBuffer (iWriteLen bytes), then encoded_buffer.data()
   // (encoded_buffer.size() bytes), then a trailing PROT_CRLF to the socket
 }
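The actual socket write is the one piece the function above leaves out. A short writes loop is needed because send() may transmit fewer bytes than requested; a minimal sketch (the SendAll helper name is an assumption; a real non-blocking server would also handle EINTR and EAGAIN):

```cpp
#include <sys/socket.h>
#include <cstddef>

// Hypothetical helper: writes an entire buffer to a connected socket,
// retrying on short writes. Returns false on error or peer disconnect.
bool SendAll(int fd, const unsigned char *data, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t n = ::send(fd, data + sent, len - sent, 0);
        if (n <= 0)
            return false; // connection closed or error
        sent += static_cast<size_t>(n);
    }
    return true;
}
```

Per frame you would call it three times: once for the part header, once for the encoded JPEG buffer, and once for the trailing CRLF.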

If you want to view the video in Chrome, pointing the browser directly at the raw M-JPEG URL won't work, because Chrome stopped supporting multipart streams as a top-level document. Instead, embed the stream URL in an img tag and serve that page to the client, like the following.

<html>
<body>
<img src="http://127.0.0.1:9600?image_width=640&image_height=480">
</body>
</html>

Ready to stream :)
