Jackson for HMS Core

Using Motion Capture to Animate a Model

It's so rewarding to set the model you've created into motion. If only there were an easy way to do this… well, actually there is!

I had long been searching for this kind of solution, and then, voila, I found it: motion capture, a capability of HMS Core 3D Modeling Kit. It combines technologies such as human body detection, model acceleration, and model compression with a deep learning-based monocular human pose estimation algorithm.

Crucially, this capability does NOT require advanced hardware: a mobile phone with a standard RGB camera is enough on its own. From the camera input, the capability calculates 3D data for 24 key skeletal points on the body and uses that data to seamlessly animate a model.

What makes the motion capture capability even better is its straightforward integration process, which I'd like to share with you.

Application Scenarios

Motion capture is ideal for 3D content creation in gaming, film & TV, healthcare, and other similar fields. It can be used to animate characters and create videos for user-generated content (UGC) games, animate virtual streamers in real time, and support injury rehabilitation, to cite just a few examples.

Integration Process

Preparations

Refer to the official instructions to complete all necessary preparations.

Configuring the Project

Before developing the app, there are a few more things you'll need to do: configure app information in AppGallery Connect, make sure that the Maven repository address of the 3D Modeling SDK has been configured in the project, and integrate the SDK.
1. Create a motion capture engine.

// Set necessary parameters as needed.
Modeling3dMotionCaptureEngineSetting setting = new Modeling3dMotionCaptureEngineSetting.Factory()
    // Set the detection mode.
    // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION: skeleton point quaternions of a human pose.
    // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON: skeleton point coordinates of a human pose.
    .setAnalyzeType(Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION
        | Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON)
    .create();
Modeling3dMotionCaptureEngine engine = Modeling3dMotionCaptureEngineFactory.getInstance().getMotionCaptureEngine(setting);
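The two analysis types differ in what they return: TYPE_3DSKELETON yields 3D coordinates for the skeleton points, while TYPE_3DSKELETON_QUATERNION yields per-joint rotation quaternions, which a rendering engine typically applies to a rig's bones. As a standalone refresher (plain Java, no HMS dependency, not part of the kit's API), here is the standard way to rotate a 3D point by a unit quaternion:

```java
public class QuaternionDemo {
    // Rotate point (x, y, z) by the unit quaternion (w, qx, qy, qz)
    // using the expanded form of v' = q * v * conj(q):
    // v' = v + 2w (q x v) + q x (2 (q x v)).
    static double[] rotate(double w, double qx, double qy, double qz,
                           double x, double y, double z) {
        // t = 2 * (q x v)
        double tx = 2 * (qy * z - qz * y);
        double ty = 2 * (qz * x - qx * z);
        double tz = 2 * (qx * y - qy * x);
        // v' = v + w * t + q x t
        return new double[] {
            x + w * tx + (qy * tz - qz * ty),
            y + w * ty + (qz * tx - qx * tz),
            z + w * tz + (qx * ty - qy * tx),
        };
    }

    public static void main(String[] args) {
        // Rotating (1, 0, 0) by 180 degrees about the z-axis gives (-1, 0, 0).
        double[] v = rotate(0, 0, 0, 1, 1, 0, 0);
        System.out.printf("%.1f %.1f %.1f%n", v[0], v[1], v[2]); // prints -1.0 0.0 0.0
    }
}
```

In practice you would feed each joint's quaternion from the detection result into math like this (or into your engine's bone transforms) to pose the model.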

Modeling3dFrame encapsulates video frame or static image data sourced from a camera, along with the related data processing logic.

For video input, customize the logic that converts each input frame into a Modeling3dFrame object for detection. The video frame format can be NV21.

For image input, use android.graphics.Bitmap to convert the input image into a Modeling3dFrame object for detection. The image format can be JPG, JPEG, or PNG.

// Create a Modeling3dFrame object using a bitmap.  
Modeling3dFrame frame = Modeling3dFrame.fromBitmap(bitmap); 
// Create a Modeling3dFrame object using a video frame.  
Modeling3dFrame.Property property = new Modeling3dFrame.Property.Creator().setFormatType(ImageFormat.NV21) 
    // Set the frame width.  
    .setWidth(width) 
    // Set the frame height.  
    .setHeight(height) 
    // Set the video frame rotation angle.  
    .setQuadrant(quadrant) 
    // Set the video frame number.  
    .setItemIdentity(frameIndex) 
    .create(); 
Modeling3dFrame frame = Modeling3dFrame.fromByteBuffer(byteBuffer, property);
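One detail worth noting about the NV21 data assumed above: an NV21 frame consists of a full-resolution luma (Y) plane followed by an interleaved chroma (VU) plane at quarter resolution, so the ByteBuffer you pass to fromByteBuffer must hold width × height × 3/2 bytes. A minimal sketch of the size calculation (plain Java, no HMS dependency):

```java
import java.nio.ByteBuffer;

public class Nv21Buffer {
    // NV21 layout: width * height luma (Y) bytes, then width * height / 2
    // interleaved chroma (VU) bytes, for 12 bits per pixel in total.
    static int nv21Size(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        // A 1280 x 720 frame needs 1,382,400 bytes.
        ByteBuffer buffer = ByteBuffer.allocateDirect(nv21Size(1280, 720));
        System.out.println(buffer.capacity()); // prints 1382400
    }
}
```

If the buffer you hand over is smaller than this, frame parsing will fail, so it is worth checking the size before calling the detection API.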

2. Call the asynchronous or synchronous API for motion detection.
Sample code for calling the asynchronous API asyncAnalyseFrame:

Task<List<Modeling3dMotionCaptureSkeleton>> task = engine.asyncAnalyseFrame(frame); 
task.addOnSuccessListener(new OnSuccessListener<List<Modeling3dMotionCaptureSkeleton>>() { 
    @Override 
    public void onSuccess(List<Modeling3dMotionCaptureSkeleton> results) { 
        // Detection success.  
    } 
}).addOnFailureListener(new OnFailureListener() { 
    @Override 
    public void onFailure(Exception e) { 
        // Detection failure.  
    } 
});

Sample code for calling the synchronous API analyseFrame:

SparseArray<Modeling3dMotionCaptureSkeleton> sparseArray = engine.analyseFrame(frame); 
for (int i = 0; i < sparseArray.size(); i++) { 
    // Process the detection result.  
}

3. Stop the motion capture engine to release detection resources once detection is complete.

try { 
    if (engine != null) { 
        engine.stop(); 
    } 
} catch (IOException e) { 
    // Handle exceptions.  
}

Result


References

3D Modeling Kit Official Website
3D Modeling Kit Development Guide
Reddit for discussion with other developers
GitHub for demos and sample codes
Stack Overflow for solutions to integration issues
Follow our official account for the latest HMS Core-related news and updates.
