Jackson for HMS Core


An Overview of Static Biometric Verification

Static biometric verification is a feature of HMS Core ML Kit. It captures faces in real time and determines whether a face belongs to a real person, without prompting the user to move their head or face. This makes for a convenient, low-friction user experience.

Technical Principles

Static biometric verification requires an RGB camera and differentiates between a real person's face and a spoof attack (such as a printed photo, a screenshot, or a face mask) by analyzing details in the image captured by the camera, such as the moiré pattern or reflection on a paper photo. The service supports a wide array of scenarios, including different lighting conditions, face accessories, genders, hairstyles, and mask materials. It also analyzes a face's surroundings to detect suspicious environments.

Static biometric verification
The static biometric verification model adopts a lightweight convolutional module. Through reparameterization, its linear computation is merged into a single convolutional module or fully connected layer in the inference phase. The model is deployed with the MindSpore Lite inference framework, which prunes unused operators to shrink the model's package size, making it more convenient to integrate.
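To see why reparameterization preserves the model's output, consider a toy example: two parallel linear branches (here, 1-D convolutions) summed at training time are mathematically equivalent, at inference time, to a single convolution whose kernel is the element-wise sum of the branch kernels. The sketch below is illustrative plain Java, not the actual model:

```java
public class Reparam {
    // Toy 1-D convolution with valid padding and a single-channel kernel.
    static double[] conv1d(double[] x, double[] k) {
        double[] y = new double[x.length - k.length + 1];
        for (int i = 0; i < y.length; i++) {
            for (int j = 0; j < k.length; j++) {
                y[i] += x[i + j] * k[j];
            }
        }
        return y;
    }

    // Training-time structure: two parallel branches whose outputs are summed.
    static double[] twoBranch(double[] x, double[] k1, double[] k2) {
        double[] a = conv1d(x, k1);
        double[] b = conv1d(x, k2);
        double[] y = new double[a.length];
        for (int i = 0; i < y.length; i++) {
            y[i] = a[i] + b[i];
        }
        return y;
    }

    // Inference-time structure: the branches are fused into one kernel
    // (element-wise sum), so only a single convolution needs to run.
    static double[] fusedKernel(double[] k1, double[] k2) {
        double[] k = new double[k1.length];
        for (int i = 0; i < k.length; i++) {
            k[i] = k1[i] + k2[i];
        }
        return k;
    }
}
```

Because the fused single-operator form computes the same outputs with fewer operations and less structure, it is cheaper to run on device, which is the point of reparameterizing before deployment.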

Application Scenarios

Liveness detection is usually used before face verification. For example, when a user uses facial recognition to unlock their phone, liveness detection first determines whether the captured face is real or not. If yes, face verification will then check whether the face matches the one recorded in the system. These two technologies complement one another to protect a user's device from unauthorized access.
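This two-stage flow can be sketched with hypothetical types (an illustrative model, not the ML Kit API): liveness detection acts as a gate, and face verification only runs on faces confirmed to be live.

```java
// Illustrative sketch of the two-stage unlock flow (hypothetical types).
public class UnlockFlow {

    public enum AuthResult { REJECTED_SPOOF, REJECTED_MISMATCH, GRANTED }

    /** Stage 1: liveness detection — is the captured face a real person? */
    interface LivenessDetector {
        boolean isLive(byte[] frame);
    }

    /** Stage 2: face verification — does the face match the enrolled one? */
    interface FaceVerifier {
        boolean matchesEnrolled(byte[] frame);
    }

    public static AuthResult authenticate(byte[] frame,
                                          LivenessDetector liveness,
                                          FaceVerifier verifier) {
        if (!liveness.isLive(frame)) {
            // Spoof attack: printed photo, screenshot, or mask.
            return AuthResult.REJECTED_SPOOF;
        }
        if (!verifier.matchesEnrolled(frame)) {
            // A real person, but not the face recorded in the system.
            return AuthResult.REJECTED_MISMATCH;
        }
        return AuthResult.GRANTED;
    }
}
```

Ordering the stages this way means face verification never even runs on spoofed input, which is what lets the two technologies complement each other.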

So it's safe to say that static biometric verification provides robust protection for our apps, and below I'll illustrate how it can be integrated.

Integration Procedure

The detailed preparations are all provided in the official document for the service.
Two modes are available to call the service:

| Call Mode | Liveness Detection Process | Liveness Detection UI | Function |
| --- | --- | --- | --- |
| Default view mode | Processed by ML Kit | Provided by ML Kit | Determines whether a face is real or not |
| Customized view mode | Processed by ML Kit | Custom | Determines whether a face is real or not |

Default View Mode
1. Create a callback to obtain the static biometric verification result.

```java
private MLLivenessCapture.Callback callback = new MLLivenessCapture.Callback() {
    @Override
    public void onSuccess(MLLivenessCaptureResult result) {
        // Called when verification succeeds. The result indicates whether the face is of a real person.
    }

    @Override
    public void onFailure(int errorCode) {
        // Called when verification fails, for example when the camera is abnormal (CAMERA_ERROR).
        // Add logic here to handle the failure.
    }
};
```


2. Create a static biometric verification instance and start verification.

```java
MLLivenessCapture capture = MLLivenessCapture.getInstance();
capture.startDetect(activity, callback);
```


Customized View Mode
1. Create an MLLivenessDetectView instance and load it into the activity layout.

* i. Bind the camera preview screen to the remote view and set the liveness detection area. In the camera preview stream, static biometric verification determines whether a face is in the middle of the image. To improve the pass rate, you are advised to place the face frame in the middle of the screen and make the liveness detection area slightly larger than the face frame.
* ii. Set whether to detect the mask.
* iii. Set the result callback.
* iv. Load MLLivenessDetectView into the activity.
```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // R.id.preview_container is a placeholder ID; use the container from your own layout.
    mPreviewContainer = findViewById(R.id.preview_container);
    // Obtain an MLLivenessDetectView.
    mlLivenessDetectView = new MLLivenessDetectView.Builder()
            .setContext(this)
            // Set whether to detect the mask.
            .setOptions(MLLivenessDetectView.DETECT_MASK)
            // Set the rectangle of the face frame relative to MLLivenessDetectView.
            .setFaceRect(new Rect(0, 0, 0, 200))
            // Set the result callback.
            .setDetectCallback(new OnMLLivenessDetectCallback() {
                @Override
                public void onCompleted(MLLivenessCaptureResult result) {
                    // Called when verification is complete.
                }

                @Override
                public void onError(int error) {
                    // Called when an error occurs during verification.
                }

                @Override
                public void onInfo(int infoCode, Bundle bundle) {
                    // Called when a verification prompt message is received.
                    // The message can be displayed on the UI.
                    // if (infoCode == MLLivenessDetectInfo.NO_FACE_WAS_DETECTED) {
                    //     // No face is detected.
                    // }
                    // ...
                }

                @Override
                public void onStateChange(int state, Bundle bundle) {
                    // Called when the verification status changes.
                    // if (state == MLLivenessDetectStates.START_DETECT_FACE) {
                    //     // Start face detection.
                    // }
                    // ...
                }
            }).build();
    // Load MLLivenessDetectView into the activity.
    mPreviewContainer.addView(mlLivenessDetectView);
    mlLivenessDetectView.onCreate(savedInstanceState);
}
```
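Step i above advises making the liveness detection area slightly larger than the face frame. As a hypothetical illustration (plain Java, not part of the ML Kit API), a frame could be expanded by a fraction of its size on each side:

```java
public class FaceFrameUtil {
    /**
     * Expands a face frame {left, top, right, bottom} by the given fraction
     * of its width/height on each side, yielding a detection area slightly
     * larger than the frame itself.
     */
    static int[] expand(int[] frame, double fraction) {
        int w = frame[2] - frame[0];
        int h = frame[3] - frame[1];
        int dx = (int) Math.round(w * fraction);
        int dy = (int) Math.round(h * fraction);
        return new int[] { frame[0] - dx, frame[1] - dy, frame[2] + dx, frame[3] + dy };
    }
}
```

For example, a 200 x 200 face frame expanded by 10% per side yields a 240 x 240 detection area centered on the same point.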

2. Set a lifecycle listener for MLLivenessDetectView, forwarding the activity's lifecycle events to the view.

```java
@Override
protected void onDestroy() {
    super.onDestroy();
    mlLivenessDetectView.onDestroy();
}

@Override
protected void onPause() {
    super.onPause();
    mlLivenessDetectView.onPause();
}

@Override
protected void onResume() {
    super.onResume();
    mlLivenessDetectView.onResume();
}
```


For more details, you can go to:

* the ML Kit official website
* the ML Kit Development Documentation page, to find the documents you need
* Reddit, to join our developer discussion
