Introduction to AI-Empowered Image Segmentation

Image segmentation technology is gathering steam thanks to developments in multiple fields. Take autonomous vehicles as an example: they have been developing rapidly since last year and have become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving, and it is image segmentation that allows a car to understand the situation on the road and to tell the road apart from the people on it.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of other fields, including:

  • Medical imaging, where it helps doctors make diagnoses and perform tests
  • Satellite image analysis, where it helps analyze huge amounts of data
  • Media apps, where it cuts people out of videos to prevent bullet comments from obstructing them

The technology is widely applied, and I myself am a fan of it. Recently, I tried the image segmentation service from HMS Core ML Kit, which I found outstanding. The service has an original framework for semantic segmentation that labels each and every pixel in an image, so it can cleanly and completely cut out something as delicate as a strand of hair. The service also excels at processing images of different qualities and dimensions, and it uses structured learning algorithms to prevent white borders, a common headache of segmentation algorithms, so that the edges of the segmented image appear more natural.

I'm delighted to be able to share my experience of implementing this service here.

Preparations

First, configure the Maven repository address and integrate the SDK of the service. I followed the instructions here to complete these steps.
1.Configure the Maven repository address

buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

2.Add build dependencies

dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
    // Import the package of the human body segmentation model.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
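On top of these dependencies, the integration guide also has you apply the AppGallery Connect plugin at the top of the same app-level build.gradle file. This assumes the agconnect-services.json file downloaded from AppGallery Connect has already been added to the app directory:

apply plugin: 'com.android.application'
// Apply the AppGallery Connect plugin so that agconnect-services.json is picked up.
apply plugin: 'com.huawei.agconnect'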
3.Add the permission in the AndroidManifest.xml file

<!-- Permission to write to external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
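If you want the device to download the segmentation model automatically after the app is installed from AppGallery, the Kit documentation also suggests adding a meta-data entry inside the <application> element of AndroidManifest.xml. If I remember correctly, the value for this service is "imgseg", but please verify it against the official guide:

<!-- Ask HMS Core to automatically download and update the image segmentation model. -->
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="imgseg" />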

Development Procedure

1.Dynamically request the necessary permissions

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            return false;
        }
    }
    return true;
}
private void getRuntimePermissions() {
    List<String> allNeededPermissions = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            allNeededPermissions.add(permission);
        }
    }
    if (!allNeededPermissions.isEmpty()) {
        ActivityCompat.requestPermissions(
                this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}
private static boolean isPermissionGranted(Context context, String permission) {
    if (ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED) {
        return true; 
    }
    return false; 
}
private String[] getRequiredPermissions() { 
    try {
        PackageInfo info =
                this.getPackageManager()
                        .getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS); 
        String[] ps = info.requestedPermissions;
        if (ps != null && ps.length > 0) { 
            return ps;
        } else { 
            return new String[0];
        } 
    } catch (RuntimeException e) {
        throw e; 
    } catch (Exception e) { 
        return new String[0];
    }
}
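The result of the request comes back through onRequestPermissionsResult. In my demo I simply check whether everything was granted before continuing; this is a minimal sketch, and the toast message is my own placeholder rather than anything required by the Kit:

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    // Check whether every requested permission was granted.
    for (int result : grantResults) {
        if (result != PackageManager.PERMISSION_GRANTED) {
            Toast.makeText(this, "Storage permission is required to save results.",
                    Toast.LENGTH_SHORT).show();
            return;
        }
    }
    // All permissions granted; the segmentation flow below can run safely.
}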

2.Create an image segmentation analyzer

MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // Set the segmentation mode to human body segmentation.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
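The factory accepts a few more options. If I remember the SDK correctly, setExact() switches between fast and fine segmentation and setScene() controls which result types are returned; please double-check the API reference, since the snippet below is only a sketch of how I configured it:

MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // false = fast segmentation; true = fine (more accurate but slower) segmentation.
        .setExact(false)
        // Human body segmentation mode, as above.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        // Return all result types (foreground, grayscale, mask).
        .setScene(MLImageSegmentationScene.ALL)
        .create();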

3.Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images.

MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
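This assumes this.originBitmap has already been loaded. In my demo I simply decoded it from a drawable resource; any source works as long as you end up with a Bitmap (R.drawable.person_sample is a placeholder name of my own):

// Decode the image to be segmented into a Bitmap. Any bitmap source works here.
this.originBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.person_sample);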
4.Call asyncAnalyseFrame for image segmentation
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        if (mlImageSegmentationResults != null) {
            // Obtain the human body segment cut out from the image.
            foreground = mlImageSegmentationResults.getForeground();
            preview.setImageBitmap(MainActivity.this.foreground);
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Segmentation failed. Log the exception or notify the user here.
        return;
    }
});
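After the detection is complete, the analyzer should be stopped to release its resources. As far as I recall, stop() may throw an IOException, so I wrapped it accordingly; calling it from onDestroy is my own choice rather than a requirement of the Kit:

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.analyzer != null) {
        try {
            // Release the resources held by the segmentation analyzer.
            this.analyzer.stop();
        } catch (IOException e) {
            // Failing to stop the analyzer is not fatal here; just log it.
            Log.e("MainActivity", "Failed to stop the image segmentation analyzer.", e);
        }
    }
}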

5.Change the image background

// Obtain an image from the album.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(getResources(), backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
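Setting the background drawable only changes what the ImageView shows on screen. To save the composited result as a single image, I merged the two bitmaps myself with a Canvas; this is a minimal sketch of that step, and the simple scale-to-background-size strategy is my own simplification:

// Merge the segmented foreground onto the new background and return a single bitmap.
private Bitmap composeImages(Bitmap background, Bitmap foreground) {
    Bitmap result = Bitmap.createBitmap(background.getWidth(), background.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(result);
    // Draw the background first, then the cut-out person on top of it.
    canvas.drawBitmap(background, 0f, 0f, null);
    Bitmap scaledForeground = Bitmap.createScaledBitmap(
            foreground, background.getWidth(), background.getHeight(), true);
    canvas.drawBitmap(scaledForeground, 0f, 0f, null);
    return result;
}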

Result

(Demo image: image segmentation result)

References

For more details, you can go to:
ML Kit official website
ML Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
