HMS Community
Capture the Documents using Huawei ML Kit in Patient Tracking Android app (Kotlin) – Part 4

Introduction

In this article, we will learn how to correct a document's position using the Document Skew Correction feature of Huawei ML Kit. This service automatically identifies the location of a document in an image and adjusts it to the angle facing the camera, even if the document is tilted. It is useful in daily life: for example, if you capture a document, bank card, or driving licence with the phone camera at an awkward angle, this feature corrects the document's perspective and returns a properly aligned image.

This is part of a series of articles on the Patient Tracking App; in upcoming articles I will integrate other Huawei kits.

If you are new to this application, follow my previous articles.

https://forums.developer.huawei.com/forumPortal/en/topic/0201902220661040078

https://forums.developer.huawei.com/forumPortal/en/topic/0201908355251870119

https://forums.developer.huawei.com/forumPortal/en/topic/0202914346246890032

Precautions

Ensure that the camera faces the document, the document occupies most of the image, and the document boundaries are within the viewfinder.
The best shooting angle is within 30 degrees of the document plane. If the shooting angle exceeds 30 degrees, the document boundaries must be clear enough to guarantee good results.

Requirements

  1. Any operating system (macOS, Linux, or Windows).
  2. A Huawei phone with HMS Core 4.0.0.300 or later.
  3. A laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 or later installed.
  4. Minimum API level 21.
  5. A device running EMUI 9.0.0 or later.

How to integrate HMS Dependencies

  • First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.

  • Create a project in Android Studio; refer to Creating an Android Studio Project.

  • Generate a SHA-256 certificate fingerprint.

  • To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio window, choose Project Name > Tasks > android, and then click signingReport, as follows.


Note: Project Name depends on the name given by the user.


  • Enter the SHA-256 certificate fingerprint and click the Save button, as follows.


  • Click the Manage APIs tab and enable ML Kit.


  • Add the maven URL below under the repositories of both buildscript and allprojects in the build.gradle (project) file, and the agcp classpath under the dependencies of buildscript; refer to Add Configuration.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
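For orientation, here is a sketch of how these two lines sit in a classic (pre-Gradle-7) project-level build.gradle; the Android Gradle plugin version shown is an assumption and should match your project:

```groovy
// Project-level build.gradle (sketch): the HMS maven repository goes in
// both repositories blocks; the AGC plugin classpath goes in buildscript.
buildscript {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.1.0'   // assumed AGP version
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
```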

  • Add the plugin and dependencies below in the build.gradle (module) file.
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.1.0.300'
// Import the document detection/correction model package.
implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.1.0.300'
  • Now sync the Gradle files.

  • Add the required permissions to the AndroidManifest.xml file.

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
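On Android 6.0 (API 23) and later, the camera and storage permissions declared above must also be granted at runtime. A minimal sketch of such a check (the helper name and the request code 100 are assumptions, not part of the original project):

```kotlin
// In the Activity: check the dangerous permissions from the manifest
// and request any that are still missing.
private val requiredPermissions = arrayOf(
    Manifest.permission.CAMERA,
    Manifest.permission.READ_EXTERNAL_STORAGE,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
)

private fun ensurePermissions() {
    val missing = requiredPermissions.filter {
        ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isNotEmpty()) {
        // 100 is an arbitrary request code; handle the user's choice
        // in onRequestPermissionsResult.
        ActivityCompat.requestPermissions(this, missing.toTypedArray(), 100)
    }
}
```

Call a helper like this from onCreate before opening the gallery or camera.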

Let us move to development

I have created a project in Android Studio with an empty activity; let us start coding.

In the DocumentCaptureActivity.kt we can find the business logic.

class DocumentCaptureActivity : AppCompatActivity(), View.OnClickListener {

    private val TAG: String = DocumentCaptureActivity::class.java.getSimpleName()
    private var analyzer: MLDocumentSkewCorrectionAnalyzer? = null
    private var mImageView: ImageView? = null
    private var bitmap: Bitmap? = null
    private var input: MLDocumentSkewCorrectionCoordinateInput? = null
    private var mlFrame: MLFrame? = null
    var imageUri: Uri? = null
    var FlagCameraClickDone = false
    var fabc: ImageView? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_document_capture)

        findViewById<View>(R.id.btn_click).setOnClickListener(this)
        mImageView = findViewById(R.id.image_result)
        // Create the setting.
        val setting = MLDocumentSkewCorrectionAnalyzerSetting.Factory()
            .create()
        // Get the analyzer.
        analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance()
            .getDocumentSkewCorrectionAnalyzer(setting)
        fabc = findViewById(R.id.fab)
        fabc!!.setOnClickListener(View.OnClickListener {
            FlagCameraClickDone = false
            val gallery =  Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
            startActivityForResult(gallery, 1)
        })

    }

    override fun onClick(v: View?) {
        this.analyzer()
    }

    private fun analyzer() {
        // Call document skew detect interface to get coordinate data
        val detectTask = analyzer!!.asyncDocumentSkewDetect(mlFrame)
        detectTask.addOnSuccessListener { detectResult ->
            if (detectResult != null) {
                val resultCode = detectResult.getResultCode()
                // Detect success.
                if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
                    val leftTop = detectResult.leftTopPosition
                    val rightTop = detectResult.rightTopPosition
                    val leftBottom = detectResult.leftBottomPosition
                    val rightBottom = detectResult.rightBottomPosition
                    val coordinates: MutableList<Point> =  ArrayList()
                    coordinates.add(leftTop)
                    coordinates.add(rightTop)
                    coordinates.add(rightBottom)
                    coordinates.add(leftBottom)
                    this@DocumentCaptureActivity.setDetectData(MLDocumentSkewCorrectionCoordinateInput(coordinates))
                    this@DocumentCaptureActivity.refineImg()}
                else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
                    // Parameters error.
                    Log.e(TAG, "Parameters error!")
                    this@DocumentCaptureActivity.displayFailure() }
                else if (resultCode == MLDocumentSkewCorrectionConstant.DETECT_FAILD) {
                    // Detect failure.
                    Log.e(TAG, "Detect failed!")
                    this@DocumentCaptureActivity.displayFailure()
                }
            } else {
                // Detect exception.
                Log.e(TAG, "Detect exception!")
                this@DocumentCaptureActivity.displayFailure()
            }
        }.addOnFailureListener { e -> // Processing logic for detect failure.
            Log.e(TAG, e.message + "")
            this@DocumentCaptureActivity.displayFailure()
        }
    }

    // Show result
    private fun displaySuccess(refineResult: MLDocumentSkewCorrectionResult) {
        if (bitmap == null) {
            this.displayFailure()
            return
        }
        // Display the corrected document image.
        val corrected = refineResult.getCorrected()
        if (corrected != null) {
            mImageView!!.setImageBitmap(corrected)
        } else {
            this.displayFailure()
        }
    }

    private fun displayFailure() {
        Toast.makeText(this.applicationContext, "Fail", Toast.LENGTH_LONG).show()
    }

    private fun setDetectData(input: MLDocumentSkewCorrectionCoordinateInput) {
        this.input = input
    }

    // Refine image
    private fun refineImg() {
        // Call refine image interface
        val correctionTask = analyzer!!.asyncDocumentSkewCorrect(mlFrame, input)
        correctionTask.addOnSuccessListener { refineResult ->
            if (refineResult != null) {
                val resultCode = refineResult.getResultCode()
                if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
                    this.displaySuccess(refineResult)
                } else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
                    // Parameters error.
                    Log.e(TAG, "Parameters error!")
                    this@DocumentCaptureActivity.displayFailure()
                } else if (resultCode == MLDocumentSkewCorrectionConstant.CORRECTION_FAILD) {
                    // Correct failure.
                    Log.e(TAG, "Correct failed!")
                    this@DocumentCaptureActivity.displayFailure()
                }
            } else {
                // Correct exception.
                Log.e(TAG, "Correct exception!")
                this@DocumentCaptureActivity.displayFailure()
            }
        }.addOnFailureListener { // Processing logic for refine failure.
            this@DocumentCaptureActivity.displayFailure()
        }
    }

    override fun onDestroy() {
        super.onDestroy()
        if (analyzer != null) {
            try {
                analyzer!!.stop()
            } catch (e: IOException) {
                Log.e(TAG, "Stop failed: " + e.message)
            }
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == RESULT_OK && requestCode == 1) {
            imageUri = data!!.data
            try {
                bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, imageUri)
                // Create a MLFrame by using the bitmap.
                mlFrame = MLFrame.Creator().setBitmap(bitmap).create()
            } catch (e: IOException) {
                e.printStackTrace()
            }
            // BitmapFactory.decodeResource(getResources(), R.drawable.new1);
            FlagCameraClickDone = true
            findViewById<View>(R.id.btn_click).visibility = View.VISIBLE
            mImageView!!.setImageURI(imageUri)
        }
    }

}
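Note that startActivityForResult and onActivityResult, used above, are deprecated in recent AndroidX releases. If your project uses androidx.activity 1.2.0 or later, the gallery pick can be sketched with the Activity Result API instead (the property name pickImage is an assumption):

```kotlin
// Alternative to startActivityForResult: register a result launcher once
// as a property, then launch it from the gallery button's click listener.
private val pickImage =
    registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
        uri ?: return@registerForActivityResult
        imageUri = uri
        bitmap = MediaStore.Images.Media.getBitmap(contentResolver, uri)
        // Create the MLFrame for skew detection from the chosen bitmap.
        mlFrame = MLFrame.Creator().setBitmap(bitmap).create()
        FlagCameraClickDone = true
        findViewById<View>(R.id.btn_click).visibility = View.VISIBLE
        mImageView?.setImageURI(uri)
    }

// In onCreate, replace the ACTION_PICK intent with:
// fabc!!.setOnClickListener { pickImage.launch("image/*") }
```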

In the activity_document_capture.xml we can create the UI screen.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context=".mlkit.DocumentCaptureActivity">

    <ImageView
        android:id="@+id/image_result"
        android:layout_width="400dp"
        android:layout_height="520dp"
        android:paddingLeft="5dp"
        android:paddingTop="5dp"
        android:src="@drawable/slip"
        android:paddingStart="5dp"
        android:paddingBottom="5dp"/>
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal"
        android:weightSum="4"
        android:layout_alignParentBottom="true"
        android:gravity="center_horizontal" >
        <ImageView
            android:id="@+id/cam"
            android:layout_width="0dp"
            android:layout_height="41dp"
            android:layout_margin="4dp"
            android:layout_weight="1"
            app:srcCompat="@android:drawable/ic_menu_gallery" />
        <Button
            android:id="@+id/btn_click"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_margin="4dp"
            android:textSize="19sp"
            android:layout_weight="2"
            android:textAllCaps="false"
            android:text="Capture" />
        <ImageView
            android:id="@+id/fab"
            android:layout_width="0dp"
            android:layout_height="42dp"
            android:layout_margin="4dp"
            android:layout_weight="1"
            app:srcCompat="@android:drawable/ic_menu_camera" />
    </LinearLayout>

</RelativeLayout>

Demo


Tips and Tricks

  1. Make sure you are already registered as a Huawei developer.

  2. Set minSdkVersion to 21 or later; otherwise you will get an AndroidManifest merge issue.

  3. Make sure you have added the agconnect-services.json file to the app folder.

  4. Make sure you have added the SHA-256 fingerprint without fail.

  5. Make sure all the dependencies are added properly.

Conclusion

In this article, we have learnt how to correct a document's position using the Document Skew Correction feature of Huawei ML Kit. This service automatically identifies the location of a document in an image and adjusts it to the angle facing the camera, even if the document is tilted.

I hope you have read this article. If you found it helpful, please like and comment.

Reference

ML Kit - Document Skew Correction

ML Kit Training Video
