Find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit in Android (Kotlin)

Introduction

In this article, we will learn how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability detects 21 hand keypoints (including fingertips, knuckles, and wrists) and returns their positions. The hand gesture recognition capability detects the rectangular areas of hands in images and videos and returns the type and confidence of each gesture; it can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
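
For orientation, here is a minimal sketch of how the analyzer at the heart of both capabilities is configured, using the same APIs shown later in this article; the scene type selects which results are returned.

// Minimal sketch of analyzer creation (same APIs as used in the activities below).
val setting = MLHandKeypointAnalyzerSetting.Factory()
    // TYPE_ALL returns keypoints and hand rectangles; TYPE_KEYPOINT_ONLY and
    // TYPE_RECT_ONLY restrict the result to one capability.
    .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
    .setMaxHandResults(2)
    .create()
val analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)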

Use Cases

Hand keypoint detection is widely used in daily life. For example, after integrating this capability, users can convert the detected hand keypoints into a 2D model and synchronize it with a character model to produce a vivid 2D animation. In addition, when shooting a short video, special effects can be generated from dynamic hand trajectories, letting users play finger games and making video shooting more creative and interactive. Hand gesture recognition enables your app to trigger commands by recognizing users' gestures, so users can control smart home appliances without touching them, making human-machine interaction more efficient.

Requirements

  1. Any operating system (macOS, Linux, or Windows).

  2. A Huawei phone with HMS 4.0.0.300 or later.

  3. A laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 or later installed.

  4. Minimum API level 21.

  5. A device running EMUI 9.0.0 or later.

How to integrate HMS Dependencies

  • First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.

  • Create a project in Android Studio; refer to Creating an Android Studio Project.

  • Generate a SHA-256 certificate fingerprint: in the upper-right corner of the Android Studio window, click Gradle, choose Project Name > Tasks > android, and then click signingReport.


Note: Project Name is the name of the project you created.

  • Create an App in AppGallery Connect.

  • Download the agconnect-services.json file from App information, then copy and paste it into your Android project under the app directory.

  • Enter the SHA-256 certificate fingerprint and click the Save button.

  • Click the Manage APIs tab and enable ML Kit.

  • In the build.gradle (Project) file, add the below Maven URL under the repositories of both buildscript and allprojects, and the classpath under buildscript dependencies; refer to Add Configuration.

maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
  • Add the below plugin and dependencies in the build.gradle (Module) file.

apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.5.300'
// ML Kit Hand Gesture
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'
  • Now sync the Gradle files.

  • Add the required permissions to the AndroidManifest.xml file.

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />

Let us move to development

I have created a project in Android Studio with an empty activity. Let us start coding.

In MainActivity.kt, we can find the business logic for the buttons.

class MainActivity : AppCompatActivity() {

    private var staticButton: Button? = null
    private var liveButton: Button? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        staticButton = findViewById(R.id.btn_static)
        liveButton = findViewById(R.id.btn_live)

        staticButton!!.setOnClickListener {
            val intent = Intent(this@MainActivity, StaticHandKeyPointAnalyse::class.java)
            startActivity(intent)
        }
        liveButton!!.setOnClickListener {
            val intent = Intent(this@MainActivity, LiveHandKeyPointAnalyse::class.java)
            startActivity(intent)
        }

    }
}
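
Note: Both StaticHandKeyPointAnalyse and LiveHandKeyPointAnalyse must also be declared as activities in AndroidManifest.xml; otherwise, the startActivity() calls above will crash at runtime.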

In LiveHandKeyPointAnalyse.kt, we can find the business logic for live analysis.

class LiveHandKeyPointAnalyse : AppCompatActivity(), View.OnClickListener {

    private val TAG: String = LiveHandKeyPointAnalyse::class.java.getSimpleName()
    private var mPreview: LensEnginePreview? = null
    private var mOverlay: GraphicOverlay? = null
    private var mFacingSwitch: Button? = null
    private var mAnalyzer: MLHandKeypointAnalyzer? = null
    private var mLensEngine: LensEngine? = null
    private var mLensType = LensEngine.BACK_LENS
    private var isFront = false
    private var isPermissionRequested = false
    private val CAMERA_PERMISSION_CODE = 0
    private val ALL_PERMISSION = arrayOf(Manifest.permission.CAMERA)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_live_hand_key_point_analyse)

        if (savedInstanceState != null) {
            mLensType = savedInstanceState.getInt("lensType")
        }
        initView()
        createHandAnalyzer()
        if (Camera.getNumberOfCameras() == 1) {
            mFacingSwitch!!.visibility = View.GONE
        }
        // Checking Camera Permissions
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            createLensEngine()
        } else {
            checkPermission()
        }
    }


    private fun initView() {
        mPreview = findViewById(R.id.hand_preview)
        mOverlay = findViewById(R.id.hand_overlay)
        mFacingSwitch = findViewById(R.id.handswitch)
        mFacingSwitch!!.setOnClickListener(this)
    }

    private fun createHandAnalyzer() {
        // Create an analyzer. You can customize it using the hand keypoint detection parameter: MLHandKeypointAnalyzerSetting.
        val setting = MLHandKeypointAnalyzerSetting.Factory()
            .setMaxHandResults(2)
            .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
            .create()
        mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
        mAnalyzer!!.setTransactor(HandAnalyzerTransactor(this, mOverlay!!))
    }

    // Check the permissions required by the SDK.
    private fun checkPermission() {
        if (Build.VERSION.SDK_INT >= 23 && !isPermissionRequested) {
            isPermissionRequested = true
            val permissionsList = ArrayList<String>()
            for (perm in getAllPermission()) {
                if (PackageManager.PERMISSION_GRANTED != checkSelfPermission(perm)) {
                    permissionsList.add(perm)
                }
            }
            if (permissionsList.isNotEmpty()) {
                requestPermissions(permissionsList.toTypedArray(), CAMERA_PERMISSION_CODE)
            }
        }
    }

    private fun getAllPermission(): List<String> {
        return Collections.unmodifiableList(ALL_PERMISSION.toList())
    }

    private fun createLensEngine() {
        val context = this.applicationContext
        // Create LensEngine.
        mLensEngine = LensEngine.Creator(context, mAnalyzer)
            .setLensType(mLensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create()
    }

    private fun startLensEngine() {
        if (mLensEngine != null) {
            try {
                mPreview!!.start(mLensEngine, mOverlay)
            } catch (e: IOException) {
                Log.e(TAG, "Failed to start lens engine.", e)
                mLensEngine!!.release()
                mLensEngine = null
            }
        }
    }

    // Permission application callback.
    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
        if (requestCode == CAMERA_PERMISSION_CODE) {
            if (grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                createLensEngine()
            } else {
                // If the user denied with "Don't ask again", guide them to Settings; otherwise explain and exit.
                if (permissions.isNotEmpty() && !ActivityCompat.shouldShowRequestPermissionRationale(this, permissions[0]!!)) {
                    showWarningDialog()
                } else {
                    Toast.makeText(this, R.string.toast, Toast.LENGTH_SHORT).show()
                    finish()
                }
            }
            return
        }
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    }

    override fun onSaveInstanceState(outState: Bundle) {
        outState.putInt("lensType", mLensType)
        super.onSaveInstanceState(outState)
    }

    private class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyse?,
        private val mGraphicOverlay: GraphicOverlay) : MLTransactor<MLHandKeypoints?> {
        // Process the results returned by the analyzer.
        override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints?>) {
            mGraphicOverlay.clear()
            val handKeypointsSparseArray = result.analyseList
            val list: MutableList<MLHandKeypoints?> = ArrayList()
            for (i in 0 until handKeypointsSparseArray.size()) {
                list.add(handKeypointsSparseArray.valueAt(i))
            }
            val graphic = HandKeypointGraphic(mGraphicOverlay, list)
            mGraphicOverlay.add(graphic)
        }
        override fun destroy() {
            mGraphicOverlay.clear()
        }
    }

    override fun onClick(v: View?) {
        when (v!!.id) {
            R.id.handswitch -> switchCamera()
            else -> {}
        }
    }

    private fun switchCamera() {
        isFront = !isFront
        mLensType = if (isFront) {
            LensEngine.FRONT_LENS
        } else {
            LensEngine.BACK_LENS
        }
        if (mLensEngine != null) {
            mLensEngine!!.close()
        }
        createLensEngine()
        startLensEngine()
    }

    private fun showWarningDialog() {
        val dialog = AlertDialog.Builder(this)
        dialog.setMessage(R.string.Information_permission)
            .setPositiveButton(R.string.go_authorization,
                DialogInterface.OnClickListener { _, _ ->
                    // Open this app's details page so the user can grant the permission manually.
                    val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
                    val uri = Uri.fromParts("package", applicationContext.packageName, null)
                    intent.data = uri
                    startActivity(intent)
                })
            .setNegativeButton("Cancel", DialogInterface.OnClickListener { _, _ -> finish() })
            .setOnCancelListener(dialogInterface)
        dialog.setCancelable(false)
        dialog.show()
    }

    private val dialogInterface = DialogInterface.OnCancelListener { }

    override fun onResume() {
        super.onResume()
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            createLensEngine()
            startLensEngine()
        } else {
            checkPermission()
        }
    }

    override fun onPause() {
        super.onPause()
        mPreview!!.stop()
    }

    override fun onDestroy() {
        super.onDestroy()
        if (mLensEngine != null) {
            mLensEngine!!.release()
        }
        if (mAnalyzer != null) {
            mAnalyzer!!.stop()
        }
    }

}

Create the LensEnginePreview.kt class, which holds the business logic for the lens engine preview.

class LensEnginePreview(private val mContext: Context, attrs: AttributeSet?) : ViewGroup(mContext, attrs) {

    private val mSurfaceView: SurfaceView
    private var mStartRequested = false
    private var mSurfaceAvailable = false
    private var mLensEngine: LensEngine? = null
    private var mOverlay: GraphicOverlay? = null

    @Throws(IOException::class)
    fun start(lensEngine: LensEngine?) {
        if (lensEngine == null) {
            stop()
        }
        mLensEngine = lensEngine
        if (mLensEngine != null) {
            mStartRequested = true
            startIfReady()
        }
    }

    @Throws(IOException::class)
    fun start(lensEngine: LensEngine?, overlay: GraphicOverlay?) {
        mOverlay = overlay
        this.start(lensEngine)
    }

    fun stop() {
        if (mLensEngine != null) {
            mLensEngine!!.close()
        }
    }

    @Throws(IOException::class)
    private fun startIfReady() {
        if (mStartRequested && mSurfaceAvailable) {
            mLensEngine!!.run(mSurfaceView.holder)
            if (mOverlay != null) {
                val size = mLensEngine!!.displayDimension
                val min = Math.min(size.width, size.height)
                val max = Math.max(size.width, size.height)
                if (isPortraitMode) {
                    // Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
                    mOverlay!!.setCameraInfo(min, max, mLensEngine!!.lensType)
                } else {
                    mOverlay!!.setCameraInfo(max, min, mLensEngine!!.lensType)
                }
                mOverlay!!.clear()
            }
            mStartRequested = false
        }
    }

    private inner class SurfaceCallback : SurfaceHolder.Callback {
        override fun surfaceCreated(surface: SurfaceHolder) {
            mSurfaceAvailable = true
            try {
                startIfReady()
            } catch (e: IOException) {
                Log.e(TAG, "Could not start camera source.", e)
            }
        }
        override fun surfaceDestroyed(surface: SurfaceHolder) {
            mSurfaceAvailable = false
        }
        override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
    }

    override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
        var previewWidth = 480
        var previewHeight = 360
        if (mLensEngine != null) {
            val size = mLensEngine!!.displayDimension
            if (size != null) {
                previewWidth = size.width
                previewHeight = size.height
            }
        }
        // Swap width and height sizes when in portrait, since it will be rotated 90 degrees
        if (isPortraitMode) {
            val tmp = previewWidth
            previewWidth = previewHeight
            previewHeight = tmp
        }
        val viewWidth = right - left
        val viewHeight = bottom - top
        val childWidth: Int
        val childHeight: Int
        var childXOffset = 0
        var childYOffset = 0
        val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
        val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
        // To fill the view with the camera preview, while also preserving the correct aspect ratio,
        // it is usually necessary to slightly oversize the child and to crop off portions along one
        // of the dimensions. We scale up based on the dimension requiring the most correction, and
        // compute a crop offset for the other dimension.
        if (widthRatio > heightRatio) {
            childWidth = viewWidth
            childHeight = (previewHeight.toFloat() * widthRatio).toInt()
            childYOffset = (childHeight - viewHeight) / 2
        } else {
            childWidth = (previewWidth.toFloat() * heightRatio).toInt()
            childHeight = viewHeight
            childXOffset = (childWidth - viewWidth) / 2
        }
        for (i in 0 until this.childCount) {
            // One dimension will be cropped. We shift child over or up by this offset and adjust
            // the size to maintain the proper aspect ratio.
            getChildAt(i).layout(-1 * childXOffset, -1 * childYOffset,
                childWidth - childXOffset,childHeight - childYOffset )
        }
        try {
            startIfReady()
        } catch (e: IOException) {
            Log.e(TAG, "Could not start camera source.", e)
        }
    }

    private val isPortraitMode: Boolean
        get() {
            val orientation = mContext.resources.configuration.orientation
            if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                return false
            }
            if (orientation == Configuration.ORIENTATION_PORTRAIT) {
                return true
            }
            Log.d(TAG, "isPortraitMode returning false by default")
            return false
        }

    companion object {
        private val TAG = LensEnginePreview::class.java.simpleName
    }

    init {
        mSurfaceView = SurfaceView(mContext)
        mSurfaceView.holder.addCallback(SurfaceCallback())
        this.addView(mSurfaceView)
    }
}
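
As a worked example of the crop computation in onLayout(): with a 640 x 480 preview in a 1080 x 1920 portrait view, the preview sizes are swapped to 480 x 640, giving widthRatio = 1080 / 480 = 2.25 and heightRatio = 1920 / 640 = 3.0. Since heightRatio is larger, the child is laid out at 1440 x 1920 and shifted left by childXOffset = (1440 - 1080) / 2 = 180 pixels, cropping 180 pixels on each side while preserving the aspect ratio.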

Create the HandKeypointGraphic.kt class, which draws the detected hand keypoints and bounding boxes.

class HandKeypointGraphic(overlay: GraphicOverlay?, private val handKeypoints: MutableList<MLHandKeypoints?>) : GraphicOverlay.Graphic(overlay!!) {

    private val rectPaint: Paint
    private val idPaintnew: Paint
    companion object {
        private const val BOX_STROKE_WIDTH = 5.0f
    }

    private fun translateRect(rect: Rect): Rect {
        var left: Float = translateX(rect.left)
        var right: Float = translateX(rect.right)
        var bottom: Float = translateY(rect.bottom)
        var top: Float = translateY(rect.top)
        if (left > right) {
            val size = left
            left = right
            right = size
        }
        if (bottom < top) {
            val size = bottom
            bottom = top
            top = size
        }
        return Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
    }

    init {
        val selectedColor = Color.WHITE
        idPaintnew = Paint()
        idPaintnew.color = Color.GREEN
        idPaintnew.textSize = 32f
        rectPaint = Paint()
        rectPaint.color = selectedColor
        rectPaint.style = Paint.Style.STROKE
        rectPaint.strokeWidth = BOX_STROKE_WIDTH
    }

    override fun draw(canvas: Canvas?) {
        for (i in handKeypoints.indices) {
            val mHandKeypoints = handKeypoints[i]
            if (mHandKeypoints!!.getHandKeypoints() == null) {
                continue
            }
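            // Draw the bounding box of the hand.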
            val rect = translateRect(handKeypoints[i]!!.getRect())
            canvas!!.drawRect(rect, rectPaint)
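            // Draw each keypoint, skipping points reported at (0, 0).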
            for (handKeypoint in mHandKeypoints.getHandKeypoints()) {
                if (!(Math.abs(handKeypoint.getPointX() - 0f) == 0f && Math.abs(handKeypoint.getPointY() - 0f) == 0f)) {
                    canvas!!.drawCircle(translateX(handKeypoint.getPointX().toInt()),
                        translateY(handKeypoint.getPointY().toInt()), 24f, idPaintnew)
                }
            }
        }
    }
}

Create the GraphicOverlay.kt class, which holds the business logic for the graphic overlay.

class GraphicOverlay(context: Context?, attrs: AttributeSet?) : View(context, attrs) {

    private val mLock = Any()
    private var mPreviewWidth = 0
    private var mWidthScaleFactor = 1.0f
    private var mPreviewHeight = 0
    private var mHeightScaleFactor = 1.0f
    private var mFacing = LensEngine.BACK_LENS
    private val mGraphics: MutableSet<Graphic> = HashSet()

    // Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
    // this and implement the [Graphic.draw] method to define the graphics element. Add instances to the overlay using [GraphicOverlay.add].
    abstract class Graphic(private val mOverlay: GraphicOverlay) {
         // Draw the graphic on the supplied canvas. Drawing should use the following methods to
         // convert to view coordinates for the graphics that are drawn:
         // 1. [Graphic.scaleX] and [Graphic.scaleY] adjust the size of the supplied value from the preview scale to the view scale.
         // 2. [Graphic.translateX] and [Graphic.translateY] adjust the coordinate from the preview's coordinate system to the view coordinate system.
         // @param canvas drawing canvas
        abstract fun draw(canvas: Canvas?)

        // Adjusts a horizontal value of the supplied value from the preview scale to the view scale.
        fun scaleX(horizontal: Float): Float {
            return horizontal * mOverlay.mWidthScaleFactor
        }


        // Adjusts a vertical value of the supplied value from the preview scale to the view scale.
        fun scaleY(vertical: Float): Float {
            return vertical * mOverlay.mHeightScaleFactor
        }

        // Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
        fun translateX(x: Int): Float {
            return if (mOverlay.mFacing == LensEngine.FRONT_LENS) {
                mOverlay.width - scaleX(x.toFloat())
            } else {
                scaleX(x.toFloat())
            }
        }
        // Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
        fun translateY(y: Int): Float {
            return scaleY(y.toFloat())
        }

    }

    // Removes all graphics from the overlay.
    fun clear() {
        synchronized(mLock) { mGraphics.clear() }
        postInvalidate()
    }

    // Adds a graphic to the overlay.
    fun add(graphic: Graphic) {
        synchronized(mLock) { mGraphics.add(graphic) }
        postInvalidate()
    }

    // Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
    fun setCameraInfo(previewWidth: Int, previewHeight: Int, facing: Int) {
        synchronized(mLock) {
            mPreviewWidth = previewWidth
            mPreviewHeight = previewHeight
            mFacing = facing
        }
        postInvalidate()
    }

    // Draws the overlay with its associated graphic objects.
    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        synchronized(mLock) {
            if (mPreviewWidth != 0 && mPreviewHeight != 0) {
                mWidthScaleFactor = canvas.width.toFloat() / mPreviewWidth.toFloat()
                mHeightScaleFactor = canvas.height.toFloat() / mPreviewHeight.toFloat()
            }
            for (graphic in mGraphics) {
                graphic.draw(canvas)
            }
        }
    }
}
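
For example, if the preview is 640 x 480 and the overlay canvas is 1280 x 960, both scale factors are 2.0, so a keypoint at preview x = 100 is drawn at view x = 200 with the back camera; with the front camera, translateX() mirrors it to 1280 - 200 = 1080.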

In StaticHandKeyPointAnalyse.kt, we can find the business logic for static hand keypoint analysis.

class StaticHandKeyPointAnalyse : AppCompatActivity() {

    var analyzer: MLHandKeypointAnalyzer? = null
    var bitmap: Bitmap? = null
    var mutableBitmap: Bitmap? = null
    var mlFrame: MLFrame? = null
    var imageSelected: ImageView? = null
    var picUri: Uri? = null
    var pickButton: Button? = null
    var analyzeButton:Button? = null
    var permissions = arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE,
        Manifest.permission.CAMERA)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_static_hand_key_point_analyse)

        pickButton = findViewById(R.id.pick_img)
        analyzeButton = findViewById(R.id.analyse_img)
        imageSelected = findViewById(R.id.selected_img)
        initialiseSettings()
        pickButton!!.setOnClickListener(View.OnClickListener {
            pickRequiredImage()
        })
        analyzeButton!!.setOnClickListener(View.OnClickListener {
            asynchronouslyStaticHandkey()
        })
        checkRequiredPermission()

    }

    private fun checkRequiredPermission() {
        if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
            || PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
            || PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)) {
            ActivityCompat.requestPermissions(this, permissions, 111)
        }
    }

    private fun initialiseSettings() {
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        val setting = MLHandKeypointAnalyzerSetting.Factory()
                .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
                // Set the maximum number of hand regions that can be detected in an image.
                // By default, a maximum of 10 hand regions can be detected.
                .setMaxHandResults(1)
                .create()
        analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
    }

    private fun asynchronouslyStaticHandkey() {
        // Analyze the selected image asynchronously and draw a green circle on each detected keypoint.
        val task = analyzer!!.asyncAnalyseFrame(mlFrame)
        task.addOnSuccessListener { results ->
            if (results.isNotEmpty()) {
                val canvas = Canvas(mutableBitmap!!)
                val paint = Paint()
                paint.color = Color.GREEN
                paint.style = Paint.Style.FILL
                val mlHandKeypoints = results[0]
                for (mlHandKeypoint in mlHandKeypoints.getHandKeypoints()) {
                    canvas.drawCircle(mlHandKeypoint.pointX, mlHandKeypoint.pointY, 48f, paint)
                }
                imageSelected!!.setImageBitmap(mutableBitmap)
            }
            checkAnalyserForStop()
        }.addOnFailureListener { // Detection failure.
            checkAnalyserForStop()
        }
    }

    private fun checkAnalyserForStop() {
        if (analyzer != null) {
            analyzer!!.stop()
        }
    }

    private fun pickRequiredImage() {
        val intent = Intent()
        intent.type = "image/*"
        intent.action = Intent.ACTION_PICK
        startActivityForResult(Intent.createChooser(intent, "Select Picture"), 20)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == 20 && resultCode == RESULT_OK && null != data) {
            picUri = data.data
            imageSelected!!.setImageURI(picUri)
            imageSelected!!.invalidate()
            // Make a mutable copy of the displayed bitmap to draw on, and build the MLFrame.
            val drawable = imageSelected!!.drawable as BitmapDrawable
            bitmap = drawable.bitmap
            mutableBitmap = bitmap!!.copy(Bitmap.Config.ARGB_8888, true)
            mlFrame = MLFrame.fromBitmap(bitmap)
        }
    }
}
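
If you prefer a blocking call instead of the asynchronous listener above, the analyzer also offers a synchronous variant. Below is a minimal sketch, assuming analyseFrame() is the blocking counterpart of asyncAnalyseFrame() and returns a SparseArray&lt;MLHandKeypoints&gt;; since it blocks, run it off the main thread.

// Hedged sketch: synchronous analysis of the same MLFrame prepared above.
val results: SparseArray<MLHandKeypoints> = analyzer!!.analyseFrame(mlFrame)
for (i in 0 until results.size()) {
    val hand = results.valueAt(i)
    Log.d("StaticAnalyse", "hand $i: ${hand.handKeypoints.size} keypoints")
}
analyzer!!.stop()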

In activity_main.xml, we can create the UI screen.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/btn_static"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Static Detection"
        android:textAllCaps="false"
        android:textSize="18sp"
        app:layout_constraintBottom_toTopOf="@+id/btn_live"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        android:textColor="@color/black"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        app:layout_constraintTop_toTopOf="parent" />

    <Button
        android:id="@+id/btn_live"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Live Detection"
        android:textAllCaps="false"
        android:textSize="18sp"
        android:layout_marginBottom="150dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        android:textColor="@color/black"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        app:layout_constraintRight_toRightOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

In activity_live_hand_key_point_analyse.xml, we can create the UI screen.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".LiveHandKeyPointAnalyse">

    <com.example.mlhandgesturesample.LensEnginePreview
        android:id="@+id/hand_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:ignore="MissingClass">
        <com.example.mlhandgesturesample.GraphicOverlay
            android:id="@+id/hand_overlay"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
    </com.example.mlhandgesturesample.LensEnginePreview>

    <Button
        android:id="@+id/handswitch"
        android:layout_width="35dp"
        android:layout_height="35dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        android:layout_marginBottom="35dp"
        android:background="@drawable/front_back_switch"
        android:textOff=""
        android:textOn=""
        tools:ignore="MissingConstraints" />

</androidx.constraintlayout.widget.ConstraintLayout>
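
Note: android:textOff and android:textOn only take effect on a ToggleButton; on the plain Button used here they are ignored, so the camera-switch control relies entirely on its background drawable.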

In activity_static_hand_key_point_analyse.xml, we can create the UI screen.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".StaticHandKeyPointAnalyse">

    <com.google.android.material.button.MaterialButton
        android:id="@+id/pick_img"
        android:text="Pick Image"
        android:textSize="18sp"
        android:textColor="@android:color/black"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textAllCaps="false"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toTopOf="@id/selected_img"
        app:layout_constraintLeft_toLeftOf="@id/selected_img"
        app:layout_constraintRight_toRightOf="@id/selected_img"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"/>

    <ImageView
        android:visibility="visible"
        android:id="@+id/selected_img"
        android:layout_width="350dp"
        android:layout_height="350dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <com.google.android.material.button.MaterialButton
        android:id="@+id/analyse_img"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="@android:color/black"
        android:text="Analyse"
        android:textSize="18sp"
        android:textAllCaps="false"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintTop_toBottomOf="@id/selected_img"
        app:layout_constraintLeft_toLeftOf="@id/selected_img"
        app:layout_constraintRight_toRightOf="@id/selected_img"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"/>

</androidx.constraintlayout.widget.ConstraintLayout>

Demo


Tips and Tricks

  1. Make sure you are already registered as a Huawei developer.

  2. Set minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.

  3. Make sure you have added the agconnect-services.json file to the app folder.

  4. Make sure you have added the SHA-256 fingerprint without fail.

  5. Make sure all the dependencies are added properly.

Conclusion

In this article, we have learned how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. We configured the hand keypoint analyzer, which detects 21 hand keypoints (including fingertips, knuckles, and wrists) and returns their positions, and used it in two modes: static-image analysis and real-time detection from the camera stream, drawing the detected points and hand rectangles on screen.

I hope you found this article helpful. If so, please leave likes and comments.

Reference

ML Kit – Hand Gesture Recognition

ML Kit – Training Video
