Have you ever wondered how to build Android apps that use machine learning? Then you’ve come to the right place. This article covers the basics of Google’s ML Kit SDK, a one-stop solution that lets mobile developers, on both iOS and Android, incorporate a wide range of machine learning use cases into their applications.
The ML Kit SDK is broadly divided into two groups of APIs:
- Vision APIs: Barcode Scanning, Face Detection, Image Labeling, Object Detection and Tracking, Text Recognition, Digital Ink Recognition, Pose Detection, and Selfie Segmentation.
- Natural Language APIs: Language ID, On-device Translation, Smart Reply, and Entity Extraction.
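To give a feel for how lightweight these APIs are to call, here is a minimal Language ID sketch. This isn’t taken from the presentation; it’s an illustrative example that assumes the `com.google.mlkit:language-id` dependency is declared in your Gradle file and runs inside an Android component.

```kotlin
import android.util.Log
import com.google.mlkit.nl.languageid.LanguageIdentification

fun identifyLanguage(text: String) {
    // Obtain an on-device language identifier client.
    val languageIdentifier = LanguageIdentification.getClient()

    languageIdentifier.identifyLanguage(text)
        .addOnSuccessListener { languageCode ->
            // "und" means the language could not be determined.
            if (languageCode == "und") {
                Log.w("LangId", "Couldn't identify the language")
            } else {
                Log.d("LangId", "Identified language: $languageCode")
            }
        }
        .addOnFailureListener { e ->
            Log.e("LangId", "Language identification failed", e)
        }
}
```

The other Natural Language APIs follow the same asynchronous Task-based pattern, so once you’ve wired up one of them, the rest feel familiar.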
As an Android developer, I’ve used the ML Kit SDK in my projects, so when I had the opportunity to speak at Programmers’ Week 2021 (Cognizant Softvision’s largest tech event), I knew this was the right choice for a presentation topic.
During my Programmers’ Week presentation, I gave a brief introduction to each of the ML Kit APIs mentioned above, then focused more closely on the Pose Detection API.
When it comes to Pose Detection, you can use the device’s camera (ideally via Jetpack’s CameraX library) to detect poses, then create and train a machine learning model for pose classification. An example of pose classification would be determining whether the person detected in the frame is doing push-ups with proper form. Pose detection runs in one of two modes: STREAM_MODE, for live camera feeds, or SINGLE_IMAGE_MODE, for still images. If a person is detected in the frame, the Pose Detection API returns a Pose object containing 33 PoseLandmarks.
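The flow above can be sketched roughly as follows. This is a simplified illustration, not the exact code from the presentation: it assumes the `com.google.mlkit:pose-detection` dependency is in your Gradle file, and that `bitmap` and `rotationDegrees` are supplied by your camera pipeline (with CameraX you would typically convert each frame via `InputImage.fromMediaImage` instead).

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// STREAM_MODE suits a live camera feed; switch to
// SINGLE_IMAGE_MODE when processing one-off still images.
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()

val poseDetector = PoseDetection.getClient(options)

fun detectPose(bitmap: Bitmap, rotationDegrees: Int) {
    val image = InputImage.fromBitmap(bitmap, rotationDegrees)
    poseDetector.process(image)
        .addOnSuccessListener { pose ->
            // When a person is in frame, the Pose holds 33 landmarks.
            Log.d("PoseDetection", "Landmarks: ${pose.allPoseLandmarks.size}")

            // Each landmark exposes an image-space position and a confidence score.
            pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)?.let { shoulder ->
                Log.d(
                    "PoseDetection",
                    "Left shoulder at ${shoulder.position}, " +
                        "inFrameLikelihood=${shoulder.inFrameLikelihood}"
                )
            }
        }
        .addOnFailureListener { e ->
            Log.e("PoseDetection", "Pose detection failed", e)
        }
}
```

The landmark positions returned here are what you would then feed into your own pose-classification logic, such as a push-up counter.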
To learn more about Google’s ML Kit SDK, watch my presentation below, where you will also find a step-by-step walkthrough of incorporating pose detection into your project.
Additional useful links for getting started with Google’s ML Kit: