
Jackson for HMS Core


Top Tips for Developing a Recordist Function

Efficient records management is more relevant now than ever. In our digital age, an ever-growing volume of information, including audio and video, has to be handled in limited time. This makes a real-time transcription function essential, because it is useful in many scenarios.
In audio or video conferencing, this function records meeting minutes that can be referred to later, which is far more convenient than writing them all down by hand. I've seen my kids struggling to take notes during their online courses, so I know this process can be much easier with the help of transcription: it removes the need to write down everything the teacher says, allowing the kids to focus on the lecture itself and easily review the content later. Live captions likewise provide viewers with real-time subtitles for a better watching experience.
As a coder, I believe that actions speak louder than words. That's why I developed a real-time transcription function, with the help of the real-time transcription capability from ML Kit, shown below.


[Image: demo of the real-time transcription function]
This function transcribes up to five hours of speech in real time into Chinese, English, both Chinese and English, or French. In addition, the output text is punctuated and contains timestamps.
The function has a couple of requirements: support for French depends on the mobile phone model, whereas Chinese and English are available on all models, and the function needs an Internet connection.
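Since the output text carries timestamps and the recognizer reports sentence and word offsets in milliseconds, it helps to have a formatter for them. Here is a quick, SDK-independent sketch; the class and method names are mine, not part of ML Kit:

```java
import java.util.concurrent.TimeUnit;

public class TimestampDemo {
    // Convert a millisecond offset into an HH:mm:ss.SSS transcript timestamp.
    static String format(long offsetMs) {
        long h = TimeUnit.MILLISECONDS.toHours(offsetMs);
        long m = TimeUnit.MILLISECONDS.toMinutes(offsetMs) % 60;
        long s = TimeUnit.MILLISECONDS.toSeconds(offsetMs) % 60;
        long ms = offsetMs % 1000;
        return String.format("%02d:%02d:%02d.%03d", h, m, s, ms);
    }

    public static void main(String[] args) {
        // 1 h, 2 min, 5 s, 42 ms into the recording.
        System.out.println(format(3_725_042L)); // prints "01:02:05.042"
    }
}
```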
Okay, let's move on to the point of this article: how I developed this real-time transcription function.

Development Procedure

i. Make necessary preparations. This is described in detail in the References section.
ii. Create and then configure a speech recognizer.

MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
    // Set the language, which can be Chinese, English, both Chinese and English, or French.
    .setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
    // Punctuate the text recognized from the speech.
    .enablePunctuation(true)
    // Set the sentence offset.
    .enableSentenceTimeOffset(true)
    // Set the word offset.
    .enableWordTimeOffset(true)
    .create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();

iii. Create a callback for the speech recognition result listener.

// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and the methods in the API.
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The speech recognizer detects the user speaking.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user. The API does not run in the main thread; the result is processed in a sub-thread.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive recognized text from MLSpeechRealTimeTranscription.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Callback when an error occurs during recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of the recognizer status change.
    }
}
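One detail worth stressing from the comments above: these callbacks arrive on a sub-thread, not the main thread, so partial results need thread-safe handling before they reach the UI. A minimal, SDK-independent sketch of an accumulator the callbacks could feed; all names here are mine, not part of ML Kit:

```java
public class TranscriptBuffer {
    private final StringBuilder finalText = new StringBuilder();
    private String partial = "";

    // Called from the recognizer's worker thread with an interim hypothesis.
    public synchronized void onPartial(String text) {
        partial = text;
    }

    // Called when a sentence is finalized; the interim text is replaced by it.
    public synchronized void onFinal(String sentence) {
        finalText.append(sentence).append('\n');
        partial = "";
    }

    // Called from the UI thread to render the current transcript.
    public synchronized String snapshot() {
        return finalText + partial;
    }

    public static void main(String[] args) {
        TranscriptBuffer buffer = new TranscriptBuffer();
        buffer.onPartial("hel");
        buffer.onFinal("hello.");
        System.out.println(buffer.snapshot()); // prints "hello."
    }
}
```

Each callback method would forward its text into one shared instance of this class, and the UI would poll `snapshot()` (or be notified) to redraw.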

iv. Bind the speech recognizer.

mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());

v. Call startRecognizing to begin speech recognition.

mSpeechRecognizer.startRecognizing(config);

vi. Stop recognition and release the resources occupied by the recognizer when recognition is complete.

if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
}
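The null check in step vi matters because cleanup can be triggered from more than one place, such as a stop button and the activity's onDestroy. A generic, SDK-independent sketch of making such a release idempotent; the names are mine, not from ML Kit:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ReleaseGuard {
    private final AtomicBoolean released = new AtomicBoolean(false);

    // Runs the cleanup exactly once, no matter how many callers race here.
    public boolean releaseOnce(Runnable cleanup) {
        if (released.compareAndSet(false, true)) {
            cleanup.run();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ReleaseGuard guard = new ReleaseGuard();
        guard.releaseOnce(() -> System.out.println("released")); // prints "released"
        guard.releaseOnce(() -> System.out.println("released")); // prints nothing
    }
}
```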

