Hi!
Today’s post is about how to use Whisper to get a text transcription from my podcast episode audio. Based on the Podcast Copilot sample, I decided to use Whisper for this. Let’s start!
OpenAI – Whisper
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.
You can get more information about Whisper here:
- Blog, Introducing Whisper
- GitHub, https://github.com/openai/whisper
And, once installed (it’s a simple Python package), it’s super easy to use:
import whisper
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
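The same model can also handle the other tasks mentioned above. Here is a quick sketch, based on the snippets in the Whisper README, that detects the spoken language and then translates the speech into English text (the audio file name is just a placeholder):

import whisper

model = whisper.load_model("base")

# Load the audio, keep the first 30 seconds, and build the mel spectrogram
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Language identification
_, probs = model.detect_language(mel)
print("Detected language:", max(probs, key=probs.get))

# Speech translation: transcribe non-English audio directly into English
result = model.transcribe("audio.mp3", task="translate")
print(result["text"])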
Super easy! However, this approach may run into memory problems with large audio files.
The approach in the original Podcast Copilot is to split the podcast audio into small chunks and get the transcript for each of the small pieces. For example:
import whisper
from pydub import AudioSegment
from pydub.silence import split_on_silence

podcast_audio_file = "podcast_episode.mp3"  # placeholder path to the episode audio

# Chunk up the audio file on silences
sound_file = AudioSegment.from_mp3(podcast_audio_file)
audio_chunks = split_on_silence(sound_file, min_silence_len=1000, silence_thresh=-40)
count = len(audio_chunks)
print("Audio split into " + str(count) + " audio chunks \n")

# Call Whisper to transcribe each chunk
model = whisper.load_model("base")
transcript = ""
for i, chunk in enumerate(audio_chunks):
    # Only the first and last ~10 chunks are transcribed
    if i < 10 or i > count - 10:
        out_file = "chunk{0}.wav".format(i)
        print("\r\nExporting >>", out_file, " - ", i, "/", count)
        chunk.export(out_file, format="wav")
        result = model.transcribe(out_file)
        transcriptChunk = result["text"]
        print(transcriptChunk)
        transcript += " " + transcriptChunk

# Print the full transcript
print("Transcript: \n")
print(transcript)
print("\n")
And that’s it! Depending on your machine, this may take some time. I added a start/end time check, so you can get an idea of how long it takes to process an episode.
For example, for a 10-minute audio file, the transcript was done in roughly 06:05.
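As a reference, this is a minimal sketch of that kind of start/end check; the variable names are mine and it is not the exact code from the sample:

import time

start = time.time()

# ... run the chunking and transcription code from above ...

elapsed = time.time() - start
print("Transcription done in {0:02d}:{1:02d}".format(int(elapsed // 60), int(elapsed % 60)))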
The full source code is here:
Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info in https://beacons.ai/elbruno