whisper-medium
Note:
Previously, our STT models operated via a single API call to POST https://api.apilaplas.com/v1/stt. You can view the API schema here.
Now, we are switching to a new two-step process:
POST https://api.apilaplas.com/v1/stt/create – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a generation_id instead of the final transcript.
GET https://api.apilaplas.com/v1/stt/{generation_id} – Retrieves the generated transcript from the server using the generation_id obtained from the previous API call.
This approach helps prevent generation failures due to timeouts. We've prepared a couple of examples below to make the transition to the new STT API easier for you.
Model Overview
The Whisper models are intended primarily for AI research, with a focus on model robustness, generalization, and bias, and they are also effective for English speech recognition. Using Whisper models to transcribe recordings made without consent, or in high-risk decision-making contexts, is strongly discouraged due to potential inaccuracies and ethical concerns.
The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total.
Set up your API Key
If you don’t have an API key for the Apilaplas API yet, feel free to use our Quickstart guide.
Submit a request
API Schema
Creating and sending a speech-to-text conversion task to the server: POST https://api.apilaplas.com/v1/stt/create
Requesting the result of the task from the server using the generation_id: GET https://api.apilaplas.com/v1/stt/{generation_id}
Both endpoints are authenticated with your API key passed as a Bearer token in the Authorization header.
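As a minimal sketch of this schema, the snippet below shows the two raw HTTP calls in Python. The request body fields (model, url) and the response fields (generation_id) are illustrative assumptions based on the descriptions above, not a confirmed contract; consult the API schema for the authoritative shapes.

```python
import requests

API_KEY = "<YOUR_APILAPLAS_API_KEY>"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: create the task. The body fields ("model", "url") are assumed;
# the endpoint accepts the same parameters as the old POST /v1/stt call.
create = requests.post(
    "https://api.apilaplas.com/v1/stt/create",
    headers=HEADERS,
    json={"model": "#g1_whisper-medium", "url": "https://example.com/sample.mp3"},
)
create.raise_for_status()
generation_id = create.json()["generation_id"]

# Step 2: fetch the result using the generation_id from step 1.
result = requests.get(
    f"https://api.apilaplas.com/v1/stt/{generation_id}",
    headers=HEADERS,
)
result.raise_for_status()
print(result.json())
```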
Quick Code Examples
Let's use the #g1_whisper-medium model to transcribe the following audio fragment:
Example #1: Processing a Speech Audio File via URL
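A minimal sketch of this example, assuming the create endpoint accepts a public audio URL in its JSON body and that the retrieval response exposes a status field alongside the transcript; those field names and status values are assumptions, not confirmed by the schema:

```python
import time
import requests

API_KEY = "<YOUR_APILAPLAS_API_KEY>"
BASE_URL = "https://api.apilaplas.com/v1/stt"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def transcribe_url(audio_url: str) -> dict:
    # Submit the task; "url" is an assumed parameter name.
    response = requests.post(
        f"{BASE_URL}/create",
        headers=HEADERS,
        json={"model": "#g1_whisper-medium", "url": audio_url},
    )
    response.raise_for_status()
    generation_id = response.json()["generation_id"]

    # Poll until the transcript is ready. The "status" field and its
    # values are assumptions about the response shape.
    while True:
        result = requests.get(f"{BASE_URL}/{generation_id}", headers=HEADERS)
        result.raise_for_status()
        payload = result.json()
        if payload.get("status") in ("completed", "succeeded"):
            return payload
        if payload.get("status") in ("failed", "error"):
            raise RuntimeError(f"Transcription failed: {payload}")
        time.sleep(2)  # back off between polls

if __name__ == "__main__":
    print(transcribe_url("https://example.com/audio/sample.mp3"))
```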
Example #2: Processing a Speech Audio File via File Path
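And a corresponding sketch for a local file, assuming the create endpoint also accepts a multipart/form-data upload under an "audio" form field; that field name, like the rest of the request shape, is an assumption to verify against the schema:

```python
import time
import requests

API_KEY = "<YOUR_APILAPLAS_API_KEY>"
BASE_URL = "https://api.apilaplas.com/v1/stt"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def transcribe_file(path: str) -> dict:
    # Upload the local file as multipart/form-data; the "audio" file
    # field and the "model" form parameter are illustrative assumptions.
    with open(path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/create",
            headers=HEADERS,
            files={"audio": f},
            data={"model": "#g1_whisper-medium"},
        )
    response.raise_for_status()
    generation_id = response.json()["generation_id"]

    # Retrieve the transcript once processing finishes (assumed
    # "status" field, as in Example #1).
    while True:
        result = requests.get(f"{BASE_URL}/{generation_id}", headers=HEADERS)
        result.raise_for_status()
        payload = result.json()
        if payload.get("status") in ("completed", "succeeded"):
            return payload
        if payload.get("status") in ("failed", "error"):
            raise RuntimeError(f"Transcription failed: {payload}")
        time.sleep(2)

if __name__ == "__main__":
    print(transcribe_file("sample.mp3"))
```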