Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

## Approach

A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.

## Setup

We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably OpenAI's tiktoken for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with `pip install -U openai-whisper`.

If tiktoken does not provide a pre-built wheel for your platform, you may also need to install `setuptools-rust`:

```
pip install setuptools-rust
```

## Available models and languages

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.

|  Size  | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
|  tiny  |    39 M    |     `tiny.en`      |       `tiny`       |     ~1 GB     |      ~32x      |
|  base  |    74 M    |     `base.en`      |       `base`       |     ~1 GB     |      ~16x      |
| small  |   244 M    |     `small.en`     |      `small`       |     ~2 GB     |      ~6x       |
| medium |   769 M    |    `medium.en`     |      `medium`      |     ~5 GB     |      ~2x       |
| large  |   1550 M   |        N/A         |      `large`       |    ~10 GB     |       1x       |

The `.en` models for English-only applications tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.

Whisper's performance varies widely depending on the language. The figure below shows a WER (Word Error Rate) breakdown by language on the Fleurs dataset using the large-v2 model (the smaller the numbers, the better the performance).
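To make the pieces above concrete, here is a minimal transcription sketch using the package's Python API; the model size and the audio filename (`audio.mp3`) are placeholders to adapt:

```python
import whisper

# Load one of the model sizes from the table above; "base" is a
# reasonable starting point for the speed/accuracy tradeoff.
model = whisper.load_model("base")

# Transcribe a local audio file (placeholder filename). The spoken
# language is detected automatically unless specified explicitly.
result = model.transcribe("audio.mp3")
print(result["text"])
```

Passing `task="translate"` to `transcribe` should instead translate the speech into English, matching the speech-translation task described in the approach section.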
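The same model-size choice surfaces on the command line. A brief sketch, assuming the `whisper` CLI that the package installs and using placeholder filenames:

```
whisper audio.mp3 --model medium.en
whisper japanese.wav --language Japanese --task translate
```

The first command uses an English-only model for an English recording; the second specifies the source language and requests translation into English.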
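Because performance varies by language, it can be useful to check which language the model detects before committing to a transcription. The lower-level API exposes the language-identification head directly; this sketch again uses a placeholder filename:

```python
import whisper

model = whisper.load_model("base")

# Load the audio and pad/trim it to the 30-second window the model expects.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# Compute the log-Mel spectrogram that the encoder consumes.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Language identification is one of the multitask outputs: the decoder
# scores the special language tokens and returns their probabilities.
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
```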