# Whisper model files in custom `ggml` format
The original Whisper PyTorch models provided by OpenAI are converted to a custom `ggml` format so that they can be loaded in C/C++. Conversion is performed using the `convert-pt-to-ggml.py` script.

There are three ways to obtain `ggml` models:
1. Use `download-ggml-model.sh` to download pre-converted models (the quantized variants from the table under "Available models" can be downloaded the same way; see the examples after this list). Example download:

   ```
   $ ./download-ggml-model.sh base.en
   Downloading ggml model base.en ...
   models/ggml-base.en.bin       100%[=============================================>] 141.11M  5.41MB/s    in 22s
   Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
   You can now use it like this:

     $ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
   ```
2. Manually download pre-converted models

   `ggml` models are available from the following location (see also the direct-download example after this list):

   - https://huggingface.co/ggerganov/whisper.cpp/tree/main
3. Convert with `convert-pt-to-ggml.py`

   Download one of the models provided by OpenAI and generate the `ggml` files using the `convert-pt-to-ggml.py` script. Example conversion, assuming the original PyTorch files have been downloaded into `~/.cache/whisper`. Change `~/path/to/repo/whisper/` to the location of your copy of the Whisper source:

   ```
   mkdir models/whisper-medium
   python models/convert-pt-to-ggml.py ~/.cache/whisper/medium.pt ~/path/to/repo/whisper/ ./models/whisper-medium
   mv ./models/whisper-medium/ggml-model.bin models/ggml-medium.bin
   rmdir models/whisper-medium
   ```
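A minimal sketch of downloading one of the quantized variants with the same script, assuming the project has already been built so that `whisper-cli` exists under `./build/bin`:

```
# download a quantized model (smaller file, slightly lower quality)
./download-ggml-model.sh large-v3-turbo-q5_0

# use it like any other ggml model
./build/bin/whisper-cli -m models/ggml-large-v3-turbo-q5_0.bin -f samples/jfk.wav
```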
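For manual downloads, individual files can also be fetched directly from the Hugging Face repository. The `resolve/main` URL pattern below is the standard Hugging Face scheme and is an assumption here; check the repository page if it does not resolve:

```
# direct download of a single pre-converted model (URL pattern assumed)
wget -O models/ggml-base.en.bin \
    https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
```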
## Available models
| Model | Disk | SHA |
| --- | --- | --- |
| tiny | 75 MiB | bd577a113a864445d4c299885e0cb97d4ba92b5f |
| tiny.en | 75 MiB | c78c86eb1a8faa21b369bcd33207cc90d64ae9df |
| base | 142 MiB | 465707469ff3a37a2b9b8d8f89f2f99de7299dac |
| base.en | 142 MiB | 137c40403d78fd54d454da0f9bd998f78703390c |
| small | 466 MiB | 55356645c2b361a969dfd0ef2c5a50d530afd8d5 |
| small.en | 466 MiB | db8a495a91d927739e50b3fc1cc4c6b8f6c2d022 |
| small.en-tdrz | 465 MiB | b6c6e7e89af1a35c08e6de56b66ca6a02a2fdfa1 |
| medium | 1.5 GiB | fd9727b6e1217c2f614f9b698455c4ffd82463b4 |
| medium.en | 1.5 GiB | 8c30f0e44ce9560643ebd10bbe50cd20eafd3723 |
| large-v1 | 2.9 GiB | b1caaf735c4cc1429223d5a74f0f4d0b9b59a299 |
| large-v2 | 2.9 GiB | 0f4c8e34f21cf1a914c59d8b3ce882345ad349d6 |
| large-v2-q5_0 | 1.1 GiB | 00e39f2196344e901b3a2bd5814807a769bd1630 |
| large-v3 | 2.9 GiB | ad82bf6a9043ceed055076d0fd39f5f186ff8062 |
| large-v3-q5_0 | 1.1 GiB | e6e2ed78495d403bef4b7cff42ef4aaadcfea8de |
| large-v3-turbo | 1.5 GiB | 4af2b29d7ec73d781377bfd1758ca957a807e941 |
| large-v3-turbo-q5_0 | 547 MiB | e050f7970618a659205450ad97eb95a18d69c9ee |
Models are multilingual unless the model name includes `.en`. Models ending in `-q5_0` are quantized. Models ending in `-tdrz` support local diarization (marking of speaker turns) using tinydiarize. More information about the models is available upstream (openai/whisper). The list above is a subset of the models supported by the `download-ggml-model.sh` script, but many more are available at https://huggingface.co/ggerganov/whisper.cpp/tree/main and elsewhere.
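The 40-character hashes in the table look like SHA-1 digests (an assumption; the table does not name the algorithm). If so, a downloaded file can be verified like this:

```
# compute the hash of a downloaded model and compare it against the table
sha1sum models/ggml-base.en.bin        # Linux
shasum -a 1 models/ggml-base.en.bin    # macOS
# expected for base.en: 137c40403d78fd54d454da0f9bd998f78703390c
```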
## Model files for testing purposes
The model files prefixed with `for-tests-` are empty (i.e. they do not contain any weights) and are used by the CI for testing purposes. They are included directly in this repository for convenience, and the GitHub Actions CI uses them to run various sanitizer tests.
## Fine-tuned models
There are community efforts to create fine-tuned Whisper models using extra training data. For example, this blog post describes a method for fine-tuning using the Hugging Face (HF) Transformers implementation of Whisper. The produced models are in a slightly different format from the original OpenAI one. To read the HF models, you can use the `convert-h5-to-ggml.py` script like this:
```
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp

# clone HF fine-tuned model (this is just an example)
git clone https://huggingface.co/openai/whisper-medium

# convert the model to ggml
python3 ./whisper.cpp/models/convert-h5-to-ggml.py ./whisper-medium/ ./whisper .
```
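As in the other conversion examples, the script writes `ggml-model.bin` into the output directory (`.` above). Renaming it to match the usual `ggml-<name>.bin` scheme is optional; the target name below is just an example:

```
# rename the converted fine-tuned model (target name is arbitrary)
mv ggml-model.bin ggml-medium-finetuned.bin
```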
## Distilled models
Initial support for https://huggingface.co/distil-whisper is available. Currently, the chunk-based transcription strategy is not implemented, so quality can be sub-optimal when using the distilled models with `whisper.cpp`.
```
# clone OpenAI whisper and whisper.cpp
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp

# get the models
cd whisper.cpp/models
git clone https://huggingface.co/distil-whisper/distil-medium.en
git clone https://huggingface.co/distil-whisper/distil-large-v2

# convert to ggml
python3 ./convert-h5-to-ggml.py ./distil-medium.en/ ../../whisper .
mv ggml-model.bin ggml-medium.en-distil.bin

python3 ./convert-h5-to-ggml.py ./distil-large-v2/ ../../whisper .
mv ggml-model.bin ggml-large-v2-distil.bin
```
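The converted files can then be used like any other `ggml` model. Continuing from the commands above (the current directory is `whisper.cpp/models`) and assuming the project has been built:

```
# return to the repository root and transcribe with a distilled model
cd ..
./build/bin/whisper-cli -m models/ggml-medium.en-distil.bin -f samples/jfk.wav
```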