mirror of https://github.com/ggerganov/whisper.cpp.git
synced 2024-12-27 00:59:01 +01:00

Commit 63b6786767 — "Minor"
Parent: f7ab81fe51
@@ -12,7 +12,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper)
 - Zero memory allocations at runtime
 - Runs on the CPU
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
-- Supported platforms: Linux, Mac OS (Intel and Arm), Raspberry Pi, Android
+- Supported platforms: Linux, Mac OS (Intel and Arm), Windows (MinGW), Raspberry Pi, Android

 ## Usage
@@ -248,6 +248,8 @@ The original models are converted to a custom binary format. This allows to pack
 - vocabulary
 - weights

-You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script.
+You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script or from here:
+
+https://ggml.ggerganov.com

 For more details, see the conversion script [convert-pt-to-ggml.py](convert-pt-to-ggml.py) or the README in [models](models).
@@ -4,7 +4,7 @@ The [original Whisper PyTorch models provided by OpenAI](https://github.com/openai/whisper)
 have been converted to custom `ggml` format in order to be able to load them in C/C++. The conversion has been performed using the
 [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script. You can either obtain the original models and generate the `ggml` files
 yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the
-already converted models.
+already converted models from https://ggml.ggerganov.com

 Sample usage:
@@ -2387,7 +2387,7 @@ int whisper_full(
         // print the prompt
         //printf("\n\n");
         //for (int i = 0; i < prompt.size(); i++) {
-        //    printf("%s: prompt[%d] = %s\n", __func__, i, vocab.id_to_token[prompt[i]].c_str());
+        //    printf("%s: prompt[%d] = %s\n", __func__, i, ctx->vocab.id_to_token[prompt[i]].c_str());
         //}
         //printf("\n\n");