mirror of https://github.com/ggerganov/whisper.cpp.git
synced 2024-12-26 16:48:50 +01:00

Minor

This commit is contained in:
parent f7ab81fe51
commit 63b6786767
@@ -12,7 +12,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Zero memory allocations at runtime
 - Runs on the CPU
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
-- Supported platforms: Linux, Mac OS (Intel and Arm), Raspberry Pi, Android
+- Supported platforms: Linux, Mac OS (Intel and Arm), Windows (MinGW), Raspberry Pi, Android
## Usage
|
||||
|
||||
@@ -34,7 +34,7 @@ For a quick demo, simply run `make base.en`:

 ```java
 $ make base.en
 cc -O3 -std=c11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c ggml.c
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c whisper.cpp
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread main.cpp whisper.o ggml.o -o main
 ./main -h
@@ -248,6 +248,8 @@ The original models are converted to a custom binary format. This allows to pack
 - vocabulary
 - weights

-You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script.
+You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script or from here:
+
+https://ggml.ggerganov.com

 For more details, see the conversion script [convert-pt-to-ggml.py](convert-pt-to-ggml.py) or the README in [models](models).
@@ -4,14 +4,14 @@ The [original Whisper PyTorch models provided by OpenAI](https://github.com/open
 have been converted to custom `ggml` format in order to be able to load them in C/C++. The conversion has been performed using the
 [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script. You can either obtain the original models and generate the `ggml` files
 yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the
-already converted models.
+already converted models from https://ggml.ggerganov.com

 Sample usage:

 ```java
 $ ./download-ggml-model.sh base.en
 Downloading ggml model base.en ...
 models/ggml-base.en.bin 100%[=============================================>] 141.11M 5.41MB/s in 22s
 Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
 You can now use it like this:
@@ -2387,7 +2387,7 @@ int whisper_full(
 // print the prompt
 //printf("\n\n");
 //for (int i = 0; i < prompt.size(); i++) {
-//    printf("%s: prompt[%d] = %s\n", __func__, i, vocab.id_to_token[prompt[i]].c_str());
+//    printf("%s: prompt[%d] = %s\n", __func__, i, ctx->vocab.id_to_token[prompt[i]].c_str());
 //}
 //printf("\n\n");