* ggml : CLBlast support as in llama.cpp
Building with CLBlast speeds up whisper.cpp ~2x on low-end / older AMD APUs (CPUs with an integrated GPU), such as the A9.
Usage:
WHISPER_CLBLAST=1 make
* CMake/Makefile : CLBlast support as in llama.cpp
Building with CLBlast speeds up whisper.cpp ~2x on low-end / older AMD APUs (CPUs with an integrated GPU), such as the A9.
Usage:
```
# Makefile:
cd whisper.cpp
WHISPER_CLBLAST=1 make

# CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ..
make
```
* Update README.md
Added OpenCL Build Instructions
* Instruction: Partial OpenCL GPU support via CLBlast
Added build instructions and examples for Make and CMake to support OpenCL-enabled GPUs.
* add detect-language flag
* renaming and help
* no idea why that last one didn't commit
* run language detection if -dl is set (see the sketch after this list)
* help message fix
* various fixes
* fix quitting
* fix language being English on print
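A hedged sketch of the -dl flow described above, using the public whisper.h language APIs (whisper_pcm_to_mel, whisper_lang_auto_detect, whisper_lang_str, whisper_lang_max_id); the surrounding wiring and parameter names are illustrative, not the exact code:
```
// illustrative: when -dl is set, run only language detection, print the
// detected language, and quit instead of transcribing
if (params.detect_language) {
    std::vector<float> probs(whisper_lang_max_id() + 1, 0.0f);

    // compute the mel spectrogram, then let the model score each language
    whisper_pcm_to_mel(ctx, pcmf32.data(), (int) pcmf32.size(), params.n_threads);
    const int lang_id = whisper_lang_auto_detect(ctx, 0, params.n_threads, probs.data());

    printf("detected language: %s (p = %.2f)\n", whisper_lang_str(lang_id), probs[lang_id]);

    whisper_free(ctx);
    return 0;
}
```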
* feat: adding session support
* readme: adding --session info in examples/talk-llama
* llama: adding session fixes
* readme: updating session doc
* talk-llama: set need_to_save_session to true so that the session is saved in the subsequent interaction
* talk-llama: adding the missing function that updates session_tokens (see the sketch after this list)
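A hedged sketch of the session flow these commits describe; llama_load_session_file and llama_save_session_file are real llama.cpp APIs, while the surrounding variables (path_session, n_ctx, need_to_save_session) are illustrative:
```
std::vector<llama_token> session_tokens;

// on startup: try to resume from a previously saved session file
if (!path_session.empty()) {
    session_tokens.resize(n_ctx);
    size_t n_token_count_out = 0;
    if (llama_load_session_file(ctx_llama, path_session.c_str(),
            session_tokens.data(), session_tokens.capacity(), &n_token_count_out)) {
        session_tokens.resize(n_token_count_out); // resume where the last run left off
    } else {
        session_tokens.clear();                   // no usable session - start fresh
    }
}

// after an interaction, once need_to_save_session has been set to true:
if (need_to_save_session && !path_session.empty()) {
    need_to_save_session = false;
    llama_save_session_file(ctx_llama, path_session.c_str(),
            session_tokens.data(), session_tokens.size());
}
```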
Whenever an `offset_ms` is provided, the value of `seek_end` is
calculated incorrectly. This causes Whisper to keep transcribing
after the end of the file.
The current behavior looks like this:
```
[00:34:40.000 --> 00:34:47.000] This is an example audio file.
[00:34:47.000 --> 00:34:49.000] The text has been redacted
[00:34:49.000 --> 00:34:51.000] This is the end of the audio.
[00:34:51.000 --> 00:34:52.000] ***
[00:34:52.000 --> 00:34:53.000] ***
[00:34:53.000 --> 00:34:54.000] ***
[00:34:55.000 --> 00:34:56.000] ***
...
```
The expected behavior is:
```
[00:34:40.000 --> 00:34:47.000] This is an example audio file.
[00:34:47.000 --> 00:34:49.000] The text has been redacted
[00:34:49.000 --> 00:34:51.000] This is the end of the audio.
- end of program -
```
This commit changes the calculation of the `seek_end` variable to
only add `seek_start` if a custom `duration_ms` is provided.
Otherwise, it defaults to the end of the file.
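A minimal sketch of the corrected calculation, with variable names taken from the description above (the exact code in whisper.cpp may differ):
```
const int seek_start = params.offset_ms/10;

// only add seek_start when a custom duration_ms is given;
// otherwise default to the end of the file
const int seek_end = params.duration_ms == 0 ?
        whisper_n_len(ctx) :                // end of the (mel-converted) file
        seek_start + params.duration_ms/10; // custom duration, offset from seek_start
```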
Signed-off-by: Thijs Raymakers <thijs@raymakers.nl>
If the Core ML model cannot be loaded, continue without Core ML instead of
returning. This allows a single build to transcribe using Core ML models
where available, and regular models when not.
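A hedged sketch of the new behavior; whisper_coreml_init is the loader used by the Core ML build, but the field names and surrounding code here are illustrative:
```
#ifdef WHISPER_USE_COREML
    ctx->ctx_coreml = whisper_coreml_init(path_coreml.c_str());
    if (!ctx->ctx_coreml) {
        // previously this was a hard failure; now we just warn and
        // fall back to the regular (non-Core ML) encoder
        fprintf(stderr, "%s: failed to load Core ML model from '%s'\n",
                __func__, path_coreml.c_str());
    }
#endif
```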
Updated the escape_double_quotes() function so that it now escapes both double quotes and backslashes in the input string (a sketch of the updated helper follows below).
Changes made:
- Renamed the function to escape_quotes_and_backslashes
- Modified the condition in the first loop to increment 'escaped_length' for both double quotes and backslashes.
- Modified the condition in the second loop to add a backslash before the current character if it is a double quote or a backslash.
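A self-contained sketch following that description (two passes: count, then copy with escapes); the actual implementation in the repo may differ in details:
```
#include <cstdlib>
#include <cstring>

static char * escape_quotes_and_backslashes(const char * str) {
    const size_t len = strlen(str);

    // first pass: count characters that need a leading backslash
    size_t escaped_length = len + 1;
    for (size_t i = 0; i < len; i++) {
        if (str[i] == '"' || str[i] == '\\') {
            escaped_length++;
        }
    }

    char * escaped = (char *) calloc(escaped_length, 1); // zero-filled => NUL-terminated
    if (escaped == NULL) {
        return NULL;
    }

    // second pass: copy, inserting a backslash before each quote or backslash
    size_t pos = 0;
    for (size_t i = 0; i < len; i++) {
        if (str[i] == '"' || str[i] == '\\') {
            escaped[pos++] = '\\';
        }
        escaped[pos++] = str[i];
    }

    return escaped;
}
```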
Resolves: #769
I had disabled the temperature fallbacks because there were many complaints about slow decoding.
The current implementation does not allow batching the decoders when
using the "best of" or "beam size" parameters, so the decoding time is
proportional to the number of decoders, which is obviously not great.
However, now there are even more complaints about wrong decodings and
repetition.
So, making a compromise by re-enabling the fallbacks, but defaulting to
just 2 "best of" / "beam size" decoders. Also, the temperature step is
increased from 0.2 to 0.4, i.e. from a maximum of 5 fallbacks to a maximum
of 2.
Also, the stream example now has fallbacks enabled by default.
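A hedged sketch of the resulting defaults; the field names follow whisper_full_params in whisper.h, but treat the snippet as illustrative rather than the exact diff:
```
whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

wparams.greedy.best_of        = 2;    // down from 5, to keep decoding time in check
wparams.beam_search.beam_size = 2;    // used with WHISPER_SAMPLING_BEAM_SEARCH
wparams.temperature_inc       = 0.4f; // 0.0 -> 0.4 -> 0.8: at most 2 fallbacks, not 5
```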
Closes #471, #477, #508, #612, #719, #731