This commit enables the node addon to suppress all output, including the
transcription result, when the no_prints parameter is set to true.
The motivation for this is that the node addon provides a fulfillment
handler/success callback to process the transcription result, so it can
be useful to disable printing of the result to the console and let the
user handle it in their own way.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3176
Quick fix so that Swedish umlauts are not removed.
* Update talk-llama.cpp
Expose model inference settings to the user instead of hard-coding them. The defaults are the same as before.
* Update examples/talk-llama/talk-llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* examples : add --print-confidence option to cli
This commit adds a new command-line option `--print-confidence` to the
whisper-cli. When enabled, this option prints the confidence level of each
token in the transcribed text using ANSI formatting codes.
The confidence levels are represented using different styles:
```console
main: confidence: highlighted (low confidence), underlined (medium), dim (high confidence)
```
Refs: https://github.com/ggml-org/whisper.cpp/issues/3135
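For reference, a possible invocation of the new option looks like the
following (the model and sample paths are placeholders):
```console
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav --print-confidence
```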
This commit adds the `--flash-attn` option to the usage output of the
server example.
The motivation for this change is that while it is possible to set this
option, it is not printed in the usage output.
This commit adds an example that demonstrates how to use a VAD (Voice
Activity Detection) model to segment an audio file into speech segments.
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3144
* vad : add initial Voice Activity Detection (VAD) support
This commit adds support for Voice Activity Detection (VAD). When enabled,
this feature will process the audio input and detect speech segments.
This information is then used to reduce the number of samples that need
to be processed by whisper_full.
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3003
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
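A rough usage sketch for the VAD support; the flag names and the VAD
model filename below are assumptions, so check `whisper-cli --help` for
the options actually added:
```console
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav \
    --vad --vad-model models/ggml-silero-v5.1.2.bin
```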
This commit adds a description of the color scheme used in the CLI
when the --print-colors option is enabled.
The motivation for this is that it is not immediately clear what the
color scheme is when using the CLI with the --print-colors option.
Example output:
```console
$ ./build/bin/whisper-cli -f samples/jfk.wav --print-colors
...
main: color scheme: red (low confidence), yellow (medium), green (high confidence)
[00:00:00.000 --> 00:00:11.000] And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.
```
The description will not be displayed if the `--no-prints` option is
set.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3135
This commit updates the link to Paul Tol's color scheme in the
`examples/common.h` file. The previous link was outdated and
pointed to a non-existent page.
This commit adds HEAPU8 to the list of exported methods.
The motivation for this commit is that currently this causes an error on Windows systems where HEAPU8 is undefined, which results in the following error message in the web console:
main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42
danbev originally fixed this for whisper.wasm, stream.wasm, and command.wasm, but the issue still exists in the other examples, which I patch in this commit.
Resolves: #3059
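For context, the export is controlled by the Emscripten link options
used by the wasm examples; a sketch of the relevant flag (the exact
method list in the examples' CMake files may differ):
```console
$ em++ ... -sEXPORTED_RUNTIME_METHODS=out,err,ccall,cwrap,HEAPU8 ...
```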
This commit updates the documentation for the WASM examples to include a
note about the generation of the `worker.js` file. As of Emscripten
3.1.58 (April 2024), separate worker.js files are no longer generated
and the worker is embedded in the main JS file.
The motivation for this change is to inform users about the new behavior
of Emscripten and why the `worker.js` file may not be present.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3123
* stream.wasm : add HEAPU8 to exported runtime methods
This commit adds HEAPU8 to the list of exported methods for stream.wasm.
The motivation for this is that without it HEAPU8 will be undefined,
and when its 'buffer' attribute is accessed this will cause an error as
reported in the referenced issue.
Note that to test this, make sure that the web browser's cache is
cleared first.
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3123
* command.wasm : add HEAPU8 to exported runtime methods
* ggml : remove MSVC warnings pragmas
This commit removes the MSVC-specific pragmas as these are now handled
in CMakeLists.txt.
* whisper : remove MSVC warning pragmas
This commit removes the MSVC-specific pragmas. These are now handled in
the CMakeLists.txt file.
This changes examples/cli/cli.cpp to be like
examples/common-whisper.cpp. "-of -" can be specified (or this can be
inferred from "-" as the input file) to output to stdout. This is useful
for piping to other applications.
Log fname_out consistently when not stdout
- In a terminal stdout and stderr go to the same place, so remove the
  message before successful output to ease copying
- Don't affect actual error messages
- Move opening the ofstream into the factory, fixing missing
open and/or error messages in output_score/output_wts
- Fix struct naming convention
Closes #3048
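For example, assuming the usual whisper-cli output flags, something
like this should send the text output to stdout for piping:
```console
$ ./build/bin/whisper-cli -f samples/jfk.wav -otxt -of - | wc -w
```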
* docs : Update cli documentation
This updates the documentation of the cli based on the actual output.
In the long term this should ideally be auto-generated to prevent mismatches.
This commit adds the command-line option `--no-gpu` to the server
example's print usage function.
The motivation for this is that this option is available and can be set,
but it is not displayed in the usage message.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3095
This commit adds `HEAPU8` to the list of exported methods.
The motivation for this commit is that currently this is causing an
error on Windows systems where HEAPU8 is undefined, which results in the
following error message in the web console:
```console
main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42
```
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3059
FFmpeg introduced a new channel layout API that uses `AVChannelLayout`
interface in v6.0. It subsequently dropped the old bitmask-based API
in v7.0.
This updates decode_audio() to support the new channel layout API,
so that we can compile `whisper-cli` and `whisper-server` with FFmpeg
v7.0 or later.
Tested on Ubuntu 24.10 with FFmpeg v7.0.2.
Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
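A hedged build sketch; the `WHISPER_FFMPEG` CMake option name is an
assumption based on the existing FFmpeg integration:
```console
$ pkg-config --modversion libavutil libavcodec
$ cmake -B build -DWHISPER_FFMPEG=ON
$ cmake --build build --target whisper-cli whisper-server
```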
This commit updates examples/server.py which is used to serve the wasm
examples locally. The changes include:
- Added a redirect from the root URL to /whisper.cpp.
So now accessing http://localhost:8000/ will redirect to
http://localhost:8000/whisper.cpp/ which matches the url for the app
deployed to github pages.
- Custom handling for coi-serviceworker.js so that it is served, to
  avoid an error in the console. This file is not strictly necessary
  for the local server to work as the headers are already provided, but
  it is nice to not have an error in the console.
- Fixed the shutdown of the server to ensure it exits cleanly
  on Ctrl+C. Previously it would continue to hold onto the port even
  after the process had exited.
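For example, to serve the wasm examples locally after building them:
```console
$ python3 examples/server.py
# http://localhost:8000/ now redirects to http://localhost:8000/whisper.cpp/
```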
* whisper.wasm : fix unknown language issue
This commit addresses an issue with whisper.wasm where the following
error was being displayed when running the application in github pages:
```
whisper_lang_id: unknown language 'д=␙c'
```
This turned out to be a memory corruption issue and further details
can be found in the reference issue below.
Refs: https://github.com/ggerganov/whisper.cpp/issues/2998
* ci : add github pages workflow for wasm examples
This commit adds a github workflow to build and deploy the wasm examples
to github pages. The whisper.wasm example is deployed as the main page.
This workflow is triggered by a push to master and will deploy the
examples to: https://ggerganov.github.io/whisper.cpp/.
This requires that GitHub Pages be configured to use GitHub Actions:
in `Settings` -> `Pages` -> `Build and deployment`, `Source` must be set
to `GitHub Actions`.
One thing to note is that this commit removes the `talk` example as I'm
not sure how this example is built yet.
Refs: https://github.com/ggerganov/whisper.cpp/issues/2784
This commit adds GGML_USE_CPU to the built target library to enable the
CPU backend.
The motivation for this is that without this compile definition the CPU
backend is not enabled, and the app will crash when trying to use it.
* whisper.android.java : update build with ggml source changes
This commit updates the whisper.android.java build to include the
new ggml source files and directories. The gradle build configuration is
also updated to include the aliyun maven repository.
* examples : reduce initial memory to 512MB
This commit reduces the initial memory size to 512MB. This is done
to avoid WebAssembly memory allocation issues on some platforms. It also
adds a flag to allow the memory to grow dynamically (up to the maximum).
The motivation for this change is that currently the initial memory is
set to 2GB which might be too large for some platforms. This will lead to
an error being thrown from the JavaScript code generated by Emscripten
when trying to allocate memory. More details can be found in the
referenced issue below.
* examples : set MAXIMUM_MEMORY instead of TOTAL_MEMORY
This commit sets MAXIMUM_MEMORY instead of TOTAL_MEMORY in the
whisper.wasm example.
The motivation for this is that TOTAL_MEMORY and INITIAL_MEMORY are
actually the same thing. Instead we want to set MAXIMUM_MEMORY to
2GB.
Refs: https://github.com/ggerganov/whisper.cpp/issues/2920
Refs: https://emscripten.org/docs/tools_reference/settings_reference.html#initial-memory
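For reference, these Emscripten settings correspond to link flags along
the lines of the following (the examples set them via CMake, and the
values here simply follow the description above):
```console
$ em++ ... -sINITIAL_MEMORY=512MB -sMAXIMUM_MEMORY=2GB -sALLOW_MEMORY_GROWTH=1 ...
```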
This commit fixes the nthread parsing in the whisper.wasm example when
using the `Threads` slider to change the number of threads to be used.
Currently this results in the following error:
```console
main.js:5597 Uncaught TypeError: Cannot convert "5" to int
at checkAssertions (main.js:5597:21)
at Object.toWireType (main.js:5611:15)
at Object.full_default (eval at new_ (main.js:5292:27), <anonymous>:10:26)
at whisper.wasm/:649:42
```