# whisper.wasm

Inference of OpenAI's Whisper ASR model inside the browser

This example uses a WebAssembly (WASM) port of the whisper.cpp transformer implementation to run inference inside a web page. The audio data does not leave your computer - it is processed locally on your machine. Performance is not great, but you should be able to achieve 2x or 3x real-time for the tiny and base models on a modern CPU and browser (i.e. transcribe 60 seconds of audio in roughly 20-30 seconds).

This WASM port uses WASM SIMD 128-bit intrinsics, so make sure that your browser supports them.
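
If you need to check for SIMD support programmatically, a minimal sketch is shown below. It uses the well-known probe from the wasm-feature-detect project: the engine is asked to validate a tiny module containing SIMD opcodes, which only succeeds when WASM SIMD is available.

```js
// Probe for WASM SIMD support by validating a tiny module that uses
// v128 instructions (same byte sequence as the wasm-feature-detect library).
function hasWasmSimd() {
    return WebAssembly.validate(new Uint8Array([
        0, 97, 115, 109, 1, 0, 0, 0, // "\0asm" magic + version 1
        1, 5, 1, 96, 0, 1, 123,      // type section: () -> v128
        3, 2, 1, 0,                  // function section: one function of type 0
        10, 10, 1, 8, 0,             // code section: one body, no locals
        65, 0,                       // i32.const 0
        253, 15,                     // i8x16.splat   (SIMD opcode)
        253, 98,                     // i8x16.popcnt  (SIMD opcode)
        11,                          // end
    ]));
}

if (!hasWasmSimd()) {
    console.warn('This browser does not support WASM SIMD - whisper.wasm will not run.');
}
```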

The example is capable of running all models up to and including small. Beyond that, the memory requirements and performance are unsatisfactory. The implementation currently supports only the greedy sampling strategy. Both transcription and translation are supported.

Since the model data is quite big (74 MB for the tiny model), you need to manually load the model into the web page.
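
To illustrate what loading the model involves, here is a sketch of fetching a ggml model and handing it to the WASM module. It assumes the build exports Emscripten's FS_createDataFile helper and that the module exposes an init(path) binding; the model URL and file name are placeholders, and the bundled index-tmpl.html contains the actual page logic.

```js
// Illustrative sketch: download a ggml model and store it in the module's
// in-memory (MEMFS) filesystem, then create a whisper context from it.
// FS_createDataFile and init() are assumed to be exported by the build.
async function loadModel(url) {
    const resp = await fetch(url);
    const buf  = new Uint8Array(await resp.arrayBuffer());

    // Write the model bytes to '/whisper.bin' inside the WASM filesystem.
    Module.FS_createDataFile('/', 'whisper.bin', buf, true, true);

    // Initialize a whisper context from the stored file (non-zero on success).
    const ctx = Module.init('whisper.bin');
    if (!ctx) {
        throw new Error('failed to initialize whisper context');
    }
    return ctx;
}
```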

The example supports both loading audio from a file and recording audio from the microphone. The audio length is limited to 120 seconds.
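
For file input, the page has to convert whatever the user provides into the 16 kHz mono float samples that whisper.cpp expects. A minimal Web Audio sketch is below; the 120-second cap mirrors the limit mentioned above, and the full_default call at the end (context handle, samples, language, thread count, translate flag) is an assumption based on the bindings in emscripten.cpp.

```js
// Decode an audio File/Blob into 16 kHz mono Float32 samples, capped at 120 s,
// then run inference on it.
const kSampleRate = 16000;
const kMaxSeconds = 120;

async function transcribeFile(ctx, file) {
    // decodeAudioData resamples to the AudioContext's sample rate (16 kHz here).
    const audioCtx = new AudioContext({ sampleRate: kSampleRate });
    const decoded  = await audioCtx.decodeAudioData(await file.arrayBuffer());

    // Take the first channel and truncate to the 120-second limit.
    let samples = decoded.getChannelData(0);
    samples = samples.subarray(0, Math.min(samples.length, kSampleRate * kMaxSeconds));

    // (context, samples, language, n_threads, translate) - the signature is an
    // assumption; see emscripten.cpp / index-tmpl.html for the actual bindings.
    return Module.full_default(ctx, samples, 'en', 8, false);
}
```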

## Live demo

Link: https://whisper.ggerganov.com

## Build instructions

```bash
# build using Emscripten
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em
emcmake cmake ..
make -j
```

The example can then be started by running a local HTTP server:

```bash
python3 examples/server.py
```

Then open the following URL in your browser: http://localhost:8000/whisper.wasm

To run the example on a different server, copy the following files to the server's HTTP path:

```bash
# copy the produced page to your HTTP path
cp bin/whisper.wasm/*    /path/to/html/
cp bin/libmain.worker.js /path/to/html/
```
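
One deployment detail: the presence of libmain.worker.js indicates a multi-threaded (pthreads) build, and threaded WASM relies on SharedArrayBuffer, which browsers only enable on cross-origin isolated pages. Your server therefore has to send COOP/COEP headers, which is presumably why the repository ships examples/server.py rather than suggesting a plain python3 -m http.server. A minimal Node sketch (the port and file-type table are illustrative):

```js
// Minimal static server (Node 18+) that serves the copied files with the
// cross-origin isolation headers required for SharedArrayBuffer.
const http = require('node:http');
const fs   = require('node:fs');
const path = require('node:path');

const root  = process.argv[2] || '.';
const types = { '.html': 'text/html', '.js': 'text/javascript', '.wasm': 'application/wasm' };

http.createServer((req, res) => {
    const file = path.join(root, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, (err, data) => {
        if (err) { res.writeHead(404); res.end('not found'); return; }
        res.writeHead(200, {
            'Content-Type': types[path.extname(file)] || 'application/octet-stream',
            // Required for cross-origin isolation (SharedArrayBuffer):
            'Cross-Origin-Opener-Policy':   'same-origin',
            'Cross-Origin-Embedder-Policy': 'require-corp',
        });
        res.end(data);
    });
}).listen(8000, () => console.log('serving on http://localhost:8000'));
```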