command.wasm

This is a basic Voice Assistant example that accepts voice commands from the microphone. It runs fully in the browser via WebAssembly.

Online demo: https://ggerganov.github.io/whisper.cpp/command.wasm

Terminal version: examples/command

Build instructions

# build using Emscripten (v3.1.2)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em
emcmake cmake ..
make -j libcommand
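
If the build succeeds, the files needed to deploy the demo end up under the bin/ directory of the build tree. A quick sanity check (assuming the build-em/ layout from the commands above):

# run from inside build-em/ after "make -j libcommand"
ls bin/command.wasm/            # files copied to the web server in the step below
ls bin/libcommand.worker.js     # worker script, also copied below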

The example can then be started by running a local HTTP server:

python3 examples/server.py

Then open the following URL in a browser: http://localhost:8000/command.wasm/
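
For reference, the full sequence from a fresh checkout to a running local demo, assuming examples/server.py serves on port 8000 by default (as the URL above implies):

git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em
emcmake cmake ..
make -j libcommand
cd ..
python3 examples/server.py
# then open http://localhost:8000/command.wasm/ in the browser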

To run the example on a different server, copy the following files to the server's HTTP path:

cp bin/command.wasm/*       /path/to/html/
cp bin/libcommand.worker.js /path/to/html/
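
To quickly verify the copied files, any static HTTP server rooted at that path can be used, e.g. Python's built-in one (a minimal sketch, reusing the /path/to/html/ placeholder from above; note that the repository's examples/server.py may set additional HTTP headers that a plain static server does not, so prefer it where possible):

cd /path/to/html
python3 -m http.server 8000
# then open http://localhost:8000/ in the browser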