# whisper.cpp/examples/stream

This is a naive example of performing real-time inference on audio from your microphone. The stream tool samples the audio every half a second and runs the transcription continuously. More info is available in issue #10.

```bash
./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```

https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4
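
To make the flow concrete, here is a minimal sketch of the capture-and-transcribe loop the tool implements: grab the most recent window of audio, run `whisper_full()` on it, print the resulting segments, and repeat every `--step` milliseconds. It assumes the C API from `whisper.h` as of this version (`whisper_init()`, `whisper_full()`, `whisper_full_get_segment_text()`; newer versions may name the init function differently), and the `get_audio()` helper is a hypothetical stand-in for the SDL2 capture code in `stream.cpp`:

```cpp
#include <cstdio>
#include <vector>

#include "whisper.h"

// hypothetical stand-in for microphone capture: the real example uses an
// SDL2-based audio_async helper in stream.cpp; here we just produce `ms`
// milliseconds of silence at the 16 kHz sample rate the model expects
static void get_audio(int ms, std::vector<float> & pcm) {
    pcm.assign((WHISPER_SAMPLE_RATE * ms) / 1000, 0.0f);
}

int main() {
    struct whisper_context * ctx = whisper_init("models/ggml-base.en.bin");
    if (!ctx) {
        return 1;
    }

    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.print_progress = false;
    params.no_context     = true; // each chunk is transcribed independently
    params.single_segment = true; // emit one segment per chunk for real-time output

    std::vector<float> pcm;
    for (int iter = 0; iter < 10; ++iter) { // the real tool loops until interrupted
        get_audio(5000, pcm); // the last --length milliseconds of audio

        if (whisper_full(ctx, params, pcm.data(), (int) pcm.size()) != 0) {
            fprintf(stderr, "whisper_full() failed\n");
            break;
        }

        for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
            printf("%s", whisper_full_get_segment_text(ctx, i));
        }
        printf("\n");
        // the real tool sleeps for --step milliseconds before sampling again
    }

    whisper_free(ctx);
    return 0;
}
```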

## Sliding window mode with VAD

Setting the `--step` argument to `0` enables the sliding window mode:

```bash
./stream -m ./models/ggml-small.en.bin -t 6 --step 0 --length 30000 -vth 0.6
```

In this mode, the tool transcribes only after some speech activity is detected. A very basic VAD detector is used, but in theory a more sophisticated approach could be added. The `-vth` argument determines the VAD threshold: higher values will make it detect silence more often. It's best to tune it to the specific use case, but a value around `0.6` should be OK in general. When silence is detected, the tool transcribes the last `--length` milliseconds of audio and outputs a transcription block that is suitable for parsing.
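
The VAD itself reduces to a simple energy comparison. The sketch below illustrates the idea; the function name and exact formula are illustrative assumptions rather than the code in `stream.cpp`. It flags silence when the average energy of the most recent window is low relative to the whole buffer, which is why a higher threshold detects silence more readily:

```cpp
#include <cmath>
#include <vector>

// returns true if the last `last_ms` of audio in `pcm` look silent relative
// to the buffer as a whole (illustrative sketch, not stream.cpp's actual VAD)
bool is_silence(const std::vector<float> & pcm, int sample_rate, int last_ms, float vad_thold) {
    const int n_last = (sample_rate * last_ms) / 1000;
    if (n_last <= 0 || (int) pcm.size() < n_last) {
        return false;
    }

    double energy_all  = 0.0;
    double energy_last = 0.0;

    for (size_t i = 0; i < pcm.size(); ++i) {
        energy_all += std::fabs(pcm[i]);
        if (i >= pcm.size() - n_last) {
            energy_last += std::fabs(pcm[i]);
        }
    }

    energy_all  /= pcm.size();
    energy_last /= n_last;

    // with a higher vad_thold this condition is satisfied more easily,
    // i.e. silence is detected more often
    return energy_last <= vad_thold * energy_all;
}
```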

## Building

The stream tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:

```bash
# Install SDL2 on Linux
sudo apt-get install libsdl2-dev

# Install SDL2 on macOS
brew install sdl2

make stream
```
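
The project can also be built with CMake. The examples that need SDL2 are gated behind a CMake option whose name has varied across versions, so treat the flag below as an assumption and verify it against the `CMakeLists.txt` in your tree:

```bash
# assumption: older trees use WHISPER_SUPPORT_SDL2, newer ones WHISPER_SDL2
cmake -B build -DWHISPER_SUPPORT_SDL2=ON
cmake --build build --target stream
```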

## Web version

This tool can also run in the browser: examples/stream.wasm