mirror of https://github.com/ggerganov/whisper.cpp.git, synced 2025-01-03 20:48:59 +01:00
talk-llama
Talk with an LLaMA AI in your terminal
Building
The talk-llama tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:
# Install SDL2 on Linux
sudo apt-get install libsdl2-dev
# Install SDL2 on macOS
brew install sdl2
# Build the "talk-llama" executable
make talk-llama
# Run it
./talk-llama -mw ./models/ggml-small.en.bin -ml ../llama.cpp/models/13B/ggml-model-q4_0.bin -p "Georgi" -t 8
- The -mw argument specifies the Whisper model that you would like to use. base or small is recommended for a real-time experience
- The -ml argument specifies the LLaMA model that you would like to use. Read the instructions in https://github.com/ggerganov/llama.cpp for information on how to obtain a ggml-compatible LLaMA model
TTS
For the best experience, this example needs a TTS tool to convert the generated text responses to voice.
You can use any TTS engine that you would like - simply edit the speak.sh script to your needs.
By default, it is configured to use macOS's say, but you can use whatever you wish.
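As a sketch of such a customization (hypothetical, not the script shipped with the repo), a speak.sh could prefer say on macOS, fall back to espeak on Linux, and simply print the text when no TTS engine is installed. It assumes the example invokes the script with a voice id as the first argument and the text as the second:

```shell
#!/bin/bash
# speak.sh - hypothetical cross-platform variant
# $1: voice id (unused here), $2: text to speak

if command -v say >/dev/null 2>&1; then
  # macOS built-in TTS
  say "$2"
elif command -v espeak >/dev/null 2>&1; then
  # common Linux TTS engine (install with: sudo apt-get install espeak)
  espeak "$2"
else
  # no TTS engine found - just print the text
  echo "$2"
fi
```

Any engine that accepts text on the command line can be dropped in the same way; the example only cares that the script exits after the audio has been produced.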