Mirror of https://github.com/ggerganov/whisper.cpp.git, synced 2024-12-29 10:09:04 +01:00
# talk-llama

Talk with an LLaMA AI in your terminal
## Building

The `talk-llama` tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:

```bash
# Install SDL2 on Linux
sudo apt-get install libsdl2-dev

# Install SDL2 on macOS
brew install sdl2

# Build the "talk-llama" executable
make talk-llama

# Run it
./talk-llama -mw ./models/ggml-small.en.bin -ml ../llama.cpp/models/13B/ggml-model-q4_0.bin -p "Georgi" -t 8
```
- The `-mw` argument specifies the Whisper model that you would like to use. `base` or `small` is recommended for a real-time experience
- The `-ml` argument specifies the LLaMA model that you would like to use. Read the instructions in https://github.com/ggerganov/llama.cpp for information on how to obtain a `ggml`-compatible LLaMA model
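Since `-mw` and `-ml` point at files on disk, a quick pre-flight check can make a mistyped path fail fast with a clear message instead of a runtime error. A minimal sketch (`check_models` is a hypothetical helper, not part of the repo; the paths are just the ones from the example run above):

```shell
#!/bin/sh
# Hypothetical pre-flight check: verify each model file exists before
# launching talk-llama, so a typo in -mw/-ml is caught immediately.
check_models() {
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "missing model: $f" >&2
      return 1
    fi
  done
}

# Paths from the example invocation above; adjust to your setup.
check_models ./models/ggml-small.en.bin \
  ../llama.cpp/models/13B/ggml-model-q4_0.bin \
  || echo "fix the model paths before running ./talk-llama" >&2
```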
## TTS

For the best experience, this example needs a TTS tool to convert the generated text responses to voice. You can use any TTS engine you like - simply edit the `speak.sh` script to your needs. By default, it is configured to use macOS's `say`, but you can use whatever you wish.
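As a starting point for customization, here is a minimal sketch of such a script. It assumes the text to speak arrives as a positional argument (check the bundled `speak.sh` for the exact interface) and falls back from macOS's `say` to `espeak` on Linux; `pick_engine` is a hypothetical helper, not part of the repo:

```shell
#!/bin/sh
# Sketch of a custom speak.sh. pick_engine prints the first available
# TTS command, or "none" if neither is installed.
pick_engine() {
  for cmd in say espeak; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd"
      return 0
    fi
  done
  echo "none"
}

# Assumption: the text to speak is passed as the first argument.
TEXT="$1"

case "$(pick_engine)" in
  say)    say "$TEXT" ;;
  espeak) espeak "$TEXT" ;;
  *)      echo "no TTS engine found, skipping speech" >&2 ;;
esac
```

Swapping in a different engine (e.g. a cloud TTS CLI) only requires adding it to the list in `pick_engine` and a matching `case` branch.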
## Discussion

If you have any feedback, please let us know in the following discussion: https://github.com/ggerganov/whisper.cpp/discussions/672?converting=1