# whisper.objc
Minimal Obj-C application for automatic offline speech recognition. The inference runs locally, on-device.
https://user-images.githubusercontent.com/1991296/197385372-962a6dea-bca1-4d50-bf96-1d8c27b98c81.mp4
Real-time transcription demo:
https://user-images.githubusercontent.com/1991296/204126266-ce4177c6-6eca-4bd9-bca8-0e46d9da2364.mp4
## Usage
```bash
git clone https://github.com/ggerganov/whisper.cpp
open whisper.cpp/examples/whisper.objc/whisper.objc.xcodeproj/
```
Make sure to build the project in `Release` configuration.
Also, don't forget to add the `-DGGML_USE_ACCELERATE` compiler flag in Build Phases. This can significantly improve the performance of the transcription.
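
For context, the app drives whisper.cpp through its plain C API. The snippet below is a minimal sketch of that flow (not the app's exact code): it loads a ggml model with `whisper_init_from_file`, runs `whisper_full` on 16 kHz mono float PCM, and prints the resulting segments. The `path_model`, `pcm`, and `n_samples` inputs are placeholders supplied by the caller.

```c
// Minimal sketch of the whisper.cpp C API flow used by the Obj-C sample.
// path_model, pcm and n_samples are placeholders provided by the caller.
#include <stdio.h>

#include "whisper.h"

static void transcribe(const char * path_model, const float * pcm, int n_samples) {
    // load the ggml model from disk (e.g. a model bundled with the app)
    struct whisper_context * ctx = whisper_init_from_file(path_model);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model: %s\n", path_model);
        return;
    }

    // default greedy decoding parameters
    struct whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.print_progress = false;

    // run the full encoder + decoder pipeline on the audio buffer
    if (whisper_full(ctx, params, pcm, n_samples) != 0) {
        fprintf(stderr, "failed to run inference\n");
    } else {
        // read back the transcribed text, segment by segment
        const int n_segments = whisper_full_n_segments(ctx);
        for (int i = 0; i < n_segments; ++i) {
            printf("%s", whisper_full_get_segment_text(ctx, i));
        }
        printf("\n");
    }

    whisper_free(ctx);
}
```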