whisper.cpp/extra
Commit 93935980f8 by Georgi Gerganov
whisper : Metal and ggml-alloc support (#1270)
* metal : init

* whisper : factor out graph builds

* whisper : allocate encoder and decoder using ggml-alloc

* whisper : ggml-alloc is now supported

* whisper : CoreML support ggml-alloc

* build : fix ggml-alloc

* ios : update submodule

* extra : update sync-ggml.sh script to also sync ggml-alloc

* ci : see if this is causing the crash

* whisper : refactor ggml-alloc init

* whisper.android : try to fix build

* whisper : initial Metal version

* ci : try to debug vmem issue

* metal : decoder works on GPU!

* metal : add multi-decoder support

* ggml : fix ggml_nbytes (probably temp solution)

* metal : run "cross" step on the GPU

* whisper : remove ggml_repeat in the encoder

* whisper : offload the Encoder to Metal

* ggml : use simpler ggml_bytes() implementation

* ggml-alloc : try to make CI happy by reducing vram to 128GB

* whisper : add whisper_allocr to wrap ggml_allocr

* whisper : factor out alloc init in a function

* cmake : update to support Metal build

* whisper : add <functional> header

* objc : fix build (no Metal yet)

* ios : add Metal support

* swiftui : fix build

* metal : speed-up KQ multiplication

* metal : sync latest llama.cpp kernels

* readme : add Metal info

* ios : update submodule

* coreml : add code to toggle Core ML config (CPU, ANE, GPU)

* bench : fix timings by running a pre-heat

* bench : start benching the decoder

* whisper : add ggml_mul_mat_pad

* bench : fix uninitialized vars

* whisper : add comment for disabling mul-mat padding

* whisper : add description of ggml_mul_mat_pad

* whisper : clean-up ggml_mul_mat_pad

* metal : remove the "concurrent" flag

* bench : variable n_past

* ios : update SPM package
2023-09-15 12:18:18 +03:00
File             Last commit                                                                  Date
bench-all.sh     whisper : Metal and ggml-alloc support (#1270)                               2023-09-15 12:18:18 +03:00
bench-wts.sh     bench-wts.sh : rename script + add execute permission                        2023-03-06 21:02:24 +02:00
convert-all.sh   models : add the new "large" model release by OpenAI                         2022-12-06 18:48:57 +02:00
deploy-wasm.sh   Node.js package (#260)                                                       2022-12-12 20:17:27 +02:00
quantize-all.sh  extra : update 'quantize-all.sh' to quantize all downloaded models (#1054)   2023-06-28 22:07:02 +03:00
sha-all.sh       extra : compute SHA of all models files                                      2022-11-02 18:31:55 +02:00
sync-ggml.sh     whisper : Metal and ggml-alloc support (#1270)                               2023-09-15 12:18:18 +03:00
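This commit updates sync-ggml.sh to also sync ggml-alloc. The usual shape of such a script is to copy a fixed list of source files from a sibling ggml checkout into the whisper.cpp tree; the sketch below demonstrates that pattern against temporary directories, since the real script's source paths and exact file list are not shown here and the `ggml.*`/`ggml-alloc.*` names are assumed from the commit titles.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo stand-ins for the real trees: $src plays the sibling ggml checkout,
# $dst plays the whisper.cpp root. The actual script would use real paths.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/ggml.c" "$src/ggml.h" "$src/ggml-alloc.c" "$src/ggml-alloc.h"

# Sync pattern: copy each tracked file from the ggml tree into whisper.cpp.
# ggml-alloc.c / ggml-alloc.h are the additions this commit describes.
for f in ggml.c ggml.h ggml-alloc.c ggml-alloc.h; do
    cp "$src/$f" "$dst/$f"
done

ls "$dst"
```

Keeping the file list in one place like this is what makes "update sync-ggml.sh to also sync ggml-alloc" a one-line change.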