whisper.cpp/extra
Latest commit: 3a5302108d by Georgi Gerganov, 2023-12-22 17:53:39 +02:00

sync : ggml (ggml_scale, ggml_row_size, etc.) (#1677)

* sync : ggml
* sync : llama.cpp
* talk-llama : fix obsolete param
* ggml-alloc : fix ggml_tallocr_is_own
* talk.wasm : update to new ggml
* ggml : fix type punning in ggml_scale
* ggml : cuda jetson + arm quants warnings
| File | Last commit | Date |
| --- | --- | --- |
| bench-all.sh | bench-all : add distil models | 2023-11-15 20:49:12 +02:00 |
| bench-wts.sh | bench-wts.sh : rename script + add execute permission | 2023-03-06 21:02:24 +02:00 |
| bench.py | bench.py : add different large models (#1655) | 2023-12-19 12:40:14 +02:00 |
| convert-all.sh | whisper : make large version explicit + fix data size units (#1493) | 2023-11-15 19:42:25 +02:00 |
| deploy-wasm.sh | Node.js package (#260) | 2022-12-12 20:17:27 +02:00 |
| quantize-all.sh | whisper : add full CUDA and Metal offloading (#1472) | 2023-11-12 15:31:08 +02:00 |
| sha-all.sh | extra : compute SHA of all models files | 2022-11-02 18:31:55 +02:00 |
| sync-ggml.sh | cuda : fix HIPBLAS build | 2023-11-05 19:41:15 +02:00 |
| sync-llama.sh | sync : ggml (ggml_scale, ggml_row_size, etc.) (#1677) | 2023-12-22 17:53:39 +02:00 |