whisper.cpp/ggml

Latest commit: f0a7d65b3d, agray3, 2024-09-24 19:45:08 +03:00
"Update CUDA graph on scale change plus clear nodes/params (llama/9550)"

* Avoid using saved CUDA graph if scale changes and reset nodes/params on update

  Fixes https://github.com/ggerganov/llama.cpp/issues/9451

* clear before resize
cmake/                        whisper : reorganize source code + improve CMake (#2256)                 2024-06-26 19:34:09 +03:00
include/                      examples : adapt to ggml.h changes (ggml/0)                              2024-09-24 19:45:08 +03:00
src/                          Update CUDA graph on scale change plus clear nodes/params (llama/9550)   2024-09-24 19:45:08 +03:00
.gitignore                    whisper : reorganize source code + improve CMake (#2256)                 2024-06-26 19:34:09 +03:00
CMakeLists.txt                cmake : do not hide GGML options + rename option (llama/9465)            2024-09-24 19:45:08 +03:00
ggml_vk_generate_shaders.py   whisper : reorganize source code + improve CMake (#2256)                 2024-06-26 19:34:09 +03:00