whisper.cpp/ggml/src (last commit: 2024-08-08 22:48:46 +03:00)

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ggml-cuda/ | Allow all RDNA2 archs to use sdot4 intrinsic (llama/8629) | 2024-08-08 22:48:46 +03:00 |
| ggml-sycl/ | fix scratch size of softmax (llama/8642) | 2024-08-08 22:48:46 +03:00 |
| kompute-shaders/ | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| vulkan-shaders/ | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| CMakeLists.txt | Re-add erroneously removed -fsycl from GGML_EXTRA_LIBS (llama/8667) | 2024-08-08 22:48:46 +03:00 |
| ggml-alloc.c | CUDA: fix partial offloading for ne0 % 256 != 0 (llama/8572) | 2024-08-08 22:48:46 +03:00 |
| ggml-backend-impl.h | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| ggml-backend.c | CUDA: fix partial offloading for ne0 % 256 != 0 (llama/8572) | 2024-08-08 22:48:46 +03:00 |
| ggml-blas.cpp | ggml : add NVPL BLAS support (ggml/8329) (llama/8425) | 2024-08-08 22:48:46 +03:00 |
| ggml-common.h | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (llama/5780) | 2024-08-08 22:48:46 +03:00 |
| ggml-cuda.cu | CUDA: fix partial offloading for ne0 % 256 != 0 (llama/8572) | 2024-08-08 22:48:46 +03:00 |
| ggml-impl.h | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (llama/5780) | 2024-08-08 22:48:46 +03:00 |
| ggml-kompute.cpp | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| ggml-metal.m | ggml : fix quant dot product with odd number of blocks (llama/8549) | 2024-08-08 22:48:46 +03:00 |
| ggml-metal.metal | ggml : fix quant dot product with odd number of blocks (llama/8549) | 2024-08-08 22:48:46 +03:00 |
| ggml-quants.c | ggml : fix compile error for RISC-V (llama/8623) | 2024-08-08 22:48:46 +03:00 |
| ggml-quants.h | ggml : minor naming changes (llama/8433) | 2024-08-08 22:48:46 +03:00 |
| ggml-rpc.cpp | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| ggml-sycl.cpp | add concat through dim 1/2 (llama/8483) | 2024-08-08 22:48:46 +03:00 |
| ggml-vulkan-shaders.hpp | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| ggml-vulkan.cpp | Vulkan IQ4_NL Support (llama/8613) | 2024-08-08 22:48:46 +03:00 |
| ggml.c | ggml : add and use ggml_cpu_has_llamafile() (llama/8664) | 2024-08-08 22:48:46 +03:00 |
| sgemm.cpp | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| sgemm.h | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |