Mirror of https://github.com/ggerganov/whisper.cpp.git, synced 2025-02-15 17:59:14 +01:00
whisper.cpp/ggml/src (last updated 2024-11-15 15:21:04 +02:00)
Directories:
  ggml-amx/            ggml : add AMX backend (llama/8998)  2024-11-01 10:19:05 +02:00
  ggml-cann/           cann: fix crash when llama-bench is running on multiple cann devices (llama/9627)  2024-10-03 12:22:17 +03:00
  ggml-cuda/           CUDA: fix MMQ for non-contiguous src0, add tests (llama/10021)  2024-11-01 10:19:05 +02:00
  ggml-sycl/           fix mul_mat_vec_q and *_vec_q error (llama/9939)  2024-11-01 10:19:05 +02:00
  kompute-shaders/     whisper : reorganize source code + improve CMake ()  2024-06-26 19:34:09 +03:00
  vulkan-shaders/      ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (llama/9763)  2024-11-15 15:21:04 +02:00

Files:
  CMakeLists.txt       cmake : make it possible linking ggml as external lib (ggml/1003)  2024-11-15 15:21:04 +02:00
  ggml-aarch64.c       ggml : add run-time detection of neon, i8mm and sve (llama/9331)  2024-10-03 12:22:17 +03:00
  ggml-aarch64.h
  ggml-alloc.c         ggml : move more prints to the ggml log system (llama/9839)  2024-11-01 10:19:05 +02:00
  ggml-amx.cpp         llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-backend-impl.h  llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-backend.cpp     llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-blas.cpp        llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-cann.cpp        llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-common.h
  ggml-cpu-impl.h      ggml : add ggml-cpu-impl.h (skip) ()  2024-09-24 19:45:08 +03:00
  ggml-cuda.cu         llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-impl.h          fix: use vm_allocate to allocate CPU backend buffer on macOS (llama/9875)  2024-11-01 10:19:05 +02:00
  ggml-kompute.cpp     llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-metal.m         llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-metal.metal     metal : support permuted matrix multiplications (llama/10033)  2024-11-01 10:19:05 +02:00
  ggml-quants.c        ggml : add run-time detection of neon, i8mm and sve (llama/9331)  2024-10-03 12:22:17 +03:00
  ggml-quants.h        ggml : add run-time detection of neon, i8mm and sve (llama/9331)  2024-10-03 12:22:17 +03:00
  ggml-rpc.cpp         llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-sycl.cpp        llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml-vulkan.cpp      llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  ggml.c               llama : refactor model loader with backend registry (llama/10026)  2024-11-15 15:21:04 +02:00
  sgemm.cpp
  sgemm.h
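
The CMakeLists.txt entry above notes that ggml can now be linked as an external library (ggml/1003). A minimal consumer-side sketch under assumed conditions follows: it assumes ggml is vendored as a subdirectory of the consuming project and that the build exports a target named `ggml`; the project name `my_app` and source file `main.c` are placeholders, not taken from this listing.

```cmake
# Hypothetical consumer CMakeLists.txt (sketch, not from this repo).
cmake_minimum_required(VERSION 3.14)
project(my_app C CXX)

# Assumption: ggml is checked out at ./ggml and builds a `ggml` target.
add_subdirectory(ggml)

add_executable(my_app main.c)
target_link_libraries(my_app PRIVATE ggml)
```

With this layout, the application picks up ggml's headers and link flags transitively from the `ggml` target rather than hard-coding include and library paths.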