| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ggml-amx/ | ggml : adapt AMX to tensor->grad removal (llama/0) | 2024-11-20 21:00:08 +02:00 |
| ggml-blas/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-cann/ | CANN: Improve the Inferencing Performance for Ascend NPU Device (llama/10454) | 2024-12-08 20:14:35 +02:00 |
| ggml-cpu/ | ggml-cpu: cmake add arm64 cpu feature check for macos (llama/10487) | 2024-12-08 20:14:35 +02:00 |
| ggml-cuda/ | cmake : enable warnings in llama (llama/10474) | 2024-12-08 20:14:35 +02:00 |
| ggml-hip/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-kompute/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-metal/ | metal : enable mat-vec kernels for bs <= 4 (llama/10491) | 2024-12-08 20:14:35 +02:00 |
| ggml-musa/ | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516) | 2024-12-08 20:14:35 +02:00 |
| ggml-rpc/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-sycl/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-vulkan/ | vulkan: Handle GPUs with less shared memory (llama/10468) | 2024-12-08 20:14:35 +02:00 |
| CMakeLists.txt | cmake : enable warnings in llama (llama/10474) | 2024-12-08 20:14:35 +02:00 |
| ggml-aarch64.c | ggml : optimize Q4_0 into Q4_0_X_Y repack (llama/10324) | 2024-11-20 21:00:08 +02:00 |
| ggml-aarch64.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-alloc.c | ggml: new optimization interface (ggml/988) | 2024-11-20 21:00:08 +02:00 |
| ggml-backend-impl.h | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-backend-reg.cpp | llama : accept a list of devices to use to offload a model (llama/10497) | 2024-12-08 20:14:35 +02:00 |
| ggml-backend.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (llama/8151) | 2024-09-24 19:45:08 +03:00 |
| ggml-impl.h | Do not include arm_neon.h when compiling CUDA code (ggml/1028) | 2024-12-08 20:14:35 +02:00 |
| ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00 |
| ggml-quants.c | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-quants.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-threading.cpp | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-threading.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml.c | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
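Many of the entries above were last touched by "ggml : add support for dynamic loading of backends (llama/10469)", which lets ggml discover backend shared libraries at runtime instead of linking them statically. As a minimal sketch of that registry API, assuming ggml was built with dynamic backend support (the `GGML_BACKEND_DL` CMake option) and the program links against ggml-backend:

```c
// Sketch: load ggml backends at runtime and list what the registry found.
// Assumes the backend shared libraries (e.g. libggml-cuda.so,
// libggml-metal.dylib) sit in the default search location next to the
// executable; exact discovery behavior depends on the build.
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // Load all backend libraries that can be found; a single library can
    // also be loaded explicitly with ggml_backend_load("path/to/lib").
    ggml_backend_load_all();

    // Enumerate the backends that registered themselves.
    for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
        ggml_backend_reg_t reg = ggml_backend_reg_get(i);
        printf("backend %zu: %s\n", i, ggml_backend_reg_name(reg));
    }
    return 0;
}
```

This is why the same commit appears against so many rows: each backend directory (ggml-blas/, ggml-hip/, ggml-rpc/, ggml-sycl/, ...) gained the entry point needed to be built and loaded as a standalone library.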