whisper.cpp/ggml/src
Latest commit: 2e2f0f954b by Akarshan Biswas, 2025-03-31 14:56:53 +03:00

SYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)

* remove trailing whitespace
* Fix L2 norm from rebase
* remove try catch block from element_wise.cpp
* remove comment from common.hp
* ggml-sycl.cpp: Add try catch sycl::exception block in compute_forward
* norm.cpp: remove try catch exception block
Directories:

ggml-amx      ggml : adapt AMX to tensor->grad removal (llama/0)  2024-11-20 21:00:08 +02:00
ggml-blas     ggml : add support for dynamic loading of backends (llama/10469)  2024-12-08 20:14:35 +02:00
ggml-cann     MUL_MAT optimization (llama/12382)  2025-03-27 11:06:03 +02:00
ggml-cpu      cpu : rm unused variable (ggml/1166)  2025-03-31 14:56:53 +03:00
ggml-cuda     musa: fix all warnings, re-enable -DLLAMA_FATAL_WARNINGS=ON in ci and update doc (llama/12611)  2025-03-31 14:56:53 +03:00
ggml-hip      HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032)  2025-03-08 15:13:01 +02:00
ggml-kompute  llama : add Qwen2VL support + multimodal RoPE (llama/10361)  2024-12-18 12:52:16 +02:00
ggml-metal    metal : use constexpr in FA kernels + fix typedef (llama/12659)  2025-03-31 14:56:53 +03:00
ggml-musa     cuda : enable CUDA Graph on CUDA Toolkit < 12.x (llama/12394)  2025-03-27 11:06:03 +02:00
ggml-opencl   opencl: add multi and vision rope, gelu_quick and im2col (llama/12600)  2025-03-28 21:47:42 +02:00
ggml-rpc      rpc : send hash when tensor data is above some fixed threshold (llama/12496)  2025-03-28 21:47:42 +02:00
ggml-sycl     SYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)  2025-03-31 14:56:53 +03:00
ggml-vulkan   cmake: improve Vulkan cooperative matrix support checks (#2966)  2025-03-31 13:44:36 +03:00

Files:

CMakeLists.txt        cmake : fix ccache conflict (llama/12522)  2025-03-31 14:56:53 +03:00
ggml-alloc.c          ggml : upgrade init_tensor API to return a ggml_status (llama/11854)  2025-03-08 15:13:01 +02:00
ggml-backend-impl.h   ggml : upgrade init_tensor API to return a ggml_status (llama/11854)  2025-03-08 15:13:01 +02:00
ggml-backend-reg.cpp  ggml-backend : fix backend search path (llama/12330)  2025-03-27 11:06:03 +02:00
ggml-backend.cpp      ggml : portability fixes for VS 2017 (llama/12150)  2025-03-08 15:13:01 +02:00
ggml-common.h         musa: fix all warnings, re-enable -DLLAMA_FATAL_WARNINGS=ON in ci and update doc (llama/12611)  2025-03-31 14:56:53 +03:00
ggml-impl.h           ggml : sync/merge cmake,riscv,powerpc, add common.cmake (ggml/0)  2025-03-27 11:06:03 +02:00
ggml-opt.cpp          ggml-opt: fix data corruption (ggml/1022)  2024-12-08 20:14:35 +02:00
ggml-quants.c         ggml : portability fixes for VS 2017 (llama/12150)  2025-03-08 15:13:01 +02:00
ggml-quants.h         ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-threading.cpp    ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-threading.h      remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)  2024-12-18 12:52:16 +02:00
ggml.c                metal : improve FA + improve MoE (llama/12612)  2025-03-28 21:47:42 +02:00
gguf.cpp              cmake : add sanitizer flags for llama.cpp (llama/11279)  2025-02-03 22:00:57 +02:00