Commit Graph

3027 Commits

David Zhao
457eadfe6f cuda: refactored ssm_scan and use CUB (llama/13291)
* cuda: refactored ssm_scan to use CUB

* fixed compilation error when not using CUB

* assign L to constant and use size_t instead of int

* deduplicated functions

* change min blocks per mp to 1

* Use cub load and store warp transpose

* suppress clang warning
2025-08-18 20:30:45 +03:00
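
The entry above switches ssm_scan to CUB block load/store with warp transpose. Below is a minimal sketch of that CUB pattern only, not the actual ssm_scan kernel; the kernel name, block/item counts, and the scale operation are made up for illustration, and bounds handling is omitted.

    #include <cub/block/block_load.cuh>
    #include <cub/block/block_store.cuh>

    // Hypothetical illustration of cub::BlockLoad/BlockStore with warp transpose.
    template <int BLOCK_THREADS, int ITEMS_PER_THREAD>
    __global__ void scale_with_cub(const float * __restrict__ src, float * __restrict__ dst, float factor) {
        using BlockLoad  = cub::BlockLoad <float, BLOCK_THREADS, ITEMS_PER_THREAD, cub::BLOCK_LOAD_WARP_TRANSPOSE>;
        using BlockStore = cub::BlockStore<float, BLOCK_THREADS, ITEMS_PER_THREAD, cub::BLOCK_STORE_WARP_TRANSPOSE>;

        __shared__ union {
            typename BlockLoad::TempStorage  load;
            typename BlockStore::TempStorage store;
        } smem;

        // size_t offset, in the spirit of the "use size_t instead of int" bullet
        const size_t block_offset = (size_t) blockIdx.x * BLOCK_THREADS * ITEMS_PER_THREAD;

        float items[ITEMS_PER_THREAD];
        BlockLoad(smem.load).Load(src + block_offset, items);
        __syncthreads();                    // the shared-memory union is reused for the store

        #pragma unroll
        for (int i = 0; i < ITEMS_PER_THREAD; ++i) {
            items[i] *= factor;
        }

        BlockStore(smem.store).Store(dst + block_offset, items);
    }
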
Aman Gupta
93c7a08019 CUDA: add attention sinks for tile and wmma (llama/15178)
* CUDA: add attention sinks for tile and wmma

* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
2025-08-18 20:30:45 +03:00
compilade
62566a5436 gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)
* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4
2025-08-18 20:30:45 +03:00
AN Long
573bf9d128 ggml : fix field name when new ggml_backend (llama/14944) 2025-08-18 20:30:45 +03:00
Johannes Gäßler
2baea5e4b3 CUDA: attention sinks for mma FlashAttention (llama/15157) 2025-08-18 20:30:45 +03:00
lhez
8a36cd924a opencl: support sink in soft_max (attn sinks) (llama/15152) 2025-08-18 20:30:45 +03:00
Jeff Bolz
1984530710 vulkan: support fattn sinks (llama/15126) 2025-08-18 20:30:45 +03:00
Jeff Bolz
414e9074e0 vulkan: Add env var to disable host visible vidmem (llama/15109) 2025-08-18 20:30:45 +03:00
uvos
813ceb2a74 HIP: add cmake option to enable compiler output of kernel resource usage metrics (llama/15103) 2025-08-18 20:30:45 +03:00
Christian Kastner
6d7ffea292 ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)
Any available libraries are found and loaded dynamically at runtime.
2025-08-18 20:30:45 +03:00
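
As a rough illustration of the behavior described above: with GGML_BACKEND_DL=ON, backend libraries are not linked at build time but discovered and opened at runtime. The sketch below shows the general POSIX dlopen/dlsym pattern only; it is not ggml's actual loader, and the entry-point symbol handling is hypothetical.

    #include <dlfcn.h>
    #include <cstdio>

    // Hypothetical sketch of loading a backend library at runtime (POSIX only).
    static void * try_load_backend(const char * path, const char * init_symbol) {
        void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (handle == nullptr) {
            std::fprintf(stderr, "backend %s not available: %s\n", path, dlerror());
            return nullptr;
        }
        void * init_fn = dlsym(handle, init_symbol);
        if (init_fn == nullptr) {       // not a backend library after all
            dlclose(handle);
            return nullptr;
        }
        return init_fn;                 // a real loader would cast and call the init function
    }
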
Johannes Gäßler
5caf8a1ea2 CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
2025-08-18 20:30:45 +03:00
rmatif
b405fd88b3 fix profiling crash (llama/15072) 2025-08-18 20:30:45 +03:00
lhez
d153cfb507 opencl: add swiglu_oai and add_id (llama/15121)
* opencl: add `swiglu-oai`

* opencl: add `add_id`

* opencl: add missing `add_id.cl`
2025-08-18 20:30:45 +03:00
Diego Devesa
6fb55d8f7c ggml : fix fallback to CPU for unsupported ops (llama/15118) 2025-08-18 20:30:45 +03:00
Chenguang Li
e809e81e69 CANN: add support for ACL Graph (llama/15065)
* feat(cann): add optional support for ACL Graph execution

This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:

    -DUSE_CANN_GRAPH=ON

By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.

Key additions:
- CMake option  to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
  is unset or invalid

This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.

Signed-off-by: noemotiovon <757486878@qq.com>

* Fix review comments

Signed-off-by: noemotiovon <757486878@qq.com>

* rename USE_CANN_GRAPH to USE_ACL_GRAPH

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-08-18 20:30:45 +03:00
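
A hedged sketch of the control flow the entry above describes: a compile-time USE_ACL_GRAPH toggle with a runtime fallback when LLAMA_SET_ROWS is unset or invalid. Only the macro and environment-variable names come from the commit message; the function, the check, and the log line are illustrative.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Illustrative only: gate ACL graph execution on the compile-time option and env var.
    static bool acl_graph_mode_enabled() {
    #ifndef USE_ACL_GRAPH
        return false;                                   // disabled at compile time: node-by-node execution
    #else
        const char * v = std::getenv("LLAMA_SET_ROWS");
        if (v == nullptr || std::strcmp(v, "1") != 0) {
            std::fprintf(stderr, "ACL graph mode requested but LLAMA_SET_ROWS is unset/invalid, falling back\n");
            return false;                               // safe fallback, as the commit message describes
        }
        return true;
    #endif
    }
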
Georgi Gerganov
d3aab3efde llama : add gpt-oss (llama/15091)
* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (llama/7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (llama/1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return

* ggml : add fused swiglu_oai op (llama/11)

* ggml : add fused swiglu_oai op

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>

* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf

Co-authored-by: Diego Devesa <slarengh@gmail.com>

change kvalues_mxfp4 table to match e2m1 (llama/6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant

* ggml : add ggml_add_id (llama/13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: slaren <slarengh@gmail.com>
2025-08-18 20:30:45 +03:00
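
One of the bullets in the entry above replaces a powf-based conversion with e8m0 for MXFP4 block scales. The sketch below shows the e8m0 idea in isolation (an 8-bit, exponent-only scale with bias 127, so every representable scale is a power of two); the helper names and the truncating rounding are illustrative, not ggml's exact implementation, and the input is assumed positive and normal.

    #include <cmath>
    #include <cstdint>
    #include <cstring>

    // Hypothetical helpers illustrating e8m0 conversion.
    static uint8_t e8m0_from_float(float scale) {
        uint32_t bits;
        std::memcpy(&bits, &scale, sizeof(bits));
        return (uint8_t) ((bits >> 23) & 0xff);   // reuse the IEEE-754 exponent field (truncates the mantissa)
    }

    static float e8m0_to_float(uint8_t e) {
        return std::ldexp(1.0f, (int) e - 127);   // 2^(e - 127)
    }
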
Romain Biessy
6558022873 sycl: fix mul_mat selection (llama/15092) 2025-08-18 20:30:45 +03:00
Christian Kastner
349b9a2097 cmake: Add GGML_BACKEND_DIR option (llama/15074)
* cmake: Add GGML_BACKEND_DIR option

This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.

* Fix phrasing
2025-08-18 20:30:45 +03:00
Jeff Bolz
00ff38376a vulkan: fix build when using glslang that does not support coopmat2 (llama/15062) 2025-08-18 20:30:45 +03:00
Jeff Bolz
abc971e69a vulkan: Use coopmat2 for conv2d (llama/14982) 2025-08-18 20:30:45 +03:00
lhez
53d8c5179f opencl: fix adreno compiler detection logic (llama/15029) 2025-08-18 20:30:45 +03:00
Johannes Gäßler
d6e7315717 CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (llama/15035) 2025-08-18 20:30:45 +03:00
leejet
a3123e105b cuda: make im2col a little faster (llama/15025) 2025-08-18 20:30:45 +03:00
Georgi Gerganov
d119ecf0c1 cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1

ggml-ci

* cont : fix cont types

ggml-ci

* cont : adopt variable names and comment from the other branch
2025-08-18 20:30:45 +03:00
Jeff Bolz
b374fd6172 vulkan: coopmat2 mul_mat optimizations (llama/14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
  interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used (see the sketch after this entry)
2025-08-18 20:30:45 +03:00
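
A sketch of one reading of the split_k heuristic listed above: 3-way splits when between half and two-thirds of the SMs would otherwise be busy, non-power-of-two splits allowed, and each split's share of K kept a multiple of 256. The inputs and thresholds are assumptions, not the shader-selection code from the commit.

    #include <cstdint>

    // Illustrative only; assumes tiles >= 1 and k >= 1.
    static uint32_t choose_split_k(uint32_t tiles, uint32_t sms, uint32_t k) {
        uint32_t split_k = 1;
        if (2 * tiles > sms && 3 * tiles <= 2 * sms) {
            split_k = 3;                                 // >1/2 and <=2/3 of the SMs busy at split_k == 1
        } else if (tiles < sms) {
            split_k = (sms + tiles - 1) / tiles;         // larger, not necessarily a power of two
        }
        // round the per-split K range up to a multiple of 256, then recompute split_k from it
        const uint32_t k_per_split = (((k + split_k - 1) / split_k + 255) / 256) * 256;
        return (k + k_per_split - 1) / k_per_split;
    }
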
Jeff Bolz
97341224b2 vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015) 2025-08-18 20:30:45 +03:00
Jeff Bolz
46e9e5b9a7 vulkan: optimizations for direct convolution (llama/14933)
* vulkan: optimizations for direct convolution

- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
  the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.

* Three tiles sizes for CONV_2D, and a heuristic to choose

* reallow collectives for pre-Turing

* make SHMEM_PAD a spec constant

* fixes for intel perf - no shmem padding, placeholder shader core count

* shader variants with/without unrolling

* 0cc4m's fixes for AMD perf

Co-authored-by: 0cc4m <picard12@live.de>

---------

Co-authored-by: 0cc4m <picard12@live.de>
2025-08-18 20:30:45 +03:00
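
The entry above lists a "fastdiv opt" among its changes. Below is a minimal host-side sketch of the general fastdiv technique (replace division by a fixed divisor with a precomputed multiply and high-half shift, in the Granlund-Montgomery/Lemire style); it is not the shader code from the commit and assumes a compiler with unsigned __int128.

    #include <cstdint>

    // Precompute ceil(2^64 / d) once, then each division is a multiply plus shift.
    // Correct for 32-bit dividends and divisors d >= 2.
    struct fastdiv_u32 {
        uint64_t magic;
        uint32_t d;
    };

    static fastdiv_u32 fastdiv_init(uint32_t d) {
        return { UINT64_MAX / d + 1, d };              // == ceil(2^64 / d) for d >= 2
    }

    static uint32_t fastdiv(uint32_t n, const fastdiv_u32 & f) {
        return (uint32_t) (((unsigned __int128) f.magic * n) >> 64);
    }
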
Johannes Gäßler
7e7557ac50 CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014) 2025-08-18 20:30:45 +03:00
lhez
ba6a81c9c9 opencl: add f16 for add, sub, mul, div (llama/14984) 2025-08-18 20:30:45 +03:00
Srihari-mcw
1c6cb7df47 ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)
* Initial Q2_K Block Interleaving Implementation

* Addressed review comments and clean up of the code

* Post rebase fixes

* Initial CI/CD fixes

* Update declarations in arch-fallback.h

* Changes for GEMV Q2_K in arch-fallback.h

* Enable repacking only on AVX-512 machines

* Update comments in repack.cpp

* Address q2k comments

---------

Co-authored-by: Manogna-Sree <elisetti.manognasree@multicorewareinc.com>
2025-08-18 20:30:45 +03:00
diannao
78668cb8d1 docker : add cann build pipeline (llama/14591)
* docker: add cann build pipeline

* docker: add cann build pipeline

* docker: fix cann devops

* cann : fix multi card hccl

* Update ggml/src/ggml-cann/ggml-cann.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Update ggml-cann.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2025-08-18 20:30:45 +03:00
Ruben Ortlam
41e161657e Vulkan: Fix minor debug mode issues (llama/14899)
* vulkan: fix debug mode issues

* vulkan: remove broken check_results GGML_OP_SET_ROWS support
2025-08-18 20:30:45 +03:00
hipudding
572152d6af CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)
* CANN: Improve loading efficiency after converting weights to NZ format.

* CANN: fix typo
2025-08-18 20:30:45 +03:00
lhez
4904bc3bda opencl: add mul_mat_f32_f32_l4_lm and mul_mat_f16_f32_l4_lm (llama/14809) 2025-08-18 20:30:45 +03:00
uvos
8ed27b407d HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama/14949) 2025-08-18 20:30:45 +03:00
Johannes Gäßler
113d88686b CUDA: skip masked KV slices for all FA kernels (llama/14924) 2025-08-18 20:30:45 +03:00
uvos
4e624e42fa HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (llama/14945) 2025-08-18 20:30:45 +03:00
uvos
7f203f41aa HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (llama/14930)
This is useful for testing for regressions on GCN with CDNA hardware.

With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
2025-08-18 20:30:45 +03:00
uvos
a3899e78af HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)
LLVM with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, simply ignore this warning.
2025-08-18 20:30:45 +03:00
hipudding
c42e55e054 CANN: Add ggml_set_rows (llama/14943) 2025-08-18 20:30:45 +03:00
Sigbjørn Skjæret
682d659416 cuda : add softcap fusion (llama/14907) 2025-08-18 20:30:45 +03:00
Aman Gupta
577f47111e CUDA: add roll (llama/14919)
* CUDA: add roll

* Make everything const, use __restrict__
2025-08-18 20:30:45 +03:00
xctan
4dca34a4de ggml-cpu : deduplicate scalar implementations (llama/14897)
* remove redundant code in riscv

* remove redundant code in arm

* remove redundant code in loongarch

* remove redundant code in ppc

* remove redundant code in s390

* remove redundant code in wasm

* remove redundant code in x86

* remove fallback headers

* fix x86 ggml_vec_dot_q8_0_q8_0
2025-08-18 20:30:45 +03:00
Akarshan Biswas
4908e9dd05 SYCL: Add set_rows support for quantized types (llama/14883)
* SYCL: Add set_rows support for quantized types

This commit adds support for the GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and the BF16
type in the SYCL backend.

The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.

This addresses part of the TODOs mentioned in the code.

* Use get_global_linear_id() instead

ggml-ci

* Fix formatting

ggml-ci

* Use const for ne11 and size_t variables in set_rows_sycl_q

ggml-ci

* Increase block size for q kernel to 256

ggml-ci

* Cleanup imports

* Add float.h to cpy.hpp
2025-08-18 20:30:45 +03:00
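
For context on the entry above, here is a rough CPU-side sketch of what a set_rows operation does conceptually: rows of a float source are written into a (possibly quantized) destination at positions given by an index tensor. The flat layout and the quantize_row callback are simplifications, not the SYCL kernels from the commit.

    #include <cstddef>
    #include <cstdint>

    // Conceptual reference loop: dst row idx[i] receives the quantized contents of src row i.
    using quantize_row_fn = void (*)(const float * src, void * dst, int64_t n);

    static void set_rows_ref(const float * src, int64_t n_cols, int64_t n_src_rows,
                             const int64_t * idx,
                             void * dst, size_t dst_row_size,
                             quantize_row_fn quantize_row) {
        for (int64_t i = 0; i < n_src_rows; ++i) {
            void * dst_row = (char *) dst + idx[i] * dst_row_size;
            quantize_row(src + i * n_cols, dst_row, n_cols);
        }
    }
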
Johannes Gäßler
24d3524bfd CUDA: fix pointer incrementation in FA (llama/14916) 2025-08-18 20:30:45 +03:00
Alberto Cabrera Pérez
923619ffd5 sycl: refactor quantization to q8_1 (llama/14815)
* sycl: quantization to q8_1 refactor

* Refactored src1 copy logic in op_mul_mat
2025-08-18 20:30:45 +03:00
Kai Pastor
45784c05ae cmake : Fix BLAS link interface (ggml/1316) 2025-08-18 20:30:45 +03:00
Kai Pastor
01bdc522e0 vulkan : fix 32-bit builds (ggml/1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
2025-08-18 20:30:45 +03:00
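
For context on the entry above, this is roughly how the Vulkan headers define non-dispatchable handles, which is why the member is a VkPipeline_T* on 64-bit targets but a plain uint64_t on 32-bit ones; the snippet is paraphrased, so see vulkan_core.h for the authoritative, current form.

    #include <stdint.h>

    /* Paraphrased from the Vulkan headers: non-dispatchable handles are pointers to
       opaque structs on 64-bit targets, but plain 64-bit integers on 32-bit targets. */
    #if defined(__LP64__) || defined(_WIN64) /* || other 64-bit platform checks */
        #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T * object;
    #else
        #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
    #endif

    VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipeline)  /* VkPipeline_T* on 64-bit, uint64_t on 32-bit */
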
Georgi Gerganov
9446500b9d scripts : update sync scripts 2025-08-18 20:30:45 +03:00
Daniel Bevenius
040510a132 node : add win platform check for require path (#3363)
This commit adds a check for the platform in use and adjusts the path to
the addon.node shared library.

The motivation for this change is that on Windows the addon.node library is
built into build\bin\Release, while on Linux it is built into build/Release.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3360
2025-08-15 14:54:23 +02:00