Commit Graph

2917 Commits

Author SHA1 Message Date
Jeff Bolz
b33841c453 vulkan: add RTE variants for glu/add/sub/mul/div (llama/14653) 2025-07-20 00:23:50 +03:00
R0CKSTAR
ab79c6c118 cuda: fix build warnings in set-rows.cu (unused variable) (llama/14687)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-20 00:23:50 +03:00
Anton Mitkov
a6b9271c2c sycl: Hotfix for non dnnl codepath (llama/14677) 2025-07-20 00:23:50 +03:00
shalinib-ibm
ded2e3cf6d ggml : refactor llamafile_sgemm PPC code (llama/14673)
Remove unnecessary templates from class definitions and packing functions
Reduce deeply nested conditionals and if-else switching in the mnpack function
Replace repetitive code with inline functions in packing functions

2 ~ 7% improvement in Q8 Model
15 ~ 50% improvement in Q4 Model

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-07-20 00:23:50 +03:00
Akarshan Biswas
ebb0e9d0ed SYCL: use 1D kernel for set_rows (llama/14618)
* SYCL: Use 1D kernel for set_rows

* Remove dangling comment

* Refactor and use ceil_div
2025-07-20 00:23:50 +03:00
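The `ceil_div` refactor above is the standard idiom for sizing a 1D kernel launch: round up so every element is covered by some work-group. A minimal sketch of the helper (not the actual SYCL code):

```cpp
#include <cstdint>

// Round-up integer division: how many work-groups of size `block`
// are needed to cover `n` elements in a 1D kernel launch.
constexpr int64_t ceil_div(int64_t n, int64_t block) {
    return (n + block - 1) / block;
}
```

A launch then dispatches `ceil_div(n, block) * block` work-items and masks off the tail where the global index is `>= n`.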
Anton Mitkov
24803d62c6 sycl: Batched mulmat rework for oneDNN dispatch (llama/14617) 2025-07-20 00:23:50 +03:00
Sigbjørn Skjæret
0611387d17 cuda : add set rows for bf16 (llama/14664) 2025-07-20 00:23:50 +03:00
Yavor Ivanov
fe33572b22 cuda : add ELU support (llama/14657) 2025-07-20 00:23:50 +03:00
Georgi Gerganov
21308b4e6e ggml : add build-time message to remind about ggml_set_rows (llama/14661)
ggml-ci
2025-07-20 00:23:50 +03:00
Yavor Ivanov
3cad26d807 metal : Add missing unary ops Metal support (llama/14660) 2025-07-20 00:23:50 +03:00
Aman Gupta
66b3a39bdc CUDA: add set rows for f32 and f16 (llama/14551)
* CUDA: add set rows for f32 and f16

* Review: change kernel params, use strides from host

* Use 1-d kernel

* Review: use int64_t for blockDim.x, rename nb->s for clarity
2025-07-20 00:23:50 +03:00
Charles Xu
032697b9a8 whisper: validate get_rows support for cpu extra buffer (#3323) 2025-07-14 15:13:44 +03:00
Greg Sadetsky
a16da91365 examples : update links in wasm examples (#3318)
* fix 404 link

* update link in whisper.wasm example

* update example in command.wasm

* update link in bench.wasm example

* update link in stream.wasm example
2025-07-12 23:22:35 +02:00
Georgi Gerganov
3775c503d5 sync : resolve conflicts (#0)
ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov
6ddff4d96a talk-llama : sync llama.cpp
ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov
6d64e4abf3 sync : ggml 2025-07-12 19:23:56 +03:00
Georgi Gerganov
85dcc74b88 sync : resolve conflicts (ggml/0)
ggml-ci
2025-07-12 19:23:56 +03:00
Jeff Bolz
915fc153a5 vulkan: support SET_ROWS (llama/14587)
* vulkan: support SET_ROWS

Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.

* vulkan: optimize set_rows

Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
2025-07-12 19:23:56 +03:00
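The "fastmod" mentioned in the optimization commit most likely refers to Lemire-style division-free modulo via a precomputed magic constant (an assumption — the shader code may differ). A sketch using the GCC/Clang `__uint128_t` extension:

```cpp
#include <cstdint>

// Lemire-style "fastmod": replace n % d (for a fixed divisor d) with
// two multiplies using a precomputed 64-bit constant. Useful on GPUs
// where integer division is slow.
struct FastMod {
    uint64_t M;  // magic constant: floor(2^64 / d) + 1
    uint32_t d;
    explicit FastMod(uint32_t d_) : M(UINT64_MAX / d_ + 1), d(d_) {}
    uint32_t mod(uint32_t n) const {
        uint64_t lowbits = M * (uint64_t)n;  // wraps mod 2^64, intentionally
        return (uint32_t)(((__uint128_t)lowbits * d) >> 64);
    }
};
```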
Jeff Bolz
8670a3fd5d vulkan: optimizations for deepseek prompt processing (llama/14555)
* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader

* vulkan: increase coopmat2 mul_mat_id tile size

* vulkan: optimize mul_mat_id row_ids search to batch loads, and port to coopmat1 path

* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)
2025-07-12 19:23:56 +03:00
Tarek Dakhran
74f6d47904 model : support LiquidAI LFM2 hybrid family (llama/14620)
**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert to GGUF, install transformers from source:
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```
2025-07-12 19:23:56 +03:00
Slobodan Josic
a4ff4ec9cb HIP : Add HIP 7.0+ compatibility for hipBLAS compute types (llama/14634) 2025-07-12 19:23:56 +03:00
rmatif
b0754136be opencl: add tiled mul_mat_f16_f32 (llama/14535)
* add tiled mul_mat_f16_f32

* fix trailing whitespace

* add insightful comments
2025-07-12 19:23:56 +03:00
lhez
6f113cbcaa opencl: add set_rows for f16 and f32 (llama/14547)
* opencl: add `set_rows` for `f16` and `f32`

* opencl: better choose workgroup size for `set_rows`
2025-07-12 19:23:56 +03:00
Akarshan Biswas
3c21cde540 SYCL: Initial set_rows kernel implementation (llama/14562)
* SYCL: Initial set_rows kernel implementation

* Revert max_threads to 256

* Refactor set_rows and address review comments

* Deduplicate conversion function

* Remove guard before kernel launch and refactor

* Fix and add back SFINAE
2025-07-12 19:23:56 +03:00
compilade
fb885fa48b cuda : support Falcon-H1 state size for SSM_SCAN (llama/14602) 2025-07-12 19:23:56 +03:00
Xuan-Son Nguyen
2021870fb8 ggml : add ggml_scale_bias (llama/14417)
* ggml : add ggml_scale_bias

* ggml_vec_mad1_f32

* add more simd

* add CUDA

* sycl

* vulkan

* cann (placeholder)

* opencl

* will this fix cpu?

* fix cuda

* suggestions from coderabbit

* fix cann compile error

* vDSP_vsmsa

* rm __ARM_FEATURE_SVE

* use memcpy for op params

* make code looks more consistent

* use scalar for __ARM_FEATURE_SVE

* add x param to ggml_vec_mad1_f32
2025-07-12 19:23:56 +03:00
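`ggml_scale_bias` extends the existing scale op with an additive term; the `vDSP_vsmsa` commit in the list is Apple's vector*scalar + scalar primitive for exactly this. A scalar reference sketch of what every backend in the commit vectorizes:

```cpp
#include <cstddef>

// Scalar reference for the fused scale+bias op: y[i] = x[i] * s + b.
// The commit adds SIMD, CUDA, SYCL, Vulkan, and OpenCL variants of
// this loop; on Apple platforms vDSP_vsmsa does it in one call.
void scale_bias_ref(float* y, const float* x, size_t n, float s, float b) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = x[i] * s + b;
    }
}
```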
Miaoqian Lin
48b18f9eb8 ggml : prevent integer overflow in gguf tensor size calculation (llama/14595) 2025-07-12 19:23:56 +03:00
Jeff Bolz
fadb3233b6 vulkan: optimize flash attention split_k_reduce (llama/14554)
* vulkan: allow FA split_k with smaller KV values

* vulkan: spread split_k_reduce work across more threads

k_num can get rather large. Use the whole workgroup to reduce the M/L values.

Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
2025-07-12 19:23:56 +03:00
Jeff Bolz
9750e4c988 vulkan : fix rope with partial rotation and non-cont src (llama/14582) 2025-07-12 19:23:56 +03:00
Georgi Gerganov
c3942b3db6 cuda : fix rope with partial rotation and non-cont src (llama/14580)
* cuda : fix rope non-cont

ggml-ci

* cont : fix multi-rope + add test

ggml-ci

* sycl : try fix

ggml-ci

* cont : fix sycl + clean-up cuda

ggml-ci
2025-07-12 19:23:56 +03:00
Aman Gupta
98e7beac6c CUDA: add bilinear interpolation for upscale (llama/14563) 2025-07-12 19:23:56 +03:00
R0CKSTAR
7e9c6bbab2 musa: fix build warnings (unused variable) (llama/14561)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-12 19:23:56 +03:00
Aman Gupta
8e545f466c CUDA: add bf16 and i32 to getrows (llama/14529) 2025-07-12 19:23:56 +03:00
Eve
e753b9a952 vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (llama/14485)
Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260

Co-authored-by: Rémy Oudompheng <remyoudompheng@gmail.com>
2025-07-12 19:23:56 +03:00
Jeff Bolz
9d0c408260 vulkan: fix rms_norm+mul fusion (llama/14545)
The fused operation was grabbing the epsilon value from the wrong place.

Add an env var to disable fusion.

Add some missing checks for supported shapes/types.

Handle fused rms_norm+mul in check_results.
2025-07-12 19:23:56 +03:00
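For context on the epsilon the fused kernel was reading from the wrong place: RMS norm divides by sqrt(mean(x²) + eps), and the fusion folds the elementwise multiply by the weight into the same pass. A CPU reference sketch:

```cpp
#include <cmath>
#include <vector>

// Reference for fused rms_norm+mul: y[i] = x[i] / rms * w[i], where
// rms = sqrt(mean(x^2) + eps). eps belongs to the rms_norm op's
// parameters; the bug was fetching it from the wrong op after fusion.
std::vector<float> rms_norm_mul(const std::vector<float>& x,
                                const std::vector<float>& w, float eps) {
    double ss = 0.0;
    for (float v : x) ss += (double)v * v;
    float inv_rms = 1.0f / std::sqrt((float)(ss / x.size()) + eps);
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) y[i] = x[i] * inv_rms * w[i];
    return y;
}
```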
Jeff Bolz
3aebb8d5d3 vulkan: Handle updated FA dim2/3 definition (llama/14518)
* vulkan: Handle updated FA dim2/3 definition

Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit.

* handle null mask for gqa

* allow gqa with dim3>1
2025-07-12 19:23:56 +03:00
Sigbjørn Skjæret
df5af1dc75 opencl: add GELU_ERF (llama/14476) 2025-07-12 19:23:56 +03:00
Georgi Gerganov
10d0d28f7c metal : disable fast math in all quantize kernels (llama/14528)
ggml-ci
2025-07-12 19:23:56 +03:00
luyhcsu
af304ef080 CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (llama/14002)
Co-authored-by: luyuhong <luyuhong@kylinos.cn>
2025-07-12 19:23:56 +03:00
Sigbjørn Skjæret
e8138c51d2 ggml : implement GEGLU_ERF and GEGLU_QUICK ops (llama/14445) 2025-07-12 19:23:56 +03:00
lhez
7cec4cc83a opencl : broadcast for soft_max (llama/14510) 2025-07-12 19:23:56 +03:00
Jeff Bolz
a432929d58 vulkan: support mixed/deepseekR1 FA head sizes (llama/14509)
* vulkan: better parameterize FA by head sizes

* vulkan: support mixed/deepseekR1 FA head sizes
2025-07-12 19:23:56 +03:00
Johannes Gäßler
4aaf8114e7 ggml: backward pass for split swiglu (llama/14483) 2025-07-12 19:23:56 +03:00
Nicolò Scipione
0ca760433c Fix conditional enabling following arch checks for ggml-sycl (llama/14504)
Signed-off-by: nscipione <nicolo.scipione@codeplay.com>
2025-07-12 19:23:56 +03:00
Georgi Gerganov
ed639c7f22 kv-cache : use ggml_set_rows (llama/14285)
* kv-cache : use ggml_set_rows

ggml-ci

* graph : separate k and v indices

ggml-ci

* cont : remove redundant ifs

ggml-ci

* kv-cache : improve find_slot impl

* kv-cache : bounds-check when accessing slot_info indices

* kv-cache : add comments

ggml-ci

* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends

ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov
0abd0660e1 ggml : fix FA mask dim 2 and 3 (llama/14505)
* ggml : fix FA mask dim 2 and 3

ggml-ci

* backends : unsupport batched FA in CUDA and Vulkan

ggml-ci

* vulkan : disable FA for mask->ne[2] != 1
2025-07-12 19:23:56 +03:00
Aman Gupta
9cde908c0a CUDA: add dynamic shared mem to softmax, refactor general usage (llama/14497) 2025-07-12 19:23:56 +03:00
compilade
d2d120c256 llama : initial Mamba-2 support (llama/9126)
* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The max index is 31, so trimming the arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models
to avoid some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This would otherwise cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON.

* cuda : graceful fallback for Mamba-1 models with weird embd size
2025-07-12 19:23:56 +03:00
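The SSM_SCAN op at the heart of the Mamba work above is a gated linear recurrence over time. A heavily simplified scalar sketch of the recurrence (one head, one state dimension; real kernels vectorize over heads and state dims, and per the commit the D skip-connection is applied outside the scan):

```cpp
#include <cmath>
#include <vector>

// Simplified scalar SSM scan:
//   h_t = exp(dt * A) * h_{t-1} + dt * B * x_t
//   y_t = C * h_t
// A, B, C, dt are per-timestep tensors in the real op; held constant
// here to show only the sequential dependence the scan resolves.
std::vector<float> ssm_scan(const std::vector<float>& x, float A, float B,
                            float C, float dt) {
    std::vector<float> y(x.size());
    float h = 0.0f;
    for (size_t t = 0; t < x.size(); ++t) {
        h = std::exp(dt * A) * h + dt * B * x[t];
        y[t] = C * h;
    }
    return y;
}
```

With A = 0 the state decay factor is 1 and the scan degenerates to a running sum, which makes the sequential dependence easy to see.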
Aman Gupta
fb5c4095ee CUDA: add softmax broadcast (llama/14475)
* CUDA: add softmax broadcast

* Pass by const ref

* Review: Use blockDims for indexing, remove designated initializers

* Add TODO for non-contiguous input/output
2025-07-12 19:23:56 +03:00
Johannes Gäßler
70515ed728 CUDA: broadcasting for FlashAttention mask (llama/14500) 2025-07-12 19:23:56 +03:00