This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a, and CDNA3/GFX942 are supported by the new MFMA-enabled code path. The code path and stream-k are only enabled on CDNA3 for now, as it fails to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3, due to issues in the AMD-provided BLAS libraries.
This commit also makes MMQ more aware of different warp sizes; as a side effect, on GCN GPUs this improves the performance of all quant formats except q4_0 and q4_1, which regress slightly.
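For context, the arithmetic performed by a single 16x16x16 int8 MFMA tile, written as a scalar C++ reference (the lane-to-register mapping on real CDNA hardware is different; the function and parameter names here are illustrative, not the commit's actual code):
```cpp
#include <cstdint>

// Reference semantics of one 16x16x16 int8 MFMA tile: D = A*B + C, with
// int8 inputs and int32 accumulators (names are illustrative).
void mfma_16x16x16_i8_ref(const int8_t A[16][16], const int8_t B[16][16],
                          int32_t C[16][16]) {
    for (int m = 0; m < 16; ++m) {
        for (int n = 0; n < 16; ++n) {
            int32_t acc = C[m][n];
            for (int k = 0; k < 16; ++k) {
                acc += int32_t(A[m][k]) * int32_t(B[k][n]);
            }
            C[m][n] = acc;
        }
    }
}
```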
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
computation
When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and a 15% speedup on decode.
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
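For reference, the two-stage reduction described above, modeled as plain C++ (the real kernel uses Metal's simd_sum and threadgroup memory; all names below are illustrative stand-ins):
```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Host-side model of the kernel's two-stage sum over d_state: stage 1 is
// the per-SIMD-group reduction (simd_sum on device), with one partial per
// group stored in threadgroup ("shared") memory; stage 2 reduces the
// partials (illustrative only).
float threadgroup_sum(const std::vector<float> & vals, size_t simd_width) {
    const size_t n_groups = (vals.size() + simd_width - 1) / simd_width;
    std::vector<float> shared(n_groups, 0.0f); // stand-in for threadgroup memory

    // Stage 1: each SIMD group sums its slice of the d_state values.
    for (size_t g = 0; g < n_groups; ++g) {
        const size_t end = std::min((g + 1) * simd_width, vals.size());
        for (size_t i = g * simd_width; i < end; ++i) {
            shared[g] += vals[i];
        }
    }

    // Stage 2: reduce the per-group partials (done with a secondary
    // simd_sum on device, valid when n_groups <= simd_width).
    float total = 0.0f;
    for (size_t g = 0; g < n_groups; ++g) {
        total += shared[g];
    }
    return total;
}
```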
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
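A minimal sketch of the relationship this assertion encodes (names are illustrative):
```cpp
#include <cassert>

// The secondary simd_sum reduces one partial sum per SIMD group, so all
// partials must fit within a single SIMD group (illustrative check).
void check_reduction_shape(int threadgroup_size, int simd_width) {
    const int n_simd_groups = threadgroup_size / simd_width;
    assert(n_simd_groups <= simd_width &&
           "the secondary simd_sum can only reduce up to simd_width partials");
}
```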
* feat: Parallelize over d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the amount of parallelism here is not
worth the added complexity or the overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-Authored-By: Georgi Gerganov <ggerganov@gmail.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally for it to fall back to default portPos specifier if an
invalid portPos is specified. As a consequence, we can remove associated
code.
* Convert weight format to NZ for 310P
* Remove NZ format conversion for quantized weights
* Clean up code
* Fix
* Make the conditions for converting weights to NZ format consistent
* Clean up code
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan
* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with GEMM (no need for im2col); a sketch of the idea follows after this list
* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations that have different graphs; adds tests
* Performance fixes: minimized branch divergence, used collectives to
eliminate redundant calculations, removed macros.
* Kernel shared memory size check
* Updates test-backend-ops to support graphs for performance
measurement.
* Apple/Win32 compile errors fixed
* Subgroup size used to determine tile size -> fixes llvmpipe errors.
* Collectives disabled by default.
* Intel support is disabled as the performance is poor.
* Conv2d enabled for Intel with collectives disabled, disabled for Apple
* test-backend-ops modifications are reverted
* Trailing spaces and missing override fixed.
* Triggering pipeline relaunch.
* Code formatted with .clang-format.
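For intuition, a scalar C++ model of the direct (im2col-free) convolution mentioned above: the convolution is treated as a GEMM whose K-dimension elements are gathered from the input on the fly rather than materialized by im2col. The NCHW layout, stride-1/no-padding simplifications, and all names are illustrative, not the shader's actual tiling:
```cpp
#include <vector>

// Implicit-GEMM 2D convolution sketch (single image): the output, viewed
// as an (OC x OH*OW) matrix, is the GEMM of the (OC x K) kernel matrix
// with a (K x OH*OW) matrix, K = IC*KH*KW, whose elements are gathered
// from the input directly instead of via im2col.
void conv2d_implicit_gemm(const std::vector<float> & src,    // IC*IH*IW
                          const std::vector<float> & kernel, // OC*IC*KH*KW
                          std::vector<float> & dst,          // OC*OH*OW
                          int IC, int IH, int IW, int OC, int KH, int KW) {
    const int OH = IH - KH + 1, OW = IW - KW + 1;
    const int K  = IC * KH * KW;
    for (int oc = 0; oc < OC; ++oc) {
        for (int o = 0; o < OH * OW; ++o) {   // GEMM N dimension
            const int oy = o / OW, ox = o % OW;
            float acc = 0.0f;
            for (int k = 0; k < K; ++k) {     // GEMM K dimension
                const int ic = k / (KH * KW);
                const int ky = (k / KW) % KH, kx = k % KW;
                // gather the would-be im2col element on the fly
                acc += kernel[((oc * IC + ic) * KH + ky) * KW + kx] *
                       src[(ic * IH + oy + ky) * IW + ox + kx];
            }
            dst[(oc * OH + oy) * OW + ox] = acc;
        }
    }
}
```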
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply $<LINK_ONLY:...> in more places.
* CMake config: Wire OpenMP dependency
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
* This commit allows building a static xcframework by adding a
BUILD_STATIC_XCFRAMEWORK option. When enabled, the build-xcframework.sh
script builds a self-contained static whisper.xcframework.
The motivation for this change is to allow command-line binaries to link
against whisper.cpp without forcing users to install the
whisper.xcframework separately.
* Update build-xcframework.sh
Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
* Address reviewer feedback: remove extra indentation around static xcframework creation.
* squash! Address reviewer feedback: remove extra indentation around static xcframework creation.
Fix whitespaces.
---------
Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
This commit adds a note to the README files of the WASM examples
about the `WHISPER_WASM_SINGLE_FILE` option.
The motivation for this is that the option is currently undocumented and
might surprise users who expect a separate .wasm file to be generated.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3290
This commit adds specific paths to the GitHub Actions workflow file
`.github/workflows/build.yml`.
The motivation for this is to avoid unnecessary builds when unrelated
files are changed, which saves resources and time during the CI process.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3285
This commit fixes the Go bindings build failure on Mac OS X (15.1).
Co-authored-by: Chaitanya Bayapuneni <bvk@mini.cinnamon-interval.ts.net>
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses matrix-matrix addition as part of its input processing,
wrongly triggering CUDA_GRAPH disablement on NVGPUs even when a batch
size of 1 is used.
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs that don't exhibit this pattern keep
their behavior unchanged; a sketch of the matching idea follows below.
* Revert unnecessary formatting changes
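A hedged sketch of the node-name matching idea (the struct and helper here are simplified stand-ins, not the actual CUDA backend code):
```cpp
#include <cstring>

// Simplified stand-in for a graph node (illustrative, not ggml's types).
struct node_t {
    const char * name;           // e.g. "project_per_layer_input-12"
    bool         is_mat_mat_add; // matrix-matrix ADD, normally implies batching
};

// Matrix-matrix addition is normally taken as a sign of batched inference
// and disables CUDA graph capture. Gemma3n's per-layer input projection
// produces the same pattern even at batch size 1, so nodes whose names
// match that projection are exempted from the check.
bool node_disables_cuda_graph(const node_t & n) {
    if (!n.is_mat_mat_add) {
        return false;
    }
    return strstr(n.name, "project_per_layer_input") == nullptr;
}
```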