Commit Graph

11 Commits

Eve
26dd2f06ac
make portability_enumeration_ext apple only (llama/5757)
2024-03-08 11:38:31 +02:00
Georgi Gerganov
fac5b43830
code : normalize enum names (llama/5697)
* code : normalize enum names

ggml-ci

* code : cont

* code : cont
2024-02-25 19:58:46 +02:00
UEXTM.com
1cb64f7368
Introduce backend GUIDs (ggml/743)
* Introduce backend GUIDs

Initial proposed implementation of backend GUIDs
(Discussed in https://github.com/ggerganov/ggml/pull/741)

Hardcoded CPU backend GUID (for now)
Change ggml_backend_is_cpu logic to use GUID

* Remove redundant functions

Remove redundant functions `ggml_backend_i::get_name` and `ggml_backend_guid` which are not desired for future expansion

* Add spaces to match style

Co-authored-by: slaren <slarengh@gmail.com>

* Fix brace style to match

Co-authored-by: slaren <slarengh@gmail.com>

* Add void to () in function signature

Co-authored-by: slaren <slarengh@gmail.com>

* Add back ggml_backend_guid and make CPU_GUID a local static in ggml_backend_cpu_guid

* add guids to all backends

ggml-ci

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-02-25 19:58:45 +02:00
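The GUID mechanism described in the commit above can be sketched as follows. This is an illustrative mock, not the actual ggml code: each backend carries a 16-byte GUID, the CPU GUID lives in a local static (as the commit notes), and `ggml_backend_is_cpu`-style checks become a GUID comparison. All type and function names here are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

// Illustrative stand-in for ggml's backend GUID scheme.
typedef struct { uint8_t data[16]; } backend_guid;

typedef struct {
    const backend_guid * guid;  // identity of the backend implementation
    void               * ctx;   // backend-specific state (unused in this sketch)
} backend;

// CPU GUID as a local static, mirroring the commit description.
// The bytes themselves are arbitrary placeholders.
static const backend_guid * cpu_backend_guid(void) {
    static const backend_guid guid = {{ 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef,
                                        0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef }};
    return &guid;
}

static bool guid_matches(const backend_guid * a, const backend_guid * b) {
    return memcmp(a->data, b->data, sizeof a->data) == 0;
}

// "Is this the CPU backend?" becomes a GUID comparison rather than a name check.
static bool backend_is_cpu(const backend * b) {
    return guid_matches(b->guid, cpu_backend_guid());
}
```

Comparing GUIDs instead of name strings lets a backend change its display name freely without breaking identity checks.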
0cc4m
8daa534818
Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()
2024-02-22 15:12:36 +02:00
0cc4m
9fca69b410
Add check for VK_KHR_portability_enumeration for MoltenVK support
2024-02-22 15:12:36 +02:00
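The check this commit adds can be sketched with the Vulkan API mocked out: scan the driver's instance extension list and only request `VK_KHR_portability_enumeration` (plus the matching instance-create flag) when it is actually advertised, which is what MoltenVK needs. The real change lives in ggml_vk_instance_init(); the structures and flag value below are illustrative assumptions.

```c
#include <stdint.h>
#include <string.h>

#define PORTABILITY_ENUMERATION_EXT "VK_KHR_portability_enumeration"
// Stand-in for VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR.
#define CREATE_FLAG_ENUMERATE_PORTABILITY 0x00000001u

typedef struct {
    const char * enabled_exts[8];
    uint32_t     enabled_count;
    uint32_t     flags;
} instance_config;

// Enable the portability extension only if the driver advertises it.
static void configure_portability(instance_config * cfg,
                                  const char * const * available, uint32_t n) {
    for (uint32_t i = 0; i < n; i++) {
        if (strcmp(available[i], PORTABILITY_ENUMERATION_EXT) == 0) {
            cfg->enabled_exts[cfg->enabled_count++] = PORTABILITY_ENUMERATION_EXT;
            cfg->flags |= CREATE_FLAG_ENUMERATE_PORTABILITY;
            return;
        }
    }
    // Extension absent (e.g. a native Vulkan driver): create the instance normally.
}
```

Gating on the advertised extension list keeps instance creation working on conformant drivers that do not know the extension.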
Mathijs de Bruin
b26c645420
Add preprocessor checks for Apple devices.
Based on work by @rbourgeat in https://github.com/ggerganov/llama.cpp/pull/5322/files
2024-02-22 15:12:36 +02:00
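A preprocessor guard of the kind this commit introduces might look like the sketch below. The `__APPLE__` macro and the `TargetConditionals.h` header are real; the helper function is an illustrative assumption, not the actual change.

```c
// Compile Apple-specific Vulkan/MoltenVK handling only on Apple platforms.
#if defined(__APPLE__)
#include <TargetConditionals.h>
#endif

// Illustrative helper: does this build target need portability handling?
static int needs_portability_ext(void) {
#if defined(__APPLE__)
    return 1;  // MoltenVK is a portability (non-conformant) Vulkan implementation
#else
    return 0;  // native Vulkan drivers elsewhere
#endif
}
```

The non-Apple branch still compiles everywhere, so the guard costs nothing on other platforms.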
Mathijs de Bruin
1879ec556e
Resolve ErrorIncompatibleDriver with Vulkan on MacOS.
Refs:
- https://chat.openai.com/share/7020ce72-65fc-45ec-b7be-9d9d798a5f3f
- https://github.com/SaschaWillems/Vulkan/issues/954
- https://github.com/haasn/libplacebo/issues/128
- https://github.com/KhronosGroup/Vulkan-Samples/issues/476
2024-02-22 15:12:35 +02:00
Georgi Gerganov
74a6acc999
cmake : fix VULKAN and ROCm builds (llama/5525)
* cmake : fix VULKAN and ROCm builds

* cmake : fix (cont)

* vulkan : fix compile warnings

ggml-ci

* cmake : fix

ggml-ci

* cmake : minor

ggml-ci
2024-02-19 15:53:23 +02:00
Neuman Vong
a38efcb9fd
vulkan: Find optimal memory type but with fallback (llama/5381)
* @0cc4m feedback

* More feedback @0cc4m
2024-02-19 15:53:22 +02:00
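The "optimal memory type but with fallback" idea can be sketched with the Vulkan structures mocked out: first search for a memory type matching all preferred property flags, then retry with only the required flags. Everything below is an illustrative assumption, not the commit's actual code.

```c
#include <stdint.h>

#define NOT_FOUND UINT32_MAX

typedef struct {
    uint32_t property_flags;  // stand-in for VkMemoryPropertyFlags
} memory_type;

// Return the first memory type allowed by type_bits whose flags cover `flags`.
static uint32_t find_memory_type(const memory_type * types, uint32_t count,
                                 uint32_t type_bits, uint32_t flags) {
    for (uint32_t i = 0; i < count; i++) {
        if ((type_bits & (1u << i)) && (types[i].property_flags & flags) == flags) {
            return i;
        }
    }
    return NOT_FOUND;
}

// Try the preferred flag set first, then fall back to the bare requirements.
static uint32_t find_memory_type_with_fallback(const memory_type * types, uint32_t count,
                                               uint32_t type_bits,
                                               uint32_t preferred, uint32_t required) {
    uint32_t idx = find_memory_type(types, count, type_bits, preferred);
    if (idx == NOT_FOUND) {
        idx = find_memory_type(types, count, type_bits, required);
    }
    return idx;
}
```

A typical use is preferring device-local plus host-visible memory and falling back to host-visible alone on GPUs that offer no combined type.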
Sergio López
04839bae22
vulkan: only use M-sized matmul on Apple GPUs (llama/5412)
* vulkan: refactor guess_matmul_pipeline for vendor

Refactor ggml_vk_guess_matmul_pipeline to simplify adding per-vendor
conditionals.

Signed-off-by: Sergio Lopez <slp@redhat.com>

* vulkan: only use M-sized matmul on Apple GPUs

L-sized and S-sized matmuls are broken on Apple GPUs, so force the
M-sized variant with this vendor.

Signed-off-by: Sergio Lopez <slp@redhat.com>

---------

Signed-off-by: Sergio Lopez <slp@redhat.com>
2024-02-12 09:31:12 +02:00
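A per-vendor pipeline guess of the kind this commit refactors toward can be sketched as below: pick the S/M/L shader variant from the problem size, but force the M-sized pipeline when the device reports Apple's vendor ID (0x106B, Apple's PCI vendor ID). The function name echoes ggml_vk_guess_matmul_pipeline, but the size thresholds and all other details are illustrative assumptions.

```c
#include <stdint.h>

#define VENDOR_ID_APPLE 0x106Bu  // Apple's PCI vendor ID

typedef enum { MATMUL_S, MATMUL_M, MATMUL_L } matmul_pipeline;

// Choose a matmul shader variant; force M-size on Apple GPUs, where the
// commit reports the S and L variants are broken.
static matmul_pipeline guess_matmul_pipeline(uint32_t vendor_id,
                                             uint32_t m, uint32_t n) {
    if (vendor_id == VENDOR_ID_APPLE) {
        return MATMUL_M;
    }
    // Size thresholds below are placeholder values for illustration.
    if (m <= 32 && n <= 32) return MATMUL_S;
    if (m <= 64 && n <= 64) return MATMUL_M;
    return MATMUL_L;
}
```

Centralizing the choice in one function is what makes adding further per-vendor conditionals cheap, which is the stated goal of the refactor.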
Georgi Gerganov
8b17a2f776
src : relocate new backend sources
2024-02-10 09:55:47 +02:00