mirror of https://github.com/ggerganov/whisper.cpp.git synced 2025-06-30 14:30:15 +02:00
Abhilash Majumder a0ddd8392c whisper : add SYCL support ()
* add changes from llama upstream

* add sycl abstraction

* add sycl build

* update cmake

* add sycl build config

* fix bug

* fix bug

* refactor build

* fix bug

* update build

* call build

* use sycl header

* add examples

* add target

* fix typecast in quant.c

* readd fp16 and readme

* fix quant typecast

* add sample

* add readme

* remove cxx file check
2024-02-23 09:22:24 +02:00

whisper.cpp/examples/sycl

This example program provides tools for running whisper.cpp with SYCL on Intel GPUs.

Tool

  Tool Name        Function                                                                        Status
  ls-sycl-device   List all SYCL devices with ID, compute capability, max work group size, etc.    Supported

ls-sycl-device

Lists all SYCL devices with their ID, compute capability, maximum work group size, etc.

  1. Build whisper.cpp for SYCL for all targets.

  2. Enable the oneAPI runtime environment:

     source /opt/intel/oneapi/setvars.sh

  3. Run the tool:

     ./build/bin/ls-sycl-device

Check the device IDs in the startup log, for example:

found 4 SYCL devices:
  Device 0: Intel(R) Arc(TM) A770 Graphics,	compute capability 1.3,
    max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
  Device 1: Intel(R) FPGA Emulation Device,	compute capability 1.2,
    max compute_units 24,	max work group size 67108864,	max sub group size 64,	global mem size 67065057280
  Device 2: 13th Gen Intel(R) Core(TM) i7-13700K,	compute capability 3.0,
    max compute_units 24,	max work group size 8192,	max sub group size 64,	global mem size 67065057280
  Device 3: Intel(R) Arc(TM) A770 Graphics,	compute capability 3.0,
    max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
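
For reference, a listing like the one above can be produced with standard SYCL 2020 device queries. The sketch below is not the actual ls-sycl-device.cpp source, only a minimal illustration (assuming the oneAPI DPC++ compiler, e.g. icpx -fsycl) of the properties the tool reports:

    // Minimal sketch only -- not the actual ls-sycl-device.cpp source.
    // Enumerates all SYCL devices and prints the same kind of properties.
    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
        auto devices = sycl::device::get_devices(sycl::info::device_type::all);
        printf("found %zu SYCL devices:\n", devices.size());
        for (size_t i = 0; i < devices.size(); ++i) {
            const auto & dev = devices[i];
            printf("  Device %zu: %s, max compute_units %u, max work group size %zu, global mem size %llu\n",
                   i,
                   dev.get_info<sycl::info::device::name>().c_str(),
                   dev.get_info<sycl::info::device::max_compute_units>(),
                   dev.get_info<sycl::info::device::max_work_group_size>(),
                   (unsigned long long) dev.get_info<sycl::info::device::global_mem_size>());
        }
        return 0;
    }

Each get_info<> query above maps directly to one field of the listing (compute units, work group size, global memory size).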

  Attribute                 Note
  compute capability 1.3    Level Zero runtime, recommended
  compute capability 3.0    OpenCL runtime, slower than Level Zero in most cases
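
Whether a given device is served by the Level Zero or the OpenCL runtime can also be checked programmatically via the device's backend. The sketch below is a minimal illustration, assuming the oneAPI DPC++ compiler (sycl::backend::ext_oneapi_level_zero is a DPC++ extension value, not part of core SYCL 2020):

    // Minimal sketch: print which backend (runtime) each SYCL device uses.
    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
        for (const auto & dev : sycl::device::get_devices()) {
            const char * backend = "other";
            switch (dev.get_backend()) {
                case sycl::backend::ext_oneapi_level_zero: backend = "Level Zero"; break; // recommended
                case sycl::backend::opencl:                backend = "OpenCL";     break; // usually slower
                default: break;
            }
            printf("%s -> %s\n", dev.get_info<sycl::info::device::name>().c_str(), backend);
        }
        return 0;
    }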