* whisper : check state->ctx_metal not null
* whisper : add whisper_context_params { use_gpu }
* whisper : new API with params & deprecate old API
* examples : use no-gpu param && whisper_init_from_file_with_params
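A minimal sketch of the new initialization path named in the commits above, assuming only the API surface they mention (whisper_context_params with a use_gpu flag, whisper_context_default_params(), whisper_init_from_file_with_params()); the model path is a placeholder:

    #include "whisper.h"

    int main(void) {
        // start from the defaults and opt out of GPU offload (the "no-gpu" param)
        struct whisper_context_params cparams = whisper_context_default_params();
        cparams.use_gpu = false;

        // replaces the now-deprecated whisper_init_from_file()
        struct whisper_context * ctx =
            whisper_init_from_file_with_params("models/ggml-base.en.bin", cparams);
        if (ctx == NULL) {
            return 1;
        }

        // ... run whisper_full() etc. as before ...

        whisper_free(ctx);
        return 0;
    }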
* whisper.objc : enable metal & disable on simulator
* whisper.swiftui, metal : enable metal & support load default.metallib
* whisper.android : use new API
* bindings : use new API
* addon.node : fix build & test
* bindings : update Java binding
* bindings : add missing whisper_context_default_params_by_ref WHISPER_API for java
* metal : use SWIFTPM_MODULE_BUNDLE for GGML_SWIFT and reuse library load
* metal : move bundle var into block
* metal : use SWIFT_PACKAGE instead of GGML_SWIFT
* style : minor updates
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* sync : ggml (backend v2, k-quants, CUDA opts, Metal opts, etc.)
* metal : allow env metal variable to override resource path (#1415)
* Allow env variable to override resource path
* Update ggml-metal.m
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
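For reference, the override works by checking the environment before falling back to the default resource lookup; a hedged sketch (the GGML_METAL_PATH_RESOURCES name follows the change referenced above, the fallback detail is an assumption):

    #include <cstdlib>

    // if set, this points at the directory containing ggml-metal.metal /
    // default.metallib; when unset, the default (bundle) lookup is used
    static const char * metal_resource_path() {
        return std::getenv("GGML_METAL_PATH_RESOURCES");
    }

Typical usage: GGML_METAL_PATH_RESOURCES=/path/to/resources ./main -m models/ggml-base.en.bin -f samples/jfk.wav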
* sync : restore common / main from `master`
* sync : restore whisper from `master`
* talk-llama : update to latest llama.cpp
* ruby : fix build
* ggml : fix 32-bit ARM build
* ggml : fix MIN / MAX macro collisions + update ios bindings
* ggml : fix ifdefs and MIN / MAX again
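The MIN / MAX collisions above are the classic clash with system-header macros; the fix usually takes this shape (a sketch, not the exact diff):

    // drop any definitions leaked in by system headers, then redefine locally
    #undef MIN
    #undef MAX

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))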
* examples : fix Obj-C and Swift examples
* ggml : fix 32-bit ARM compatibility
* ggml : one more attempt to fix 32-bit ARM compat
* whisper : fix support for larger graphs
---------
Co-authored-by: Chris Raethke <codesoda@users.noreply.github.com>
* metal : init
* whisper : factor out graph builds
* whisper : allocate encoder and decoder using ggml-alloc
* whisper : ggml-alloc is now supported
* whisper : CoreML support ggml-alloc
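The ggml-alloc commits above follow the measure-then-allocate pattern of that API; a rough sketch, with build_graph() as a hypothetical stand-in for the factored-out graph builds:

    #include <cstdint>
    #include <vector>

    #include "ggml.h"
    #include "ggml-alloc.h"

    struct ggml_cgraph * build_graph(); // hypothetical encoder/decoder graph builder

    // measure the graph once, then create a real allocator over a fixed buffer
    static struct ggml_allocr * alloc_for_graph(std::vector<uint8_t> & buf) {
        const size_t align = 32; // tensor alignment used by ggml-alloc

        // measure pass: only sizes are tracked, no tensor data is written
        struct ggml_allocr * measure = ggml_allocr_new_measure(align);
        const size_t mem_size = ggml_allocr_alloc_graph(measure, build_graph());
        ggml_allocr_free(measure);

        // real allocator: reuse this buffer on every run via ggml_allocr_reset()
        buf.resize(mem_size);
        return ggml_allocr_new(buf.data(), buf.size(), align);
    }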
* build : fix ggml-alloc
* ios : update submodule
* extra : update sync-ggml.sh script to also sync ggml-alloc
* ci : see if this is causing the crash
* whisper : refactor ggml-alloc init
* whisper.android : try to fix build
* whisper : initial Metal version
* ci : try to debug vmem issue
* metal : decoder works on GPU!
* metal : add multi-decoder support
* ggml : fix ggml_nbytes (probably temp solution)
* metal : run "cross" step on the GPU
* whisper : remove ggml_repeat in the encoder
* whisper : offload the Encoder to Metal
* ggml : use simpler ggml_nbytes() implementation
* ggml-alloc : try to make CI happy by reducing vram to 128GB
* whisper : add whisper_allocr to wrap ggml_allocr
* whisper : factor out alloc init in a function
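A sketch of what the whisper_allocr wrapper plus the factored-out init might look like (member names and the callback shape are assumptions; a std::function parameter like this is also why the <functional> header shows up two commits below):

    #include <cstdint>
    #include <functional>
    #include <vector>

    #include "ggml-alloc.h"

    // bundle the allocator with the buffer it hands tensors out of
    struct whisper_allocr {
        struct ggml_allocr * alloc = nullptr;

        std::vector<uint8_t> data; // sized by the measure pass
    };

    // one init helper for encoder, decoder and cross graphs alike
    static void whisper_allocr_graph_init(whisper_allocr & allocr,
                                          std::function<struct ggml_cgraph *()> && get_graph) {
        const size_t align = 32;

        allocr.alloc = ggml_allocr_new_measure(align);
        const size_t mem_size = ggml_allocr_alloc_graph(allocr.alloc, get_graph());
        ggml_allocr_free(allocr.alloc);

        allocr.data.resize(mem_size);
        allocr.alloc = ggml_allocr_new(allocr.data.data(), allocr.data.size(), align);
    }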
* cmake : update to support Metal build
* whisper : add <functional> header
* objc : fix build (no Metal yet)
* ios : add Metal support
* swiftui : fix build
* metal : speed-up KQ multiplication
* metal : sync latest llama.cpp kernels
* readme : add Metal info
* ios : update submodule
* coreml : add code to toggle Core ML config (CPU, ANE, GPU)
* bench : fix timings by running a pre-heat
* bench : start benching the decoder
* whisper : add ggml_mul_mat_pad
* bench : fix uninitialized vars
* whisper : add comment for disabling mul-mat padding
* whisper : add description of ggml_mul_mat_pad
* whisper : clean-up ggml_mul_mat_pad
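The padding trick above helps Metal mat-mul kernels, which run faster on aligned leading dimensions; whether to pad is a trade-off, since small matrices don't amortize the extra copy. The pad and threshold values below are illustrative assumptions, not the exact ones used:

    #include <cstdint>

    // pad only when dim 0 is unaligned and large enough to benefit
    static bool should_pad(int64_t ne0, int64_t pad = 32) {
        if (ne0 % pad == 0) {
            return false; // already aligned, nothing to gain
        }
        return ne0 / pad >= 8; // below this, the copy overhead dominates
    }

    static int64_t padded(int64_t ne0, int64_t pad = 32) {
        return ((ne0 + pad - 1) / pad) * pad; // round up to a multiple of pad
    }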
* metal : remove the "concurrent" flag
* bench : variable n_past
* ios : update SPM package
* Do not use _GNU_SOURCE gratuitously.
What is needed to build whisper.cpp and the examples is the availability
of definitions from The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some BSD functionality that is not specified in POSIX.1.
That was true until NUMA support was recently added to ggml, so GNU libc
extensions are now enabled for Linux builds to cover it. There is no
need to penalize musl libc, which simply follows the standards.
Keeping feature test macros (FTMs) out of the source code gives greater
flexibility to those wanting to reuse it in third-party apps, as they
can build it with a minimal FTM (_XOPEN_SOURCE=600) or other FTMs
depending on their needs.
It builds without issues on Alpine (musl libc), Ubuntu (glibc), and MSYS2.
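Sketched as preprocessor logic, the policy above amounts to the
following (in the actual build these are passed as compiler flags, not
hard-coded in the sources):

    // must appear before any system header is included
    #if defined(__linux__)
    #define _GNU_SOURCE        // GNU extensions, needed since ggml grew NUMA support
    #else
    #define _XOPEN_SOURCE 600  // SUSv3 / POSIX.1-2001 + XSI extensions
    #endif

    #include <cstring>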
* examples : include SDL headers before other headers
Avoids a macOS build error when _DARWIN_C_SOURCE is not defined, caused
by SDL2 relying on the Darwin extensions memset_pattern4/8/16 (from
string.h).
* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK
* make : use BSD-specific FTMs to enable alloca on BSDs
* make : fix OpenBSD build by exposing newer POSIX definitions
* cmake : follow recent FTM improvements from Makefile
* talk-llama : use posix_madvise() instead of madvise() derived from BSD
sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' examples/talk-llama/llama-util.h
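The call the rewrite lands on is the standard POSIX spelling of the
BSD-derived madvise(); for reference:

    #include <cstddef>
    #include <sys/mman.h>

    // hint that a mapped region will be needed soon;
    // posix_madvise()/POSIX_MADV_WILLNEED replace madvise()/MADV_WILLNEED
    static void hint_willneed(void * addr, size_t length) {
        posix_madvise(addr, length, POSIX_MADV_WILLNEED);
    }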
* make : enable Darwin extensions for macOS builds
This is an attempt at fixing a macOS build error caused by the
RLIMIT_MEMLOCK define not being available there without Darwin
extensions.
* feat: adding session support
* readme: adding --session info in examples/talk-llama
* llama: adding session fixes
* readme: updating session doc
* talk-llama: set need_to_save_session to true so that the session is saved in subsequent interactions
* talk-llama: adding missing function which updates session_tokens
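A hedged sketch of the llama.cpp session calls this feature builds on;
llama_load_session_file() / llama_save_session_file() are the real API,
while the helper names and error handling here are simplified:

    #include <vector>

    #include "llama.h"

    // try to restore a previous conversation; on success, session_tokens
    // holds the tokens whose evaluation can be skipped
    static bool restore_session(llama_context * ctx, const char * path,
                                std::vector<llama_token> & session_tokens) {
        session_tokens.resize(llama_n_ctx(ctx));
        size_t n_loaded = 0;
        if (!llama_load_session_file(ctx, path, session_tokens.data(),
                                     session_tokens.size(), &n_loaded)) {
            return false; // first run, or an incompatible session file
        }
        session_tokens.resize(n_loaded);
        return true;
    }

    // persist the evaluated tokens once need_to_save_session is set
    static void save_session(llama_context * ctx, const char * path,
                             const std::vector<llama_token> & session_tokens) {
        llama_save_session_file(ctx, path, session_tokens.data(), session_tokens.size());
    }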
There is a `speak.sh` file in `./examples/talk-llama`, as described in the README.
However, `talk-llama.cpp` was using `./examples/talk/speak.sh`; this commit corrects that.