Default Branch

0083335ba0 · coreml : backport CoreML features to macos < 14 (#3255) · Updated 2025-06-24 09:24:27 +02:00

Branches

bff8dc248a · talk-llama : sync llama.cpp · Updated 2025-05-13 12:20:19 +02:00 · 235 behind, 21 ahead

0055356fbc · cli : avoid std::exchange · Updated 2025-05-07 12:23:06 +02:00 · 274 behind, 10 ahead

10acc21fa3 · make : fix samples glob pattern · Updated 2025-04-30 13:20:50 +02:00 · 302 behind, 1 ahead

becd0c888e · whisper : reduce delta_min from 1000ms to 100ms · Updated 2025-04-10 11:25:29 +02:00 · 403 behind, 1 ahead

e400aeb770 · examples : add new sources · Updated 2025-04-02 14:52:29 +02:00 · 421 behind, 3 ahead

05ce7476ae · ggml-ci: update input env variables to GG_BUILD_ · Updated 2025-03-14 09:14:44 +01:00 · 562 behind, 1 ahead

00ddb10fe2 · select utf8 codepage on windows · Updated 2025-02-19 10:00:39 +01:00 · 673 behind, 2 ahead

b0aeef2d52 · ci : fix windows builds to use 2019 · Updated 2024-11-21 13:28:14 +01:00 · 940 behind, 1 ahead

b67bdc9430 · disable · Updated 2024-11-20 22:18:58 +01:00 · 940 behind, 4 ahead

511579cc15 · ci : use local ggml · Updated 2024-11-16 19:31:57 +01:00 · 984 behind, 1 ahead

552419f2c0 · ggml : aligned malloc -> malloc · Updated 2024-10-31 20:40:11 +01:00 · 1084 behind, 3 ahead

ceb77363cd · ggml : disable CUDA graphs for non-llama.cpp projects · Updated 2024-06-26 19:14:22 +02:00 · 1381 behind, 1 ahead

267e15a46d · cuda : avoid async allocs in CUDA mel code · Updated 2024-06-12 08:52:15 +02:00 · 1498 behind, 1 ahead

5801b8ac64 · cuda : fix HIPBLAS build · Updated 2024-06-11 18:13:43 +02:00 · 1499 behind, 1 ahead

13c5446759 · Update ggml-cuda/mmvq.cu · Updated 2024-06-11 16:37:32 +02:00 · 1501 behind, 2 ahead

059bcd3009 · ci : fix CUDA builds · Updated 2024-06-11 10:40:19 +02:00 · 1501 behind, 1 ahead

ba69578828 · whisper : add whisper_token_count helper · Updated 2024-03-25 13:46:07 +01:00 · 1645 behind, 2 ahead

66df44b0b7 · alloc : fix allocation data of pre-allocated leafs · Updated 2024-03-16 15:47:14 +01:00 · 1654 behind, 2 ahead

f25edade2b · whisper : alternative way to handle the external encoders · Updated 2024-02-12 15:32:26 +01:00 · 1781 behind, 2 ahead

15c4fdce45 · chess : tuning performance · Updated 2023-11-30 09:50:47 +01:00 · 2011 behind, 21 ahead