Fujimoto Seiji 448f3d3b93
tests : add script to benchmark whisper.cpp on LibriSpeech corpus (#2999)
* tests : add script to benchmark whisper.cpp on LibriSpeech corpus

LibriSpeech is a widely-used benchmark dataset for training and
testing speech recognition models.

This adds a set of scripts to measure the recognition accuracy of
whisper.cpp models, following common benchmark standards.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* Document how to prepare `whisper-cli` and model files

Feedback from Daniel Bevenius.

This adds a short code example showing how to prepare the `whisper-cli`
command, to make the initial setup step a little clearer.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* tests : Simplify how to set up Python environment

Based on feedback from Georgi Gerganov.

Instead of setting up a virtual environment in the Makefile, let users
set up the Python environment themselves. This is better since users may
have their own preferred workflow/toolkit.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

---------

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-04-04 19:51:26 +03:00

whisper.cpp/tests/librispeech

LibriSpeech is a standard dataset for training and evaluating automatic speech recognition systems.

This directory contains a set of tools to evaluate the recognition performance of whisper.cpp on the LibriSpeech corpus.

Quick Start

  1. (Prerequisite) Build whisper-cli and prepare a Whisper model in ggml format.

    $ # Execute the commands below in the project root dir.
    $ cmake -B build
    $ cmake --build build --config Release
    $ ./models/download-ggml-model.sh tiny
    

    Consult whisper.cpp/README.md for more details.
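
    If the build succeeded, the binary and the model should be at the
    paths below (assuming the default CMake layout; multi-config
    generators may place the binary under build/bin/Release instead):

    $ ls ./build/bin/whisper-cli
    $ ls ./models/ggml-tiny.bin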

  2. Download the audio files from the LibriSpeech project.

    $ make get-audio
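
    LibriSpeech is distributed via openslr.org, and the standard archive
    unpacks into a LibriSpeech/<subset>/ directory tree. Assuming the
    Makefile keeps that layout and fetches the test-clean subset, you can
    sanity-check the download like this:

    $ ls LibriSpeech/test-clean | head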
    
  3. Set up the Python environment for computing the WER (word error rate) score.

    $ pip install -r requirements.txt
    

    For example, if you use virtualenv, you can set it up as follows:

    $ python3 -m venv venv
    $ . venv/bin/activate
    $ pip install -r requirements.txt
    
  4. Run the benchmark test.

    $ make
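
    The inference parameters are ordinary make variables (see the How-to
    guides below), so you can also override them for a single run
    directly on the command line, since command-line assignments take
    precedence over those in the makefile:

    $ # assumes models/ggml-base.en.bin has already been downloaded
    $ make WHISPER_MODEL=base.en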
    

How-to guides

How to change the inference parameters

Create eval.conf and override the default variables.

WHISPER_MODEL = large-v3-turbo
WHISPER_FLAGS = --no-prints --threads 8 --language en --output-txt

Check out eval.mk for more details.
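
The model named in WHISPER_MODEL must already be present in ggml format.
Assuming eval.mk resolves the name to models/ggml-<name>.bin the same way
the Quick Start does, you can fetch it with the same download script:

$ ./models/download-ggml-model.sh large-v3-turbo   # run in the project root
$ make                                             # run in tests/librispeech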