I've been using the new Criterion benchmarks and noticed that they
take a _long_ time to build before the benchmarks can run. It turns out
`cargo build` was producing 3 separate benchmark binaries, with most of
Nu's functionality compiled into each one (Cargo builds every top-level
file under `benches/` as its own binary).
As a simple temporary fix, I've moved all the benchmarks into a single
file so that we only build 1 binary.
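For reference, the consolidated file looks roughly like this — a minimal sketch, not the exact code from this PR (group and function names are placeholders): all the `criterion_group!`s live in one file under `benches/`, and a single `criterion_main!` ties them together, so Cargo builds exactly one bench target.

```rust
// benches/benchmarks.rs -- one file, one bench binary (names are placeholders)
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_parsing(c: &mut Criterion) {
    // Stand-in body; the real benchmarks parse Nu source here.
    c.bench_function("parse_default_env_file", |b| b.iter(|| 2 + 2));
}

fn bench_eval(c: &mut Criterion) {
    // Stand-in body; the real benchmarks evaluate Nu source here.
    c.bench_function("eval default_env.nu", |b| b.iter(|| 2 + 2));
}

// Several groups are fine, as long as there is exactly one criterion_main!
// in exactly one file under benches/.
criterion_group!(parser_benches, bench_parsing);
criterion_group!(eval_benches, bench_eval);
criterion_main!(parser_benches, eval_benches);
```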
### Future work
It would be nice to split the unrelated benchmarks out into modules, but
when I tried that, a separate binary still got built for each one. I
suspect Criterion's macros are doing something funny with module or file
names. I've left a FIXME in the code to investigate this further.
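One candidate explanation worth checking (an assumption on my part, not something I've verified here): Cargo's target auto-discovery, rather than Criterion, treats each top-level file under `benches/` as its own binary, while a subdirectory containing a `main.rs` is a single target. The eventual module split might then look something like this (paths and module names hypothetical):

```rust
// benches/benchmarks/main.rs -- hypothetical future layout: one bench target,
// one module per benchmark area. Cargo treats a subdirectory of benches/
// that contains a main.rs as a single target.
mod eval_benchmarks;   // benches/benchmarks/eval_benchmarks.rs
mod parser_benchmarks; // benches/benchmarks/parser_benchmarks.rs

use criterion::criterion_main;

// Each module would define its own criterion_group! named `benches`.
criterion_main!(parser_benchmarks::benches, eval_benchmarks::benches);
```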
A quick follow-up to https://github.com/nushell/nushell/pull/7686. This
adds benchmarks for evaluating `default_env.nu` and `default_config.nu`,
because evaluating config takes up the lion's share of Nushell's startup
time. The benchmarks will help us speed up Nu's startup and test
execution.
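The shape of the new benchmarks, as a minimal sketch: `get_default_env()` and `eval_source()` below are hypothetical stand-ins for fetching the bundled `default_env.nu` and evaluating it through Nu's parser and engine; the real code wires this through Nu's internals.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in: the real benchmark reads Nu's bundled default_env.nu.
fn get_default_env() -> &'static str {
    "$env.PROMPT_INDICATOR = '> '" // placeholder source, not the real file
}

// Hypothetical stand-in: the real benchmark parses and evaluates `source`
// through Nu's engine; here we only simulate some work so the sketch runs.
fn eval_source(source: &str) -> usize {
    source.len()
}

fn bench_eval_default_env(c: &mut Criterion) {
    let source = get_default_env();
    c.bench_function("eval default_env.nu", |b| {
        b.iter(|| eval_source(black_box(source)))
    });
}

criterion_group!(benches, bench_eval_default_env);
criterion_main!(benches);
```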
```
eval default_env.nu     time:   [4.2417 ms 4.2596 ms 4.2780 ms]
...
eval default_config.nu  time:   [1.9362 ms 1.9439 ms 1.9523 ms]
```
This PR sets up [Criterion](https://github.com/bheisler/criterion.rs)
for benchmarking in the main `nu` crate, and adds some simple parser
benchmarks.
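Concretely, this is the standard Criterion wiring: `criterion` as a dev-dependency plus a `[[bench]]` entry with `harness = false` for `benches/parser_benchmark.rs` in `nu`'s `Cargo.toml`. A minimal sketch of the bench file itself (not the PR's exact code; the source snippet is a placeholder, and `nu_parser::parse`'s signature changes across Nu versions):

```rust
// benches/parser_benchmark.rs -- a sketch, not the PR's exact code
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use nu_protocol::engine::{EngineState, StateWorkingSet};

fn bench_parse_default_env(c: &mut Criterion) {
    let engine_state = EngineState::new();
    // Placeholder source; the real benchmark parses the bundled default_env.nu.
    let source: &[u8] = b"$env.PROMPT_INDICATOR = '> '";
    c.bench_function("parse_default_env_file", |b| {
        b.iter(|| {
            let mut working_set = StateWorkingSet::new(&engine_state);
            // Signature as of this era of nu-parser: working set, optional
            // file name, source bytes, `scoped` flag, alias denylist.
            nu_parser::parse(&mut working_set, None, black_box(source), false, &[])
        })
    });
}

criterion_group!(benches, bench_parse_default_env);
criterion_main!(benches);
```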
To run the benchmarks, just run `cargo bench` or `cargo bench -- <regex
matching benchmark names>` from the repo root:
```bash
〉cargo bench -- parse
...
     Running benches/parser_benchmark.rs (target/release/deps/parser_benchmark-75d224bac82d5b0b)
parse_default_env_file  time:   [221.17 µs 222.34 µs 223.61 µs]
Found 8 outliers among 100 measurements (8.00%)
  5 (5.00%) high mild
  3 (3.00%) high severe

parse_default_config_file
                        time:   [1.4935 ms 1.4993 ms 1.5059 ms]
Found 11 outliers among 100 measurements (11.00%)
  7 (7.00%) high mild
  4 (4.00%) high severe
```
Existing benchmarks from `nu-plugin` have been moved into the main `nu`
crate to keep all our benchmarks in one place.