Compare commits

...

136 Commits

Author SHA1 Message Date
682d593d3f Bump patch version (#16236)
Bump patch version
2025-07-29 22:19:36 +02:00
c4fed5ea84 Don't import IoError on nu-plugin-core without local-socket (#16279)
When you use `nu-plugin-core` without the `local-socket` feature, you
get the following warning:
<img width="1182" height="353" alt="image"
src="https://github.com/user-attachments/assets/dd80af11-4963-4d48-8c93-43e6c2dabafa"
/>

This PR fixes that warning.
2025-07-29 22:19:13 +02:00
27865b9415 Port unsafe_op_in_unsafe_fn fix to FreeBSD (#16275)
Same general idea as https://github.com/nushell/nushell/pull/16266

Fixes the 2024 edition
[`unsafe_op_in_unsafe_fn`](https://doc.rust-lang.org/nightly/edition-guide/rust-2024/unsafe-op-in-unsafe-fn.html)
lint for FreeBSD as well

Add safety comments to both implementations and an assertion before
`MaybeUninit::assume_init`
2025-07-29 22:19:13 +02:00
400da3c698 Fix #16261 (#16266)
- this PR should close #16261 
- fixes #16261

Confirmed to build with Rust-1.86 and to yield a working binary.
2025-07-29 22:19:13 +02:00
3729fcf667 fix(get): run_const uses --optional flag (#16268)
`Get::run_const()` was not updated along with `Get::run()` in #16007.
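
A sketch of what the fix covers, assuming `get` is used in a const context (which is what `run_const` handles); the record and column name are illustrative:

```nushell
const missing = ({a: 1} | get --optional b)
$missing | describe
# => nothing
```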

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-29 22:19:13 +02:00
f390ba1aae fix bare interpolation regression (#16235)
Regression from #16204 

Before:

![](f1995bc71f/before.svg)

After:

![](f1995bc71f/after.svg)

# Tests + Formatting
+1

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-29 22:19:13 +02:00
1b01625e1e Bump to 0.106.0 (#16222)
Bump to 0.106.0
2025-07-23 22:54:21 +08:00
51265b262d fix: highlighting a where command with invalid arguments can duplicate text (#16192)
- fixes #16129

# Description

## Problem
Parsing a multi-span value as RowCondition always produces a RowCondition
expression that covers all the spans.

This is problematic inside OneOf, as it can produce multiple expressions with
overlapping spans, which results in funky highlighting that duplicates text.

## Solution
The only reason for including `SyntaxShape::Closure` in the signature (#15697)
was documentation, making it clear in `help` texts that a closure can be used
as the argument.

As our current parser is shape directed, simplifying the command signature also
simplifies the parsing. Using a RowCondition on its own, and instead making it
always look like a union with `closure(any)`, solves the issue without any
changes to the parser.

Also, RowCondition always accepts closure values anyway, so its textual
representation should indicate that without the need to wrap it in OneOf.

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-22 17:24:21 +03:00
889fe7f97b Add KillLine to the EditCommand parser (#16221)
# Description
Follow-up to https://github.com/nushell/reedline/pull/901
Closes https://github.com/nushell/reedline/issues/935

# User-Facing Changes
You can now use `{edit: KillLine}` to mimic Emacs' `Ctrl-K` behavior.
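
For reference, a keybinding sketch that uses the new edit command; the binding name and the Ctrl-K chord here are illustrative, not part of this PR:

```nushell
# bind Ctrl-K to kill from the cursor to the end of the line (emacs mode)
$env.config.keybindings = ($env.config.keybindings | append {
    name: kill_line
    modifier: control
    keycode: char_k
    mode: emacs
    event: { edit: KillLine }
})
```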
2025-07-22 14:12:24 +02:00
93b407fa88 Fix the nu-path fuzz-target (#16188)
This code didn't compile; it is fixed provisionally by always running the
code path with tilde expansion.

@IanManske worth discussing what we may want to fuzz here.
2025-07-22 14:10:25 +02:00
b0687606f7 group by to table empty (#16219)
- fixes #16217

# Description

> `group-by --to-table` does not always return a table (a list)
> ```nushell
> [] | group-by --to-table something
> #=> ╭──────────────╮
> #=> │ empty record │
> #=> ╰──────────────╯
> ```

Fixed:
```nushell
[] | group-by --to-table something
#=> ╭────────────╮
#=> │ empty list │
#=> ╰────────────╯
```

# Tests + Formatting
+1 test

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-22 14:07:55 +02:00
324aaef0af Bump reedline to 0.41.0 (#16220) 2025-07-22 13:57:19 +02:00
30c38b8c49 fix(parser): repeated ( / parenthesis / opened sub-expressions causes memory leak (#16204)
- fixes #16186

# Description

Don't attempt further parsing when there is no complete sub-expression.

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-21 08:14:35 +03:00
16167a25ec Fix typo in glob example description ("files for folders" → "files or folders") (#16206)
This pull request fixes a typo in the description of a `glob` example:

**Before:**
```
  Search for files for folders that do not begin with c, C, b, M, or s
  > glob "[!cCbMs]*"
```
**After:**
```
  Search for files or folders that do not begin with c, C, b, M, or s
  > glob "[!cCbMs]*"
```
2025-07-18 18:49:50 -05:00
38009c714c feat(completion): enable nucleo's prefer_prefix option (#16183)
cc: @blindFS @ysthakur

# Description
Enable `nucleo`'s `prefer_prefix` configuration option.
Ranks suggestions with matches closer to the start higher than
suggestions that have matches further from the start.

Example: suggestions based on `reverse`:
<table>
<tr>
<td width=200>Before</td>
<td width=200>After</td>
</tr>
<tr>
<td>

```
bytes reverse
polars reverse
reverse
str reverse
```

</td>
<td>


```
reverse
str reverse
polars reverse
bytes reverse
```

</td>
</tr>
</table>

# User-Facing Changes
More relevant suggestions with the fuzzy matching algorithm
(`$env.config.completions.algorithm = "fuzzy"`).

# Tests + Formatting
We might want to add tests to make sure this option keeps working in the
future.

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-17 22:16:55 -04:00
e263991448 deps: bump sysinfo to v0.36 to improve sys temp command (#16195)
# Description
This PR improves `sys temp` command with component names showing up in
the unit column. See related [`sysinfo`
commit](6a5520459e)
for details.

# User-Facing Changes

An example of running `sys temp` on my Linux system with the old
version:

```sh
$ nu -c 'sys temp'
╭────┬──────────────────────┬───────┬───────┬──────────╮
│  # │         unit         │ temp  │ high  │ critical │
├────┼──────────────────────┼───────┼───────┼──────────┤
│  0 │ r8169_0_500:00 temp1 │ 47.50 │ 47.50 │        │
│  1 │ Sensor 2             │ 54.85 │ 54.85 │        │
│  2 │ Composite            │ 47.85 │ 51.85 │    87.85 │
│  3 │ Sensor 1             │ 47.85 │ 72.85 │        │
│  4 │ acpitz temp1         │ 16.80 │ 16.80 │        │
│  5 │ acpitz temp2         │ 16.80 │ 16.80 │        │
│  6 │ junction             │ 66.00 │ 66.00 │   110.00 │
│  7 │ mem                  │ 60.00 │ 60.00 │   100.00 │
│  8 │ edge                 │ 57.00 │ 57.00 │   100.00 │
│  9 │ gigabyte_wmi temp4   │ 47.00 │ 47.00 │        │
│ 10 │ gigabyte_wmi temp3   │ 62.00 │ 62.00 │        │
│ 11 │ gigabyte_wmi temp1   │ 40.00 │ 40.00 │        │
│ 12 │ gigabyte_wmi temp6   │ 51.00 │ 51.00 │        │
│ 13 │ gigabyte_wmi temp5   │ 45.00 │ 45.00 │        │
│ 14 │ gigabyte_wmi temp2   │ 40.00 │ 40.00 │        │
│ 15 │ Tccd2                │ 47.00 │ 47.00 │        │
│ 16 │ Tctl                 │ 62.00 │ 62.00 │        │
╰────┴──────────────────────┴───────┴───────┴──────────╯
```
After this PR you can see the name of the components/devices:
```
$ target/debug/nu -c 'sys temp'
╭────┬──────────────────────────────────────────┬───────┬───────┬──────────╮
│  # │                   unit                   │ temp  │ high  │ critical │
├────┼──────────────────────────────────────────┼───────┼───────┼──────────┤
│  0 │ r8169_0_500:00 temp1                     │ 48.50 │ 48.50 │        │
│  1 │ nvme Sensor 1 WD Blue SN5000 1TB         │ 72.85 │ 72.85 │        │
│  2 │ nvme Sensor 2 WD Blue SN5000 1TB         │ 46.85 │ 46.85 │        │
│  3 │ nvme Composite WD Blue SN5000 1TB        │ 51.85 │ 51.85 │    87.85 │
│  4 │ acpitz temp2                             │ 16.80 │ 16.80 │        │
│  5 │ acpitz temp1                             │ 16.80 │ 16.80 │        │
│  6 │ amdgpu edge                              │ 56.00 │ 56.00 │   100.00 │
│  7 │ amdgpu junction                          │ 66.00 │ 66.00 │   110.00 │
│  8 │ amdgpu mem                               │ 58.00 │ 58.00 │   100.00 │
│  9 │ gigabyte_wmi temp4                       │ 47.00 │ 47.00 │        │
│ 10 │ gigabyte_wmi temp6                       │ 51.00 │ 51.00 │        │
│ 11 │ gigabyte_wmi temp3                       │ 48.00 │ 48.00 │        │
│ 12 │ gigabyte_wmi temp5                       │ 44.00 │ 44.00 │        │
│ 13 │ gigabyte_wmi temp2                       │ 40.00 │ 40.00 │        │
│ 14 │ gigabyte_wmi temp1                       │ 40.00 │ 40.00 │        │
│ 15 │ k10temp Tccd2                            │ 48.50 │ 48.50 │        │
│ 16 │ k10temp Tctl                             │ 49.38 │ 49.38 │        │
│ 17 │ nvme Sensor 2 Samsung SSD 970 EVO 500GB  │ 54.85 │ 54.85 │        │
│ 18 │ nvme Sensor 1 Samsung SSD 970 EVO 500GB  │ 47.85 │ 47.85 │        │
│ 19 │ nvme Composite Samsung SSD 970 EVO 500GB │ 47.85 │ 47.85 │    84.85 │
╰────┴──────────────────────────────────────────┴───────┴───────┴──────────╯
```
2025-07-18 07:17:59 +08:00
0b7c246bf4 Check for interrupt before each IR instruction (#16134)
# Description
Fixes #15308. Surprisingly, I can't find any more issues that bring this
up.

This PR adds a check for interrupt before evaluating each IR
instruction. This makes it possible to ctrl-c out of things that were
difficult/impossible previously, like:
```nushell
loop {}
```

@devyn also [mentioned
previously](https://discord.com/channels/601130461678272522/615329862395101194/1268674828492083327)
that this might be a feasible option, but mentioned some performance
concerns.

I did some benchmarking previously, but it turns out tango isn't the
greatest for this. I don't have hard numbers anymore, but based on some
experimenting I shared in the Discord, it seems like not gating this behind
an experimental option and limiting it to specific instructions should keep
the performance impact minimal.
<details>
<summary>Old benchmarking info</summary>

I did some benchmarks against main. It seems like most benchmarks are
between 1-6% slower, with some oddball exceptions.

The worst case seems to be a giant loop, which makes sense since it's
mostly just jumping back to the beginning, meaning we hit the check over
and over again.

With `bench --pretty -n 100 { for i in 0..1000000 { 1 } }`, it seems
like the `ir-interrupt` branch is ~10% slower for this stress case:
```
main: 94ms 323µs 29ns +/- 1ms 291µs 759ns 
ir-interrupt: 103ms 54µs 708ns +/- 1ms 606µs 571ns

(103ms + 54µs + 708ns) / (94ms + 323µs + 29ns) = 1.0925720801438639
```

The performance numbers here aren't great, but they're not terrible
either, and I think the usability improvement is probably worth it here.

<details>
<summary>Tango benchmarks</summary>

```
load_standard_lib                                  [   2.8 ms ...   2.9 ms ]      +3.53%*
record_create_1                                    [ 101.8 us ... 107.0 us ]      +5.12%*
record_create_10                                   [ 132.3 us ... 135.6 us ]      +2.48%*
record_create_100                                  [ 349.7 us ... 354.6 us ]      +1.39%*
record_create_1000                                 [   3.7 ms ...   3.6 ms ]      -1.30%
record_flat_access_1                               [  94.8 us ...  99.9 us ]      +5.35%*
record_flat_access_10                              [  96.2 us ... 101.3 us ]      +5.33%*
record_flat_access_100                             [ 106.0 us ... 111.4 us ]      +5.07%*
record_flat_access_1000                            [ 203.0 us ... 208.5 us ]      +2.69%
record_nested_access_1                             [  95.3 us ... 100.1 us ]      +5.03%*
record_nested_access_2                             [  98.4 us ... 104.6 us ]      +6.26%*
record_nested_access_4                             [ 100.8 us ... 105.4 us ]      +4.56%*
record_nested_access_8                             [ 104.7 us ... 108.1 us ]      +3.23%
record_nested_access_16                            [ 110.6 us ... 115.8 us ]      +4.71%*
record_nested_access_32                            [ 119.0 us ... 124.0 us ]      +4.20%*
record_nested_access_64                            [ 137.4 us ... 142.2 us ]      +3.47%*
record_nested_access_128                           [ 176.7 us ... 181.8 us ]      +2.87%*
record_insert_1_1                                  [ 106.2 us ... 111.9 us ]      +5.35%*
record_insert_10_1                                 [ 111.2 us ... 116.0 us ]      +4.28%*
record_insert_100_1                                [ 134.3 us ... 139.8 us ]      +4.06%*
record_insert_1000_1                               [ 387.8 us ... 417.1 us ]      +7.55%
record_insert_1_10                                 [ 162.9 us ... 171.5 us ]      +5.23%*
record_insert_10_10                                [ 159.5 us ... 166.2 us ]      +4.23%*
record_insert_100_10                               [ 183.3 us ... 190.7 us ]      +4.04%*
record_insert_1000_10                              [ 411.7 us ... 418.7 us ]      +1.71%*
table_create_1                                     [ 117.2 us ... 120.3 us ]      +2.66%
table_create_10                                    [ 144.9 us ... 152.2 us ]      +5.02%*
table_create_100                                   [ 476.0 us ... 484.8 us ]      +1.86%*
table_create_1000                                  [   3.7 ms ...   3.7 ms ]      +1.55%
table_get_1                                        [ 121.6 us ... 127.0 us ]      +4.40%*
table_get_10                                       [ 112.1 us ... 117.4 us ]      +4.71%*
table_get_100                                      [ 124.0 us ... 129.5 us ]      +4.41%*
table_get_1000                                     [ 229.9 us ... 245.5 us ]      +6.75%*
table_select_1                                     [ 111.1 us ... 115.6 us ]      +4.03%*
table_select_10                                    [ 112.1 us ... 118.4 us ]      +5.65%*
table_select_100                                   [ 142.2 us ... 147.2 us ]      +3.53%*
table_select_1000                                  [ 383.5 us ... 370.3 us ]      -3.43%*
table_insert_row_1_1                               [ 122.5 us ... 127.8 us ]      +4.35%*
table_insert_row_10_1                              [ 123.6 us ... 128.6 us ]      +3.99%*
table_insert_row_100_1                             [ 124.9 us ... 131.2 us ]      +5.05%*
table_insert_row_1000_1                            [ 177.3 us ... 182.7 us ]      +3.07%*
table_insert_row_1_10                              [ 225.7 us ... 234.5 us ]      +3.90%*
table_insert_row_10_10                             [ 229.2 us ... 235.4 us ]      +2.69%*
table_insert_row_100_10                            [ 253.3 us ... 257.6 us ]      +1.69%
table_insert_row_1000_10                           [ 311.1 us ... 318.3 us ]      +2.32%*
table_insert_col_1_1                               [ 107.6 us ... 113.3 us ]      +5.35%*
table_insert_col_10_1                              [ 109.6 us ... 115.3 us ]      +5.27%*
table_insert_col_100_1                             [ 155.5 us ... 159.8 us ]      +2.71%*
table_insert_col_1000_1                            [ 476.6 us ... 480.8 us ]      +0.88%*
table_insert_col_1_10                              [ 167.7 us ... 177.4 us ]      +5.75%
table_insert_col_10_10                             [ 178.5 us ... 194.8 us ]      +9.16%*
table_insert_col_100_10                            [ 314.4 us ... 322.3 us ]      +2.53%*
table_insert_col_1000_10                           [   1.7 ms ...   1.7 ms ]      +1.35%*
eval_interleave_100                                [ 485.8 us ... 506.5 us ]      +4.27%
eval_interleave_1000                               [   3.3 ms ...   3.2 ms ]      -1.51%
eval_interleave_10000                              [  31.5 ms ...  31.0 ms ]      -1.80%
eval_interleave_with_interrupt_100                 [ 473.2 us ... 479.6 us ]      +1.35%
eval_interleave_with_interrupt_1000                [   3.2 ms ...   3.2 ms ]      -1.26%
eval_interleave_with_interrupt_10000               [  32.3 ms ...  31.1 ms ]      -3.81%
eval_for_1                                         [ 124.4 us ... 130.1 us ]      +4.60%*
eval_for_10                                        [ 124.4 us ... 130.3 us ]      +4.80%*
eval_for_100                                       [ 134.5 us ... 141.5 us ]      +5.18%*
eval_for_1000                                      [ 222.7 us ... 244.0 us ]      +9.59%*
eval_for_10000                                     [   1.0 ms ...   1.2 ms ]     +13.86%*
eval_each_1                                        [ 146.9 us ... 153.0 us ]      +4.15%*
eval_each_10                                       [ 152.3 us ... 158.8 us ]      +4.26%*
eval_each_100                                      [ 169.3 us ... 175.6 us ]      +3.76%*
eval_each_1000                                     [ 346.8 us ... 357.4 us ]      +3.06%*
eval_each_10000                                    [   2.1 ms ...   2.2 ms ]      +2.37%*
eval_par_each_1                                    [ 194.3 us ... 203.5 us ]      +4.73%*
eval_par_each_10                                   [ 186.4 us ... 193.1 us ]      +3.59%*
eval_par_each_100                                  [ 213.5 us ... 222.2 us ]      +4.08%*
eval_par_each_1000                                 [ 433.4 us ... 437.4 us ]      +0.93%
eval_par_each_10000                                [   2.4 ms ...   2.4 ms ]      -0.40%
eval_default_config                                [ 406.3 us ... 414.9 us ]      +2.12%*
eval_default_env                                   [ 475.7 us ... 495.1 us ]      +4.08%*
encode_json_100_5                                  [ 132.7 us ... 128.9 us ]      -2.88%*
encode_json_10000_15                               [  35.5 ms ...  34.0 ms ]      -4.15%*
encode_msgpack_100_5                               [  85.0 us ...  83.9 us ]      -1.20%*
encode_msgpack_10000_15                            [  22.2 ms ...  21.7 ms ]      -2.17%*
decode_json_100_5                                  [ 431.3 us ... 421.0 us ]      -2.40%*
decode_json_10000_15                               [ 119.7 ms ... 117.6 ms ]      -1.76%*
decode_msgpack_100_5                               [ 160.6 us ... 151.1 us ]      -5.90%*
decode_msgpack_10000_15                            [  44.0 ms ...  43.1 ms ]      -2.12%*
```

</details>

</details>

# User-Facing Changes
* It should be possible to ctrl-c in situations where it was not
previously possible

# Tests + Formatting
N/A

# After Submitting
N/A
2025-07-17 21:41:54 +08:00
7ce66a9b77 Also type-check optional arguments (#16194)
# Description
Type-check all closure arguments, not just required arguments.

Not doing so looks like an oversight.

# User-Facing Changes

Previously, passing an argument of the wrong type to a closure would
fail if the argument is required, but be accepted (ignoring the type
annotation) if the argument is optional:

```
> do {|x: string| $x} 4
Error: nu::shell::cant_convert

  × Can't convert to string.
   ╭─[entry #13:1:21]
 1 │ do {|x: string| $x} 4
   ·                     ┬
   ·                     ╰── can't convert int to string
   ╰────
> do {|x?: string| $x} 4
4
> do {|x?: string| $x} 4 | describe
int
```

It now fails the same way in both cases.
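
A sketch of the expected post-fix behavior (the error text is illustrative; it mirrors the required-argument case above):

```nushell
do {|x?: string| $x} 4
# => Error: nu::shell::cant_convert
# =>   × Can't convert to string.   (the optional argument is now type-checked too)
```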

# Tests + Formatting
Added tests, the existing tests still pass.

Please let me know if I added the wrong type of test or added them in
the wrong place (I didn't spot similar tests in the nu-cmd-lang crate,
so I put them next to the most-related existing tests I could find).

# After Submitting
I think this is minor enough it doesn't need a doc update, but please
point me in the right direction if not.
2025-07-17 21:38:08 +08:00
c2622589ef fix quotation rules (#16089)
# Description
This script as example for demonstration
```nushell
def miku [] { print "Hiii world! 初音ミクはみんなのことが大好きだよ!" }

def main [leek: int, fn: closure] {
  print $"Miku has ($leek) leeks 🩵"
  do $fn
}
```
---

`escape_for_string_arg` missed quoting strings where `|` and `;` would
split the command into two or more commands:
```console
~:►  nu ./miku.nu '32' '{miku}'
Miku has 32 leeks 🩵
Hiii world! 初音ミクはみんなのことが大好きだよ!
~:►  nu ./miku.nu '32' '{miku};ls|' where type == dir 
Miku has 32 leeks 🩵
Hiii world! 初音ミクはみんなのことが大好きだよ!
╭────┬─────────────┬──────┬────────┬──────────────╮
│  # │    name     │ type │  size  │   modified   │
├────┼─────────────┼──────┼────────┼──────────────┤
│  0 │ Desktop     │ dir  │ 4.0 kB │ 5 months ago │
│  1 │ Documents   │ dir  │ 4.0 kB │ a day ago    │
│  2 │ Downloads   │ dir  │ 4.0 kB │ a day ago    │
│  3 │ Music       │ dir  │ 4.0 kB │ 9 months ago │
│  4 │ Pictures    │ dir  │ 4.0 kB │ 3 weeks ago  │
│  5 │ Public      │ dir  │ 4.0 kB │ 9 months ago │
│  6 │ Templates   │ dir  │ 4.0 kB │ 3 months ago │
│  7 │ Videos      │ dir  │ 4.0 kB │ 9 months ago │
│  8 │ __pycache__ │ dir  │ 4.0 kB │ 3 weeks ago  │
│  9 │ bin         │ dir  │ 4.0 kB │ 3 days ago   │
│ 10 │ repos       │ dir  │ 4.0 kB │ a day ago    │
│ 11 │ trash       │ dir  │ 4.0 kB │ 3 days ago   │
╰────┴─────────────┴──────┴────────┴──────────────╯
```

# User-Facing Changes

This adjustment doesn't introduce any user-facing change, aside from clarifying
the following behavior, which is unintuitive and potentially ambiguous:

```console
~:►  nu ./miku.nu '32' {miku}
Miku has 32 leeks 🩵
Hiii world! 初音ミクはみんなのことが大好きだよ!
~:►  nu ./miku.nu '32' { miku }
Error: nu::parser::parse_mismatch

  × Parse mismatch during operation.
   ╭─[<commandline>:1:9]
 1 │ main 32 "{ miku }"
   ·         ─────┬────
   ·              ╰── expected block, closure or record
   ╰────

~:►  nu ./miku.nu '32' '{' miku '}'
Miku has 32 leeks 🩵
Hiii world! 初音ミクはみんなのことが大好きだよ!
```
2025-07-17 21:36:06 +08:00
1ba4fe0aac Use correct column name in history import -h example (#16190)
Closes #16189

See that issue for details.
2025-07-16 14:04:56 -05:00
5aa1ccea10 fix rest parameter spans (#16176)
Closes #16071

# Description

Rest parameter variables are now fully spanned, instead of just the
first value:

```diff
def foo [...rest] {
  metadata $rest | view span $in.span.start $in.span.end
}
foo 1 2

-1
+1 2
```
2025-07-16 13:02:48 +03:00
00e9e0e6a9 fix(overlay use): report errors in export-env (#16184)
- fixes #10242

# Tests + Formatting
Added 2 tests to confirm `use` and `overlay use` report errors in
`export-env` blocks.
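
For illustration, a minimal sketch of the kind of failure that is now surfaced (the module name and message are made up):

```nushell
module failing {
    export-env { error make { msg: "export-env failed" } }
}
overlay use failing
# the error from the export-env block is now reported instead of being silently swallowed
```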

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-16 10:32:02 +03:00
db1ffe57d3 Panic when converting other I/O errors into our I/O errors (#16160)
# Description
I added a debug-only check that makes sure only non-other I/O errors get
converted.
Since tests run in debug mode, that should help us catch these errors early.

I left the check out in release mode on purpose so we don't crash users who
have nu as their main shell. It's similar to the debug assertions in the same
file that check for unknown spans.
2025-07-16 01:55:02 +03:00
2df00ff498 Revert "Add extern for nu command" (#16180)
Reverts nushell/nushell#16119
2025-07-15 19:18:44 +03:00
c4e8e040ce Add warning when using history isolation with non-SQLite history format (#16151)
# Description
This PR depends on #16147, use `git diff 132ikl/shell-warning
132ikl/isolation-warn` to see only changes from this PR

People seem to get tripped up by this a lot, and it's not exactly
intuitive, so I added a warning if you try to set
`$env.config.history.isolation = true` when using the plaintext file
format:

    Warning: nu::shell::invalid_config

      ⚠ Encountered 1 warnings(s) when updating config

    Warning: nu::shell::incompatible_options

      ⚠ Incompatible options
       ╭─[source:1:33]
     1 │ $env.config.history.isolation = true
       ·                                 ──┬─
       ·                                   ╰── history isolation only compatible with SQLite format
       ╰────
      help: disable history isolation, or set $env.config.history.file_format = "sqlite"


# User-Facing Changes
* Added a warning when using history isolation without using SQLite
history.
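
For reference, the two ways to make the settings consistent again, following the help text above:

```nushell
# keep isolation and switch to the SQLite-backed history...
$env.config.history.file_format = "sqlite"
$env.config.history.isolation = true

# ...or keep the plaintext format and turn isolation off
$env.config.history.isolation = false
```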

# Tests + Formatting
Added a test
2025-07-15 16:40:32 +03:00
59ad605e22 Add ShellWarning (#16147)
# Description
Adds a proper `ShellWarning` enum which has the same functionality as
`ParseWarning`.

Also moves the deprecation from #15806 into `ShellWarning::Deprecated`
with `ReportMode::FirstUse`, so that warning will only pop up once now.

# User-Facing Changes
Technically the change to the deprecation warning from #15806 is user
facing but it's really not worth listing in the changelog
2025-07-15 15:30:18 +03:00
5569f5beff Print errors during stdlib testing (#16128)
# Description
This PR adds error output display for stdlib tests. This makes it easier
to understand why a test is failing. Here's what an example error output
looks like:


![image](https://github.com/user-attachments/assets/ed06b53e-c3f9-4c56-a646-7e6486ce7ec6)


# User-Facing Changes
N/A

# Tests + Formatting
N/A

# After Submitting
N/A
2025-07-15 19:28:13 +08:00
4ed522db93 Add default error codes (#16166)
# Description
Before this PR, errors without error codes are printed somewhat
strangely, with the `×` and the description being printed on the same
line as the `Error:` text:

    Error:   × Invalid literal
       ╭─[entry #1:1:2]
     1 │ "\z"
       ·  ─┬─
       ·   ╰── unrecognized escape after '\' in string
       ╰────


This PR adds a default error code for the different error types:

    Error: nu::parser::error

      × Invalid literal
       ╭─[entry #1:1:2]
     1 │ "\z"
       ·  ─┬─
       ·   ╰── unrecognized escape after '\' in string
       ╰────

While maybe not as informative as a proper error code, it makes
`GenericError`s and other things which don't have error codes look a lot
nicer.

It would be nicer if we could just set `diagnostic(code:
"nu:🐚:error")` at the top of `ShellError`, but unfortunately you
can't set a "default" at the `enum` level and then override it in the
variants. @cptpiepmatz mentioned he might change miette's derive macro
to accommodate this, in that case we can switch the approach here.

# User-Facing Changes
* Errors without error codes now have a default error code corresponding
to the part of Nushell where the error occurred (shell, parser,
compile, etc.)

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-15 13:42:10 +03:00
beb3ec6a49 refactor(get,select,reject)!: deprecate --ignore-errors in favor of --optional (#16007)
# Description
As decided at the team meeting on 2025-06-19, rename `--ignore-errors
(-i)` to `--optional (-o)` with a (currently) indefinite grace period.

After `--ignore-errors (-i)` is removed, the short flag `-i` can be used
for `--ignore-case` (not implemented as of this PR).

# User-Facing Changes
`get`/`select`/`reject`: rename `--ignore-errors (-i)` to `--optional
(-o)` to better reflect its behavior.
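
A quick sketch of the renamed flag (the record is just an example value):

```nushell
# old spelling, deprecated but still accepted during the grace period
{a: 1} | get --ignore-errors b
# new spelling
{a: 1} | get --optional b
{a: 1} | get -o b
```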

# Tests + Formatting
- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
Update docs and inform third parties that integrate with nushell.

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-15 00:26:41 -04:00
a506d3f9b5 fix(completion): put argument completion results before commands (#16112)
#16075

# Description

# User-Facing Changes

Improved experience: more meaningful items at the top of the completion
menu, mostly for the fuzzy/substring matchers.

# Tests + Formatting

Adjusted

# After Submitting
2025-07-14 21:35:36 -04:00
316d2d6af2 Add extern for nu command (#16119)
# Description
Adds an `extern` definition (`KnownExternal`) for the `nu` command. This
way you can do `help nu` and get tab completions for flags on the `nu`
executable

# User-Facing Changes
* You can now view the flags for Nushell itself with `help nu`, and tab
completion for flags on `nu` works
2025-07-15 02:26:40 +03:00
202d3b2d11 Polars limit housekeeping (#16173)
# Description
Removed a todo and fixed an example.

---------

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-07-14 15:32:04 -07:00
288f419f7b print to a value should be a SIGPIPE (#16171)
# Description
Partially handles #14799.
It's difficult to fix all the cases there, but I think it's good to improve
one of them as a small step.

# User-Facing Changes
Given the following code in c:
```C
// write.c
#include <stdio.h>

int main() {
  while (1) {
    printf("a\n");
  }
}
```
After this PR, `./write | 0` exits immediately; in that case,
`./write` receives `SIGPIPE` on Unix. Before this PR, `./write | 0`
ran indefinitely.

-----------------------

Maybe it's easier to see what happens internally from the different
output of the `view ir { ./write | 0 }` command.
### Before
```
# 2 registers, 6 instructions, 7 bytes of data
   0: load-literal           %1, glob-pattern("./write", no_expand = false)
   1: push-positional        %1
   2: redirect-out           null
   3: call                   decl 135 "run-external", %0
   4: load-literal           %0, int(0)
   5: return                 %0
```

### After
```
# 2 registers, 6 instructions, 7 bytes of data
   0: load-literal           %1, glob-pattern("./write", no_expand = false)
   1: push-positional        %1
   2: redirect-out           pipe      # changed, the command's output is a pipe rather than null
   3: call                   decl 136 "run-external", %0
   4: load-literal           %0, int(0)
   5: return                 %0
```
# Tests + Formatting
Added 1 test.

# After Submitting
NaN
2025-07-14 13:26:51 -04:00
c272fa2b0a Update default (scaffold) config blurb (#16165)
Updates the header for the default generated config file to point people
towards `config nu --doc`, adapting some text from that command.
2025-07-14 11:48:06 -04:00
bfe9d8699d fix: unwrap ShellErrorBridge in ByteStream::into_bytes (#16161)

- closes #16159 

# Description

The setup in #16159 produced a `ByteStream` that held the `ShellError`
inside an `std::io::Error`. The `ShellErrorBridge` is designed for exactly
this setup. While figuring out what happened, I found that a sneaky
`ShellErrorBridge` was hiding inside `ByteStream::into_bytes`:
<img width="1034" height="580" alt="image"
src="https://github.com/user-attachments/assets/a0d16ca6-1a60-4c3c-a4cb-5e4b0695f20c"
/>

To fix this, I try to unwrap the bridge and, if that is possible, return
the original `ShellError`. This is already done in multiple places inside
`byte_stream.rs`.

With this fix, we get the original error that looks like this:
<img width="1439" height="733" alt="image"
src="https://github.com/user-attachments/assets/73f4dee7-f986-4f68-9c2c-0140a6e9e2b2"
/>


# User-Facing Changes

None, just a better error.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

To have fewer situations with other I/O errors, I opened #16160 so we can
find out faster which other error was passed around.
2025-07-13 12:53:55 -04:00
60ca889443 fix(overlay): overlay use and overlay hide now update config state (#16154)
- fixes #5986
- fixes #7760
- fixes #8856
- fixes #10592
- fixes #11082

# Description
Unconditionally update the config state after each `overlay use` and
`overlay hide`.

The fix looks simple, but only because the constant improvements and
refactors to the codebase over time made it possible.

Fixing these issues when they were initially filed would have been much
harder.

# User-Facing Changes
Overlays can now add hooks, change `color_config`, and update the config in
general.
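
For illustration, a sketch of an overlay that changes the config (the module name and color are made up):

```nushell
module theme {
    export-env {
        # config changes made by an overlay now take effect immediately
        $env.config.color_config.header = "red"
    }
}
overlay use theme
```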

# Tests + Formatting
No tests added, as I still haven't figured out how to simulate the REPL
in tests.

# After Submitting
N/A

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-13 02:31:16 +03:00
f48656e18b Bump update-informer to v1.3.0 (#16157)
# Description

This PR bumps `update-informer` to 1.3.0 and gets rid of
`NativeTlsHttpClient`.
2025-07-11 12:11:12 -05:00
172a0c44bd fix(get): to be consistent with regular cell-path access on null values (#16155)
# Description
Cell-path accesses on `null` values throw an error, unless the initial
cell-path member is optional:
```nushell
">"; (null).a
# => Error: nu::shell::incompatible_path_access
# => 
# =>   x Data cannot be accessed with a cell path
# =>    ,-[source:1:8]
# =>  1 | (null).a
# =>    :        |
# =>    :        `-- nothing doesn't support cell paths
# =>    `----

">"; (null).a? | describe
# => nothing
```

`get` throws an error on `null` even when the cell-path is optional, and
only returns `null` quietly with the `--ignore-errors` flag
```nushell
">"; null | get a?
# => Error: nu::shell::only_supports_this_input_type
# => 
# =>   x Input type not supported.
# =>    ,-[source:1:1]
# =>  1 | null | get a?
# =>    : ^^|^   ^|^
# =>    :   |     `-- only table or record input data is supported
# =>    :   `-- input type: nothing
# =>    `----

">"; null | get -i a? | describe
# => nothing
```
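
Assuming the fix makes `get` mirror optional cell-path access (as the title says), the post-fix behavior would look like:

```nushell
null | get a? | describe
# => nothing   (previously this errored unless --ignore-errors was passed)
```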

# Tests + Formatting
No breakage.

# After Submitting
N/A

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-11 12:32:31 -04:00
8979e3f5bf update to latest reedline (#16156)
# Description

This updates to the latest reedline 405299b to include
https://github.com/nushell/reedline/pull/898 for dogfooding.

# User-Facing Changes

# Tests + Formatting

# After Submitting
2025-07-11 06:51:23 -05:00
1fcce4cffc fix(gstat): make state entry lowercase (#16153)
A follow up to #15965 based on [this
comment](https://github.com/nushell/nushell/pull/15965#issuecomment-3058106114).

# Description

Make Git state values lowercase instead of title case (as in
[upstream](https://docs.rs/git2/latest/git2/enum.RepositoryState.html)).

# User-Facing Changes

Different values.
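
For illustration (the values are made up, and `gstat` needs the gstat plugin plus a repository):

```nushell
gstat | get state
# before: "Clean"   (title case, as rendered from git2's RepositoryState)
# after:  "clean"
```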
2025-07-10 16:14:45 -05:00
8cf9efafbb build(deps): bump indicatif from 0.17.9 to 0.17.11 (#16136) 2025-07-10 19:22:29 +00:00
8943fcf78d Use Severity::Warning for miette warnings (#16146)
# Description


Changes miette warnings to use `Severity::Warning` instead of the
default `Severity::Error`. This shows them in a different color and with
a different icon:

<img width="650" height="266" alt="image"
src="https://github.com/user-attachments/assets/3ff0d3cf-ab1e-47f2-aff7-586ecea5a32a"
/>


# User-Facing Changes

* Warnings now look less like errors
2025-07-10 20:50:12 +02:00
33a2b98f66 fix typo on documentation about pipes (#16152)
These docs say ">" when what they mean is ">>".

# Description

This documentation is about the use of ">>", but it wrongly says ">".
It has been updated to say ">>" instead.

# User-Facing Changes

The user will see ">>" in the documentation instead of ">".

# Tests + Formatting

There is nothing to test; this is a documentation change.

# After Submitting
2025-07-10 12:16:02 -04:00
05e570aa71 polars: fix datetime type conversion (#16133)
# Description

Conversion from `AnyType::DatetimeOwned` was missing, so datetime
objects could not be represented in Nushell Values. This adds the
conversion.

A slight refactor of the `datetime_from_epoch_nanos` was needed.


# User-Facing Changes

All datetime types can be represented in Nushell.
2025-07-08 12:42:55 -07:00
d8255040f1 Added flag limit for polars arg-sort (#16132)
# Description
Exposes the polars sort option limit for the arg-sort command.

```nu
> ❯ : [1 2 2 3 3] | polars into-df | polars arg-sort --limit 2
╭───┬──────────╮
│ # │ arg_sort │
├───┼──────────┤
│ 0 │        0 │
│ 1 │        1 │
╰───┴──────────╯
```

 
# User-Facing Changes
- The `--limit` flag is now available for `polars arg-sort`

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-07-08 12:24:47 -05:00
4da755895d fix: panic of if command as a constant expr by bringing back Type::Block (#16122)
Fixes #16110. Alternative to #16120 

# Description

# User-Facing Changes

no more panic

# Tests + Formatting

+1

# After Submitting
2025-07-08 20:45:35 +08:00
a674ce2dbc Update winget default INSTALLDIR for per-machine install path (#16026)
# Description

- Update `winget` default INSTALLDIR for the per-machine install path to
`C:\Program Files\nu`

# User-Facing Changes

These changes, along with the manifest update PR (such as [this
example](https://github.com/microsoft/winget-pkgs/pull/266009/files)),
will resolve [issue
#15949](https://github.com/nushell/nushell/issues/15949) and add
`--scope` flag support for `winget install`.

After these changes are merged, the following should occur:

1. When no `nu` is installed, `winget install --id Nushell.Nushell
--scope machine` will install Nushell to `C:\Program Files\nu`.
2. `winget update --id Nushell.Nushell` will upgrade Nushell to the
latest version with machine scope.
3. When no `nu` is installed, `winget install --id Nushell.Nushell` will
install Nushell to `%LOCALAPPDATA%\Programs\nu`.
4. Due to [winget-cli issue
#3011](https://github.com/microsoft/winget-cli/issues/3011), `winget
update --id Nushell.Nushell` will unexpectedly install the latest
version to `C:\Program Files\nu`. The workaround is to run `winget
install --id Nushell.Nushell` again to install the latest version for
user scope.

# Tests + Formatting

1. Nushell install from MSI tests:
https://github.com/nushell/integrations/actions/runs/16088967701
2. Nushell install by Winget tests:
https://github.com/nushell/integrations/actions/runs/16088814557
3. Nushell Upgrade by Winget tests:
https://github.com/nushell/integrations/actions/runs/16088967705 and
test script:
https://github.com/nushell/integrations/blob/main/tests/test-all.nu

# After Submitting

The manifest files need to be updated manually for the next release, and
I will do that.
2025-07-07 08:10:07 +08:00
71d78b41c4 nu-table: optimize table creation and width functions (#15900)
> Further tests are welcomed.

Width precalculation was already implemented, but nothing stops us from doing
heights as well; previously all that calculation was simply wasted (literally).

It affects `table` and `table --expand`.
The only case where it does not help (and even makes things slightly less
optimal) is `table` when `truncation` is used.

Sadly my tests are not showing a clear benefit.
I have no idea why I was expecting something 😞
But it must be there :)

Running `scope commands` + `$env.CMD_DURATION_MS`:

```log
# patch (release)
2355 2462 2210 2356 2303

# main (release)
2375 2240 2202 2297 2385
```

PS: as once mentioned all this stuff ought to be moved out `nu-table`

---------

Signed-off-by: Maxim Zhiburt <zhiburt@gmail.com>
2025-07-06 12:56:42 -05:00
647a740c11 Add all to enable all active experimental options (#16121)
- closes #16118 


# Description

This PR adds the option to pass `all` to the experimental options
parser. This enables all active (not deprecated) experimental
options, to make dogfooding easier.

# User-Facing Changes

A new valid value for `--experimental-options` and
`NU_EXPERIMENTAL_OPTIONS`.
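
A sketch of enabling everything for a single run (nushell's `FOO=bar cmd` environment shorthand is assumed here):

```nushell
# enable every active (non-deprecated) experimental option for this invocation
NU_EXPERIMENTAL_OPTIONS=all nu
```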

# Tests + Formatting

# After Submitting
2025-07-06 02:34:27 -04:00
a317284db6 fix ansi --list missing new items (#16113)
# Description

This fixes an oversight where not all items show in `ansi --list`.

### Before

![image](https://github.com/user-attachments/assets/2fd60744-28e1-4769-b491-7ac92b8b96c6)

### After

![image](https://github.com/user-attachments/assets/422993a5-5f29-4c24-8d90-66b642d72fa4)



# User-Facing Changes

# Tests + Formatting

# After Submitting
2025-07-04 22:05:35 -05:00
a4bd51a11d Fix type checking for assignment operators (#16107)

# Description

Rel: #14429, #16079

Finishes up a TODO in the assignment type checking. 

- For regular assignment operations (only applies to `mut`), type
checking is now done using `type_compatible` (which is what `let` uses)
- This allows some mutable assignments to work which weren't allowed
before

Before:
```nushell
let x: glob = "" 
# => ok, no error
mut x: glob = ""; $x = ""
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'glob' and 'string' are not compatible for the '=' operator.
# =>    ╭─[entry #6:1:19]
# =>  1 │ mut x: glob = ""; $x = ""
# =>    ·                   ─┬ ┬ ─┬
# =>    ·                    │ │  ╰── string
# =>    ·                    │ ╰── does not operate between 'glob' and 'string'
# =>    ·                    ╰── glob
# =>    ╰────

let x: number = 1
# ok, no error
mut x: number = 1; $x = 2
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'number' and 'int' are not compatible for the '=' operator.
# =>    ╭─[source:1:20]
# =>  1 │ mut x: number = 1; $x = 2
# =>    ·                    ─┬ ┬ ┬
# =>    ·                     │ │ ╰── int
# =>    ·                     │ ╰── does not operate between 'number' and 'int'
# =>    ·                     ╰── number
# =>    ╰────
```

After:
```nushell
let x: glob = ""
# ok, no error (same as before)
mut x: glob = ""; $x = ""
# ok, no error

let x: number = 1
# ok, no error (same as before)
mut x: number = 1; $x = 2
# ok, no error
```

- Properly type check compound operations. First checks if the operation
(eg. `+` for `+=`) type checks successfully, and then checks if the
assignment type checks successfully (also using `type_compatible`)
- This fixes some issues where the "long version" of a compound
assignment operator would error, but the compound assignment operator
itself would not

Before:
```nushell
mut x = 1; $x = $x / 2
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'int' and 'float' are not compatible for the '=' operator.
# =>    ╭─[entry #15:1:12]
# =>  1 │ mut x = 1; $x = $x / 2
# =>    ·            ─┬ ┬ ───┬──
# =>    ·             │ │    ╰── float
# =>    ·             │ ╰── does not operate between 'int' and 'float'
# =>    ·             ╰── int
# =>    ╰────

mut x = 1; $x /= 2
# uh oh, no error...

mut x = (date now); $x = $x - 2019-05-10
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'datetime' and 'duration' are not compatible for the '=' operator.
# =>    ╭─[entry #1:1:21]
# =>  1 │ mut x = (date now); $x = $x - 2019-05-10
# =>    ·                     ─┬ ┬ ───────┬───────
# =>    ·                      │ │        ╰── duration
# =>    ·                      │ ╰── does not operate between 'datetime' and 'duration'
# =>    ·                      ╰── datetime
# =>    ╰────

mut x = (date now); $x -= 2019-05-10
# uh oh, no error... (the result of this is a duration, not a datetime)
```

After:
```nushell
mut x = 1; $x = $x / 2
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'int' and 'float' are not compatible for the '=' operator.
# =>    ╭─[entry #5:1:12]
# =>  1 │ mut x = 1; $x = $x / 2
# =>    ·            ─┬ ┬ ───┬──
# =>    ·             │ │    ╰── float
# =>    ·             │ ╰── does not operate between 'int' and 'float'
# =>    ·             ╰── int
# =>    ╰────

mut x = (date now); $x -= 2019-05-10
# => Error: nu::parser::operator_incompatible_types
# => 
# =>   × Types 'datetime' and 'datetime' are not compatible for the '-=' operator.
# =>    ╭─[entry #11:1:21]
# =>  1 │ mut x = (date now); $x -= 2019-05-10
# =>    ·                     ─┬ ─┬ ─────┬────
# =>    ·                      │  │      ╰── datetime
# =>    ·                      │  ╰── does not operate between 'datetime' and 'datetime'
# =>    ·                      ╰── datetime
# =>    ╰────
# =>   help: The result type of this operation is not compatible with the type of the variable.
```

This is technically a breaking change if you relied on the old behavior
(for example, there was a test that broke after this change because it
relied on `/=` improperly type checking)

# User-Facing Changes
* Mutable assignment operations now use the same type checking rules as
normal assignments
* For example, `$x = 123` now uses the same type checking rules as `let
x = 123` or `mut x = 123`
* Compound assignment operations now type check using the same rules as
the operation they use
* Assignment errors will also now highlight the invalid assignment
operator in red


# Tests + Formatting
Adds some tests for the examples given above

# After Submitting
N/A
2025-07-04 02:48:49 -04:00
f6d807bf36 'find --columns .. --regex ..' works. Change help message to match. (#16103)
# Description
`find`'s help message for '--columns' claims incompatibility with
'--regex'. However, both a review of the code and a test show that this
is not true; the two flags are compatible.
This commit removes the incompatibility claim.
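
For example, the two flags can be combined (the pattern and column here are arbitrary):

```nushell
# search only the name column, using a regex
ls | find --columns [name] --regex '^m'
```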

# User-Facing Changes
  * Help message of '--columns' flag in 'find' command is changed

# Tests + Formatting
Tested using 'toolkit check pr'. The stdlib tests don't pass on my
system, but they didn't before this commit either.

# After Submitting
This command isn't mentioned in the main text, but is mentioned in the
command reference. Should I rerun the command reference generator for
the website? Or is that done automatically before releases?
2025-07-03 11:13:40 -05:00
020d1b17c5 feat: add ansi style reset codes (#16099)

# Description
This is so the styling can be set and reset without resetting the color.

NB: I don't have any domain or language knowledge here, so I am trying to
stick with existing patterns, but it might be good to find out why code
like `Style::new().bold().prefix()` was used instead of just the raw SGR
(Select Graphic Rendition) codes (e.g. `"\x1b[1m"`).

# User-Facing Changes

# Tests + Formatting

# After Submitting
2025-07-03 10:37:21 -05:00
118857aedc fix(metadata set): return error when both --datasource-filepath and -datasource-ls are used (#16049)
# Description

This PR improves the `metadata set` command by returning a clear error
when both `--datasource-filepath` and `--datasource-ls` flags are used
together. These flags are meant to be mutually exclusive, and previously
this conflicting usage was silently ignored.
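
A sketch of the now-rejected combination (the file path is arbitrary):

```nushell
# combining both flags now produces an error instead of being silently ignored
ls | metadata set --datasource-filepath foo.csv --datasource-ls
```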

# User-Facing Changes

* Users will now see an error message if they use both
`--datasource-filepath` and `--datasource-ls` together in `metadata
set`.

# Tests + Formatting

* [x] Added test at
`crates/nu-command/tests/commands/debug/metadata_set.rs` to verify the
error behavior.
* [x] Ran `cargo fmt --all -- --check`
* [x] Ran `cargo clippy --workspace -- -D warnings -D
clippy::unwrap_used`
* [x] Ran `cargo test --workspace`


# After Submitting

N/A
2025-07-02 19:40:34 +02:00
a340e965e8 refactor(nu-command/parse)!: Return null for unmatched capture groups, rather than empty string (#16094)
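A sketch of the behavior change in the title (the regex and input are illustrative):

```nushell
"foo" | parse --regex '(?<a>foo)(?<b>bar)?'
# the unmatched capture group `b` is now null rather than an empty string
```
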
Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-07-02 20:32:52 +03:00
25a5e8d8e8 Allow enabling deprecated experimental options (#16096)

# Description

This PR changes the behavior of #16028 to allow enabling experimental
options even if they are marked as deprecated.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2025-07-02 16:46:53 +02:00
2fe84bd197 Forward experimental options in toolkit run (#16095)
# Description
I use `toolkit run` to test PRs or my own code. Passing experimental
options to it makes this nicer if you're trying to test that out.

# User-Facing Changes


You can pass `--experimental-options` to `toolkit run`.

# Tests + Formatting


- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
2025-07-02 14:20:11 +02:00
c95c1e845c perf: reorder cell-path member accesses to avoid clones (#15682)
Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
Co-authored-by: Piepmatz <git+github@cptpiepmatz.de>
2025-07-02 12:55:46 +02:00
105ec0c89f Allow dashes in experimental option identifiers (#16093)
# Description

In #16028 I also added a test to check that identifiers are valid to
ensure that we have consistency there. But I only checked for
alphanumeric strings as identifiers. It doesn't allow underscores or
dashes. @Bahex used a dash to separate words in his PR #15682, so I
expanded the test to allow that.

# User-Facing Changes

None.

# Tests + Formatting

The `assert_identifiers_are_valid` now allows dashes.

# After Submitting

The tests in #15682 should work then.
2025-07-02 13:31:01 +03:00
2c1b787db5 build(deps): bump indexmap from 2.9.0 to 2.10.0 (#16087) 2025-07-02 07:23:08 +00:00
fdb677e932 build(deps): bump quick-xml from 0.37.1 to 0.37.5 (#16086) 2025-07-02 07:22:37 +00:00
a18ff1d3a2 build(deps): bump crate-ci/typos from 1.33.1 to 1.34.0 (#16088)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.33.1 to
1.34.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.34.0</h2>
<h2>[1.34.0] - 2025-06-30</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1309">June
2025</a> changes</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.34.0] - 2025-06-30</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1309">June
2025</a> changes</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="392b78fe18"><code>392b78f</code></a>
chore: Release</li>
<li><a
href="34b60f1f88"><code>34b60f1</code></a>
chore: Release</li>
<li><a
href="8b9670a614"><code>8b9670a</code></a>
docs: Update changelog</li>
<li><a
href="a6e61180eb"><code>a6e6118</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1332">#1332</a>
from epage/juune</li>
<li><a
href="92f481e38a"><code>92f481e</code></a>
feat(dict): June 2025 updates</li>
<li><a
href="fb1f645959"><code>fb1f645</code></a>
chore(deps): Update Rust Stable to v1.88 (<a
href="https://redirect.github.com/crate-ci/typos/issues/1330">#1330</a>)</li>
<li><a
href="ebc6aac34e"><code>ebc6aac</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1327">#1327</a>
from not-my-profile/fix-typo-in-error</li>
<li><a
href="e359d71a7f"><code>e359d71</code></a>
fix(cli): Correct config field reference in error message</li>
<li><a
href="022bdbe8ce"><code>022bdbe</code></a>
chore(ci): Update from windows-2019</li>
<li><a
href="ed74f4ebbb"><code>ed74f4e</code></a>
chore(ci): Update from windows-2019</li>
<li>Additional commits viewable in <a
href="https://github.com/crate-ci/typos/compare/v1.33.1...v1.34.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.33.1&new-version=1.34.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-02 14:24:45 +08:00
a86a0dd16e Add infrastructure for experimental options (#16028)
Co-authored-by: Bahex <Bahex@users.noreply.github.com>
2025-07-01 18:36:51 +02:00
f4136aa3f4 Add pipeline span to metadata (#16014)
# Description

This PR makes the span of a pipeline accessible through `metadata`,
meaning it's possible to get the span of a pipeline without collecting
it.

Examples:
```nushell
ls | metadata
# => ╭────────┬────────────────────╮
# => │        │ ╭───────┬────────╮ │
# => │ span   │ │ start │ 170218 │ │
# => │        │ │ end   │ 170220 │ │
# => │        │ ╰───────┴────────╯ │
# => │ source │ ls                 │
# => ╰────────┴────────────────────╯
```

```nushell
ls | metadata access {|meta|
  error make {msg: "error", label: {text: "here", span: $meta.span}}
}
# => Error:   × error
# =>    ╭─[entry #7:1:1]
# =>  1 │ ls | metadata access {|meta|
# =>    · ─┬
# =>    ·  ╰── here
# =>  2 │   error make {msg: "error", label: {text: "here", span: $meta.span}}
# =>    ╰────
```

Here's an example that wouldn't be possible before, since you would have
to use `metadata $in` to get the span, collecting the (infinite) stream

```nushell
generate {|x=0| {out: 0, next: 0} } | metadata access {|meta|
  # do whatever with stream
  error make {msg: "error", label: {text: "here", span: $meta.span}}
}
# => Error:   × error
# =>    ╭─[entry #16:1:1]
# =>  1 │ generate {|x=0| {out: 0, next: 0} } | metadata access {|meta|
# =>    · ────┬───
# =>    ·     ╰── here
# =>  2 │   # do whatever with stream
# =>    ╰────
```

I haven't done the tests or anything yet since I'm not sure how we feel
about having this as part of the normal metadata, rather than a new
command like `metadata span` or something. We could also have a
`metadata access` like functionality for that with an optional closure
argument potentially.

# User-Facing Changes

* The span of a pipeline is now available through `metadata` and
`metadata access` without collecting a stream.

# Tests + Formatting

TODO

# After Submitting

N/A
2025-06-30 23:17:43 +02:00
082e8d0de8 update rust version 1.86.0 (#16077)
# Description

This PR updates nushell to use rust version 1.86.0
2025-06-30 15:28:38 +02:00
9da0f41ebb Fix easy clippy lints from latest stable (#16053)
1.88.0 was released today, clippy now lints (machine-applicable)
against:
- format strings with empty braces that could be inlined
  - easy win
- `manual_abs_diff`
- returning of a stored result of the last expression.
  - this can be somewhat contentious but touched only a few places
2025-06-29 17:37:17 +02:00
372d576846 fix(std/help): add debug -v to string default parameters (#16063)
# Description
Added `debug -v` in case the default parameter is a string, so that it
will not be printed literally:
- Before
```nu
  --char: <string> (default:  )
```
```nu
  --char: <string> (default:
)
```
```nu
  --char: <string> (default: abc)
```
- After
```nu
  --char: <string> (default: " ")
```
```nu
  --char: <string> (default: "\n")
```
```nu
  --char: <string> (default: "abc")
```
Other types like `int` remain unaffected.
# User-Facing Changes

# Tests + Formatting

# After Submitting
2025-06-29 17:35:25 +02:00
c795f16143 If save-ing with non-existing parent dir, return directory_not_found (#15961) 2025-06-29 17:15:15 +02:00
a4a3c514ba Bump strip-ansi-escapes to deduplicate vte (#16054)
Updating here deduplicates `vte` which is also depended on by `ansitok`
from the `tabled`/zhiburt-cinematic-universe
2025-06-26 23:31:07 +02:00
5478ec44bb to <format>: preserve round float numbers' type (#16016)
- fixes #16011

# Description
`Display` implementation for `f64` omits the decimal part for round
numbers, and by using it we did the same.
This affected:
- conversions to delimited formats: `csv`, `tsv`
- textual formats: `html`, `md`, `text`
- pretty printed `json` (`--raw` was unaffected)
- how single float values are displayed in the REPL

> [!TIP]
> This PR fixes our existing json pretty printing implementation.
> We can likely switch to using serde_json's impl using its
PrettyFormatter which allows arbitrary indent strings.

# User-Facing Changes
- Round trips through `csv`, `tsv`, and `json` preserve the type of
round floats (see the sketch after this list).
- It's always clear whether a number is an integer or a float in the
REPL
  ```nushell
  4 / 2
  # => 2  # before: is this an int or a float?

  4 / 2
  # => 2.0  # after: clearly a float
  ``` 
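
A minimal sketch of the round-trip point above (values illustrative):

```nushell
[[x]; [2.0]] | to csv | from csv | get x.0 | describe
# => float   (before this change the round float came back as an int)
```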

# Tests + Formatting
Adjusted tests for the new behavior.

- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
N/A

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-06-26 15:15:19 -05:00
6902bbe547 Add tango folder to .gitignore (#16052)
This is the folder used by `toolkit.nu` when invoking the tango commands
2025-06-26 21:43:07 +02:00
4e5da8cd91 default config: add note for figuring out datetime escape sequences (#16051)
# Description

There was no hint as to what datetime escape sequences are supported,
previously. I had to look into the source code to figure this out, which is
not great UX.

# User-Facing Changes

# Tests + Formatting


# After Submitting


---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2025-06-26 21:42:06 +02:00
d248451428 Update which from 7.0.3 to 8.0.0 (#16045)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
This simply updates the `which` dependency from 7.0.3 to 8.0.0, with no
code changes. See
https://github.com/harryfei/which-rs/releases/tag/8.0.0 for release
notes.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

N/A

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Tested with `cargo test --workspace` and `cargo run -- -c "use
toolkit.nu; toolkit test stdlib"`.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

N/A
2025-06-25 19:24:31 -05:00
3e758e899f update nushell to latest reedline e4221b9 (#16044)
# Description

This PR just updates nushell to the latest reedline commit e4221b9 so we
can dogfood the most recent changes.
2025-06-25 23:49:26 +02:00
f69a812055 fix(hooks): updating $env.config now correctly updates config state. (#16021)
- fixes #14946

- related #15227
- > [When I run nushell with the hook, the hook itself works as
expected, correctly detects system theme and changes
$env.config.color_config. However, it seems that the change to
$env.config.color_config is not propagated outside the
hook](https://github.com/nushell/nushell/issues/15227#issuecomment-2695287318)
- > [But it suffers from the same problem - modifications made to the
$env.config variable are not visible outside of the hook (which I'm not
sure if is correct behavior or
bug).](https://github.com/nushell/nushell/issues/15227#issuecomment-2695741542)
- > [I also managed to get it working with def --env, but there was one
more issue, I had to change $env.config.hooks.pre_prompt = [{
switch_theme }] into $env.config.hooks.pre_execution = ([ switch_theme
])](https://github.com/nushell/nushell/issues/15227#issuecomment-2704537565)
(having to use a string hook rather than a closure)

- related #11082
  > Might be possible to solve or at least mitigate using a similar method

# Description

Recently realized that changes made to `$env.config` in closure hooks
don't take effect whereas string hooks don't have that problem.

After some investigation:
- Hooks' environment was not preserved prior to #5982 >
[2309601](2309601dd4/crates/nu-cli/src/repl.rs (L823-L840))
- `redirect_env` which properly updates the config state was implemented
afterwards in #6355 >
[ea8b0e8](ea8b0e8a1d/crates/nu-engine/src/eval.rs (L174-L190))

Simply using `nu_engine::eval::redirect_env` for the environment update
was enough to fix the issue.

# User-Facing Changes
Hooks can update `$env.config` and the configuration change will work as
expected.
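
For example, a closure hook along these lines (the color value is illustrative) now takes effect for the rest of the session:

```nushell
$env.config.hooks.pre_prompt = [{||
    # with this fix, the assignment below is visible outside the hook
    $env.config.color_config.string = 'red'
}]
```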

# Tests + Formatting

- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
N/A

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
2025-06-25 23:22:43 +02:00
6fba4b409e Add backtick code formatting to help (#15892)
# Description
Adds formatting for code in backticks in `help` output. If it's possible
to highlight syntax (`nu-highlight` is available and there's no invalid
syntax) then it's highlighted. If the syntax is invalid or not an
internal command, then it's dimmed and italicized, like some of the
output from `std/help`. If `use_ansi_coloring` is `false`, then we leave
the backticks alone. Here's a couple examples:


![image](https://github.com/user-attachments/assets/57eed1dd-b38c-48ef-92c6-3f805392487c)


![image](https://github.com/user-attachments/assets/a0efa0d7-fc11-4702-973b-a0b448c383e0)

(note on this one: usually we can highlight partial commands, like `get`
in the `select` help page which is invalid according to `nu-check` but
is still properly highlighted, however `where` is special cased and just
typing `where` with no row condition is highlighted with the garbage
style so `where` alone isn't highlighted here)

![image](https://github.com/user-attachments/assets/28c110c9-16c4-4890-bc74-6de0f2e6d1b8)

here's the `where` page with `$env.config.use_ansi_coloring = false`:

![image](https://github.com/user-attachments/assets/57871cc8-d509-4719-9dd4-e6f24f9d891c)


Technically, some syntax is valid but isn't really "Nushell code". For
example, the `select` help page has a line that says "Select just the
\`name\` column". If you just type `name` in the REPL, Nushell treats it
as an external command, but for the purposes of highlighted we actually
want this to fall back to the generic dimmed/italic style. This is
accomplished by temporarily setting the `shape_external` and
`shape_externalarg` color config to the generic/fallback style, and then
restoring the color config after highlighting. This is a bit hack-ish
but it seems to work pretty well.


# User-Facing Changes

- `help` command now supports code backtick formatting. Code will be
highlighted using `nu-highlight` if possible, otherwise it will fall
back to a generic format.
- Adds `--reject-garbage` flag to `nu-highlight` which will return an
error on invalid syntax (which would otherwise be highlighted with
`$env.config.color_config.shape_garbage`); see the sketch below
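
A sketch of the new flag, using the bare `where` case mentioned above (exact error output is assumed):

```nushell
'where' | nu-highlight --reject-garbage
# => returns an error, since bare `where` parses as garbage;
#    without the flag it would be highlighted with shape_garbage instead
```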

# Tests + Formatting

Added tests for the regex. I don't think tests for the actual
highlighting are very necessary since the failure mode is graceful and
it would be difficult to meaningfully test.

# After Submitting

N/A

---------

Co-authored-by: Piepmatz <git+github@cptpiepmatz.de>
2025-06-25 21:26:52 +02:00
cb7ac9199d Stream lazy default output (#15955)
It was brought up in the Discord that `default { open -r foo.txt }`
results in a string instead of streaming output. This changes `default`
such that closures now stream when given simple input.

# Description
If the value isn't expected to be cached, `default` just runs the
closure without caching the value, which allows its output to be
streamed.
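
A rough sketch of the difference (the file name is a placeholder):

```nushell
# before: `default` collected the closure result into a single string
# after:  the closure's output is passed through as a stream
null | default { open -r foo.txt }
```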

# User-Facing Changes


# Tests + Formatting
👍 

# After Submitting
2025-06-24 19:17:33 -04:00
a6b8e2f95c Update the behaviour how paths are interpreted in start (#16033)
Closes: https://github.com/nushell/nushell/issues/13127

# Description

This PR updates the behaviour of `start` in the following ways:
Instead of joining the path with CWD, we expand the path.

Behaviour on `origin/main`:
```
nushell> ls ~/nushell-test
test.txt

nushell> start ~/nushell-test/test.txt
Error:   × Cannot find file or URL: ~/nushell-test/test.txt
...
help: Ensure the path or URL is correct and try again.
```

Behaviour in this PR:
```
nushell> ls ~/nushell-test
test.txt

nushell> start ~/nushell-test/test.txt
<opens text editor>
```

# User-Facing Changes

`start` now treats the input path differently. This is a breaking
change, I believe. Although I'm not sure how breaking it would be in the
perspective of the user.

# Tests + Formatting

I've manually tested this. The test suite for `start` is broken. And
even if I fix it, I'm not sure how to test it.
I'll need to override the default command list for `start` in the
sandbox for testing.

# After Submitting

I don't think the documentation needs to be updated.
2025-06-24 17:29:10 -05:00
0b202d55f0 Add only command to std-rfc/iter (#16015)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

This PR adds the `only` command to `std-rfc/iter`, which is a command I
wrote a while ago that I've found so useful that I think it could have a
place in the standard library. It acts similarly to `get 0`, but ensures
that the value actually exists, and there aren't additional values. I
find this most useful when chained with `where`, when you want to be
certain that no additional elements are accidentally selected when you
only mean to get a single element.

I'll copy the help page here for additional explanation:

> Get the only element of a list or table, ensuring it exists and there
> are no extra elements.
> 
> Similar to `first` with no arguments, but errors if there are no items or
> additional items when there should only be one item. This can help avoid
> issues when more rows than expected match some criteria.
> 
> This command is useful when chained with `where` to ensure that only
> one row meets the given condition.
> 
> If a cell path is provided as an argument, it will be accessed after
> the first element. For example, `only foo` is roughly equivalent to
> `get 0.foo`, with the guarantee that there are no additional elements.
> 
> Note that this command currently collects streams.

> Examples:
>  
> Get the only item in a list, ensuring it exists and there's no
> additional items
> ```nushell
> [5] | only
> # => 5
> ```
> 
> Get the `name` column of the only row in a table
> ```nushell
> [{name: foo, id: 5}] | only name
> # => foo
> ```
> 
> Get the modification time of the file named foo.txt
> ```nushell
> ls | where name == "foo.txt" | only modified
> ```

Here's some additional examples showing the errors:

![image](https://github.com/user-attachments/assets/d5e6f202-db52-42e4-a2ba-fb7c4f1d530a)


![image](https://github.com/user-attachments/assets/b080da2a-7aff-48a9-a523-55c638fdcce3)

Most of the time I chain this with a simple `where`, but here's a couple
other real world examples of how I've used this:

[With `parse`, which outputs a
table](https://git.ikl.sh/132ikl/dotfiles/src/branch/main/.scripts/manage-nu#L53):
```nushell
let commit = $selection | parse "{start}.g{commit}-{end}" | only commit
```

[Ensuring that only one row in a table has a name that ends with a
certain
suffix](https://git.ikl.sh/132ikl/dotfiles/src/branch/main/.scripts/btconnect):
```nushell
$devices | where ($chosen_name ends-with $it.name) | only
```


Unfortunately to get these nice errors I had to collect the stream (and
I think the errors are more useful for this). This should be able to be
mitigated with (something like) #16014.


Putting this in `std/iter` might be pushing it, but it seems *just*
close enough that I can't really justify putting it in a different/new
module.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->
* Adds the `only` command to `std-rfc/iter`, which can be used to ensure
that a table or list only has a single element.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Added a few tests for `only` including error cases

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
N/A

---------

Co-authored-by: Bahex <Bahex@users.noreply.github.com>
2025-06-23 16:29:58 -05:00
e88a6bff60 polars 0.49 upgrade (#16031)
# Description
Polars 0.49 upgrade

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-06-23 16:17:39 -05:00
a234e6ff51 feat(std/help): add is_const information (#16032)
# Description

I wanted to know if `version` is a const command and thought that it
would be in the "This command" section but it wasn't, so I added it.
```
→ help version
Display Nu version, and its build configuration.

Category: core

This command:
 Creates scope         | 
 Is built-in           | 
 Is const              | 
 Is a subcommand       | 
 Is a part of a plugin | 
 Is a custom command   | 
 Is a keyword          | 
```
2025-06-23 23:22:58 +03:00
ae0cf8780d fix(random dice): gracefully handle --sides 0 using NonZeroUsize (#16001) 2025-06-23 14:47:50 +02:00
680a2fa2aa Add loongarch64-unknown-linux-musl build target (#16020) 2025-06-23 06:22:25 +08:00
70277cc2ba fix(std/help): collect windows --help output for gui programs (#16019)
# Description
Adding to #15962, I have realized that there are Windows GUI programs
like `prismlauncher` or `firefox` that do accept the `--help` flag but
won't print to the terminal unless `collect`ed, so the output is now
collected on Windows.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-21 20:54:31 -05:00
574106bc03 allow update cells to work on single records (#16018)
# Description
The `update cells` command used to "work" on records by converting them
into single-row tables in the past, but strengthened input type checking
made it so that no longer worked. This commit introduces correct record
-> record functionality.

# User-Facing Changes
Users can now pipe records into `update cells`. An example inspired by a
conversation in the Discord:
```nushell
> version | update cells { split row ', ' } -c [features, installed_plugins]
╭────────────────────┬──────────────────────────────────────────╮
│ version            │ 0.105.2                                  │
│ major              │ 0                                        │
│ minor              │ 105                                      │
│ patch              │ 2                                        │
│ branch             │ update-cells-record                      │
│ commit_hash        │ 4f7e9aac62 │
│ build_os           │ macos-x86_64                             │
│ build_target       │ x86_64-apple-darwin                      │
│ rust_version       │ rustc 1.85.1 (4eb161250 2025-03-15)      │
│ rust_channel       │ 1.85.1-x86_64-apple-darwin               │
│ cargo_version      │ cargo 1.85.1 (d73d2caf9 2024-12-31)      │
│ build_time         │ 2025-06-21 12:02:06 -04:00               │
│ build_rust_channel │ debug                                    │
│ allocator          │ standard                                 │
│                    │ ╭───┬───────────────╮                    │
│ features           │ │ 0 │ default       │                    │
│                    │ │ 1 │ plugin        │                    │
│                    │ │ 2 │ rustls-tls    │                    │
│                    │ │ 3 │ sqlite        │                    │
│                    │ │ 4 │ trash-support │                    │
│                    │ ╰───┴───────────────╯                    │
│                    │ ╭───┬─────────────────╮                  │
│ installed_plugins  │ │ 0 │ formats 0.104.0 │                  │
│                    │ │ 1 │ polars 0.104.0  │                  │
│                    │ │ 2 │ query 0.104.0   │                  │
│                    │ │ 3 │ todotxt 0.3.0   │                  │
│                    │ ╰───┴─────────────────╯                  │
╰────────────────────┴──────────────────────────────────────────╯
```

# Tests + Formatting
👍. Let me know if more tests besides the new example are needed.

# After Submitting
2025-06-21 20:53:45 -05:00
2a8364d259 drop nth command supports spreadable arguments (#15897)
##  Improve `drop nth` command to support spreadable arguments

### Summary

This PR updates the `drop nth` command to support **spreadable
arguments** in a way consistent with other commands like `which`,
enabling:

```nu
[1 2 3 4 5] | drop nth 0 2 4
```

### What's Changed

* **Previously**: only a single index or a single range was accepted as
the first argument, with rest arguments ignored for ranges.

* **Now**: the command accepts any combination of:

  * Integers: to drop individual rows
  * Ranges: to drop slices of rows
  * Unbounded ranges: like `3..`, to drop from an index onward

Example:

```nu
[one two three four five six] | drop nth 0 2 4..5
# drops "one", "three", "five", and "six"
```

### Test 

Manual Test:

![nu-dron_n](https://github.com/user-attachments/assets/02f3988c-ac02-4245-967c-16a9604be406)


### Notes

As per feedback:

* We **only collect the list of indices** to drop, not the input stream.
* Unbounded ranges are handled by terminating the stream early.

Let me know if you'd like further changes

---------

Co-authored-by: Kumar Ujjawal <kumar.ujjawal@greenpista.com>
Co-authored-by: Kumar Ujjawal <kumarujjawal@Kumars-MacBook-Air.local>
2025-06-21 15:57:14 -04:00
760c9ef2e9 fix(completion): invalid prefix for external path argument with spaces (#15998)
Fixes the default behavior of #15790 

# Description

As for the mentioned carapace version: `cat ~"/Downloads/Obsidian
Vault/"`, the problem lies in the unexpanded home directory `~`. Either
we encourage users to manually expand that in
`$env.config.completions.external.completer` or open an issue on the
carapace project.

# User-Facing Changes

bug fix

# Tests + Formatting

Adjusted

# After Submitting
2025-06-20 21:33:01 -04:00
c3079a14d9 feat(table): add 'double' table mode (#16013)
# Description

Add 'double' table mode, that is similar to `compact_double` but with
left and right border lines. This is similar to how there exist both
`single` and `compact`, but there is no `double` to compliment
`compact_double`. Printing `[ { a: 1, b: 11 }, { a: 2, b:12 } ]` looks
like this:

```
╔═══╦═══╦════╗
║ # ║ a ║ b  ║
╠═══╬═══╬════╣
║ 0 ║ 1 ║ 11 ║
║ 1 ║ 2 ║ 12 ║
╚═══╩═══╩════╝
```

The implementation is mostly a one-to-one of #15672 and #15681.

# User-Facing Changes

New value `double` to set as `$env.config.table.mode`.
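
To try it out:

```nushell
$env.config.table.mode = 'double'
[{a: 1, b: 11}, {a: 2, b: 12}] | table
```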

# Tests + Formatting

Tests are added following the example of adding 'single' mode.

# After Submitting
2025-06-20 21:09:55 +02:00
4f7e9aac62 fix LS_COLORS fi=0 coloring (#16012)
# Description

fixes #16010

When `$env.LS_COLORS = 'fi=0'` and `$env.config.color_config.string =
'red'` were set, regular files without file extensions would be colored
red. Now they're colored based on the LS_COLORS definition which, in
this case, means use default colors.
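
The reported setup, for reference:

```nushell
$env.LS_COLORS = 'fi=0'
$env.config.color_config.string = 'red'
ls  # plain files with no extension now use the default color instead of red
```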

This is done by checking if a style was applied from ls_colors and if
none was applied, create a default nu_ansi_term style with
'Color::Default' for foreground and background.

### Before

![image](https://github.com/user-attachments/assets/ff245ee9-3299-4362-9df7-95613e8972ed)

### After

![image](https://github.com/user-attachments/assets/7c3f1178-6e6b-446d-b88c-1a5b0747345d)



# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

---------

Co-authored-by: Bahex <Bahex@users.noreply.github.com>
2025-06-20 08:09:59 -05:00
7ee8aa78cc perf: better scalability of get_columns (#15780)
# Description

Use hashset for existence checking.
Still needs a vector collection to keep the column order for tables.

# User-Facing Changes

Should be None
2025-06-20 05:07:23 -05:00
d9d022733f feat(std/help): Add --help for external-commands (#15962)
# Description
I have just discovered the `std/help` command and that it can use `man`
or other programs for externals. Coming from windows, I don't have `man`
so what I want is just to run `external_program --help` in most cases.
This pr adds that option, if you set `$env.NU_HELPER = "--help"`, it
will run the command you passed with `--help` added as the last
argument.


![image](https://github.com/user-attachments/assets/60d25dda-718b-4cb5-b540-808de000b221)

# User-Facing Changes
None

# Tests + Formatting


# After Submitting
2025-06-20 05:06:27 -05:00
1d032ce80c Support namespaces in query xml (#16008)
Refs #15992
Refs #14457 

# Description

This PR introduces a new switch for `query xml`, `--namespaces`,
and thus allows people to use namespace prefixes in the XPath query
to query namespaced XML.

Example:
```nushell
r#'
   <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <rdf:Description rdf:about=""
            xmlns:dc="http://purl.org/dc/elements/1.1/">
         <dc:title>Black-breasted buzzard_AEB_IMG_7158</dc:title>
      </rdf:Description>
   </rdf:RDF>
'# | query xml --namespaces {dublincore: "http://purl.org/dc/elements/1.1/"} "//dublincore:title/text()"
```

# User-Facing Changes

New switch added to `query xml`: `query xml --namespaces {....}`

# Tests + Formatting

Pass.

# After Submitting

IIRC the commands docs on the website are automatically generated, so
nothing to do here.
2025-06-19 17:58:26 -05:00
975a89269e respect color_config.header color for record key (#16006)
# Description

This PR fixes an oversight where the record key value was not being
colored as the color_config.header color when used with the `table`
command in some circumstances. It respected it with `table -e` but just
not `table`.

### Before

![image](https://github.com/user-attachments/assets/a41e609f-9b3a-415b-af90-037e6ee47318)

### After

![image](https://github.com/user-attachments/assets/c3afb293-ebb3-4cb3-8ee6-4f7e2e96723b)


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-19 11:22:18 -05:00
db5b6c790f Fix: missing installed_plugins in version (#16004)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

- related #15972

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

In #15972 I was very eager when removing forwarded features from `nu` to
`nu-cmd-lang`. By accident I also removed `nu-cmd-lang/plugin` too. This
removed `installed_plugins` from `version`. By adding the feature again,
it works again.
2025-06-19 07:48:20 -05:00
2bed202b82 Add backtrack named flag to parse (issue #15997) (#16000)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Addresses #15997

Adds a `--backtrack` or `-b` named flag to the `parse` command. Allows a
user to specify a max backtrack limit for fancy-regex other than the
default 1,000,000 limit.

Uses a RegexBuilder to add the manual config.
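
A usage sketch (the input file and pattern are illustrative):

```nushell
# raise the fancy-regex backtrack limit for a heavy pattern
open big.log | parse --regex --backtrack 1500000 '(?<level>\w+): (?<msg>.*)'
```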

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Adds a new named flag `backtrack` to the `parse` command. The flag is
optional and defaults to 1,000,000.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Added an example test to the parse command using `--backtrack 1500000`.
2025-06-19 06:42:30 -05:00
8a0f2ca9f9 Bump calamine to 0.28 (#16003) 2025-06-19 13:37:36 +02:00
24ab294cda Use CARGO_CFG_FEATURE to get feature list in version (#15972) 2025-06-19 12:58:37 +02:00
bfa95bbd24 Clearer help section for command attributes (#15999)
Refs
https://discord.com/channels/601130461678272522/615329862395101194/1385021428314800148

# Description

Clearer command attribute section (`This command:`).


![image](https://github.com/user-attachments/assets/7f26c015-1f00-4a86-a334-c87f7756ee82)


# User-Facing Changes

This is a cosmetic change to how `std/help` shows the command
attributes.

# Tests + Formatting

Pass.
2025-06-18 20:54:50 -05:00
3f700f03ad Generalize nu_protocol::format_shell_error (#15996) 2025-06-18 22:16:01 +02:00
f0e90a3733 Move nu_command::platform::ansi to nu_command::strings::ansi (#15995) 2025-06-18 21:51:16 +02:00
cde8a629c5 Restrict config.show_banner to valid options (#15985) 2025-06-18 10:49:40 +02:00
70aa7ad993 Disallow clippy::used_underscore_binding lint (#15988) 2025-06-18 10:19:57 +02:00
29b3512494 build(deps): bump shadow-rs from 1.1.1 to 1.2.0 (#15989)
Bumps [shadow-rs](https://github.com/baoyachi/shadow-rs) from 1.1.1 to
1.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/baoyachi/shadow-rs/releases">shadow-rs's
releases</a>.</em></p>
<blockquote>
<h2>v1.2.0</h2>
<h2>What's Changed</h2>
<ul>
<li>add cargo_metadata crate unit test by <a
href="https://github.com/baoyachi"><code>@​baoyachi</code></a> in <a
href="https://redirect.github.com/baoyachi/shadow-rs/pull/231">baoyachi/shadow-rs#231</a></li>
<li>Update cargo_metadata requirement from 0.19.1 to 0.20.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/baoyachi/shadow-rs/pull/229">baoyachi/shadow-rs#229</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/baoyachi/shadow-rs/compare/v1.1.1...v1.2.0">https://github.com/baoyachi/shadow-rs/compare/v1.1.1...v1.2.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f0d180ac92"><code>f0d180a</code></a>
Update Cargo.toml</li>
<li><a
href="d106a172ad"><code>d106a17</code></a>
Merge pull request <a
href="https://redirect.github.com/baoyachi/shadow-rs/issues/229">#229</a>
from baoyachi/dependabot/cargo/cargo_metadata-0.20.0</li>
<li><a
href="7861af1dd0"><code>7861af1</code></a>
Merge branch 'master' into dependabot/cargo/cargo_metadata-0.20.0</li>
<li><a
href="ab73c01cd1"><code>ab73c01</code></a>
Merge pull request <a
href="https://redirect.github.com/baoyachi/shadow-rs/issues/231">#231</a>
from baoyachi/cargo_metadata</li>
<li><a
href="ff1a1dcf27"><code>ff1a1dc</code></a>
fix cargo clippy check</li>
<li><a
href="f59bceaf92"><code>f59bcea</code></a>
add cargo_metadata crate unit test</li>
<li><a
href="5c5b556400"><code>5c5b556</code></a>
Update cargo_metadata requirement from 0.19.1 to 0.20.0</li>
<li>See full diff in <a
href="https://github.com/baoyachi/shadow-rs/compare/v1.1.1...v1.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=shadow-rs&package-manager=cargo&previous-version=1.1.1&new-version=1.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 11:34:35 +08:00
d961ea19cc Add automatic reminder for doc_config.nu (#15984)
Inspired by https://github.com/nushell/nushell/pull/15979 a small Github
actions bot that detects when you make a change to the `nu-protocol`
bits of the config and reminds to consider making a change to the
Nushell version in `doc_config.nu` as well.
2025-06-16 23:51:15 +02:00
3db9c81958 Search nested structures recursively in find command (#15850)
# Description

Instead of converting nested structures into strings and
pattern-matching the strings, the `find` command will recursively search
the nested structures for matches.

- fixes #15618 

# User-Facing Changes

Text in nested structures will now be highlighted as well.

Error values will always be passed on instead of being tested against
the search term.

There will be slight changes in match behavior, such as characters that
are part of the string representations of data structures no longer
matching all nested data structures.
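
A small sketch of the recursive matching (the data is illustrative):

```nushell
[{name: foo, meta: {tag: hello}}] | find hello
# the match is found by searching the nested record itself,
# and the nested text is highlighted in the result
```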
2025-06-16 15:29:41 -05:00
55240d98a5 Update config nu --doc to represent OSC 7 and 9;9 better (#15979)
- fixes #15975

# Description

This changes the `config nu --doc` output for OSC 7 and 9;9 to represent
better what happens on Windows machines.

This is the current behavior internally:

5be8717fe8/crates/nu-protocol/src/config/shell_integration.rs (L18-L27)

And with this PR the `config nu --doc` better reflects that behavior,
thanks to @fdncred for that idea.

# User-Facing Changes

None

# Tests + Formatting


- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting


---------

Co-authored-by: Bahex <Bahex@users.noreply.github.com>
2025-06-16 21:42:07 +02:00
fda181d566 Adjust std-rfc/clip deprecation window (#15981)
Follow-up to #15877. That PR was created before 0.105, but merged after
it was released. This PR adjusts the deprecation window from
0.105.0-0.107.0 to 0.106.0-0.108.0
2025-06-16 21:40:37 +02:00
2e484156e0 Use internal find.rs code for help --find (#15982)
# Description

Currently, `help --find` uses it's own code for looking for the keyword
in a string and highlighting it. This code duplicates a lot of the logic
found in the code of `find`.

This commit re-uses the code of `find` in `help` commands instead.

# User-Facing Changes

This should not affect the behavior of `help`.
2025-06-16 21:29:32 +02:00
52604f8b00 fix(std/log): Don't assume env variables are set (#15980)
# Description
Commands in `std/log` assume the `export-env` has been run and the
relevant environment variables are set.
However, when modules/libraries import `std/log` without defining their
own `export-env` block to run `std/log`'s, logging commands will fail at
runtime.

While it's on the author of the modules to include `export-env { use
std/log [] }` in their modules, this is a very simple issue to solve and
would make the user experience smoother.

# User-Facing Changes
`std/log` commands work without problems when their env vars are not set.
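
So a module or script can now do this without its own `export-env` block (a minimal sketch):

```nushell
use std/log
log info "hello"  # no longer fails when the log-related env vars are unset
```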

---------

Co-authored-by: Bahex <17417311+Bahex@users.noreply.github.com>
Co-authored-by: 132ikl <132@ikl.sh>
2025-06-16 12:31:17 -04:00
2fed1f5967 Update Nu for release and nightly workflow (#15969)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

1. Upgrade Nushell to 0.105.1 for release and nightly workflow
2. Use `hustcer/setup-nu` Action for `windows-11-arm` runner to simplify
the workflow

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

None

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Looks fine here:
https://github.com/hustcer/nushell/actions/runs/15657383788/job/44110173668#step:7:1357
2025-06-15 22:25:18 +08:00
5be8717fe8 Add full feature as an alternative to --all-features (#15971)
- closes #15967 

# Description


In 0.105 we introduced the feature `rustls-tls` which is enabled by
default and uses `rustls` instead of `openssl` on linux machines. Since
both `native-tls` and `rustls-tls` cannot be enabled at the same time,
this broke the `--all-features` flag. To provide an easy alternative, I
introduced the `full` feature here.

# User-Facing Changes

Instead of `cargo install nu --all-features`, you now can do `cargo
install nu --features full`.

# Tests + Formatting


No new tests, this is just a feature collection.
2025-06-15 13:49:58 +02:00
091d14f085 Fix table --expand case with wrapping of emoji (#15948)
close #15940
2025-06-14 18:39:39 -05:00
4c19242c0d feat(gstat): add state entry (like "Clean", "Merge", "Rebase", etc.) (#15965)
# Description

This PR adds a new `state` key to the output of `gstat` that shows the
current repo state, like "Clean", "Merge", "Rebase", etc. The full
list of possible values can be seen
[here](https://docs.rs/git2/latest/git2/enum.RepositoryState.html).

This information is somewhat useful when shown in prompt. Not often
needed, but sometimes really useful.

# User-Facing Changes

New key added to `gstat` output. I don't think it should cause issues to
`gstat` users.

# Tests + Formatting

I couldn't find any tests for `nu_plugin_gstat`.

# After Submitting

I couldn't find any documentation about the output of `gstat`, so I
don't think there is anything to be done here either.
2025-06-14 07:50:29 -05:00
3df0177ba5 feat: use get request by default, post if payload (#15862)
Hello, this PR resolves the second request of the issue
https://github.com/nushell/nushell/issues/10957, which involves using a
default verb based on the request. If a URL is provided, the command
will default to GET, and if data is provided, it will default to POST.
This means that the following pairs of commands are equivalent:

```
http --content-type application/json http://localhost:8000 {a:1}
http post --content-type application/json http://localhost:8000 {a:1}
```
```
http http://localhost:8000 "username"
http post http://localhost:8000 "username"
```
```
http http://localhost:8000
http get http://localhost:8000
```

The `http` command now accepts all flags of the `post` and `get`
commands. It will still display the help message if no subcommand is
provided, and the description has been updated accordingly. The logic in
the `http` command is minimal to delegate error management
responsibilities to the specific `run_get` and `run_post` functions.
2025-06-14 15:22:37 +08:00
f7888fce83 fix stor insert/delete collision (#15838)
# Description

Based on some testing in
[Discord](https://discord.com/channels/601130461678272522/1349836000804995196/1353138803640111135)
we were able to find that `insert` and `delete` happening at the same
time caused problems in the `stor` command. So, I added `conn.is_busy()`
with a sleep to try and avoid that problem.


![image](https://github.com/user-attachments/assets/e01bccab-0aaa-40ab-b0bf-25e3c72aa037)

/cc @NotTheDr01ds @132ikl 

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-13 16:29:07 -05:00
cf1a53143c feat(ansi): use _ in short name and rst -> reset (#15907)
# Description
I've noticed that unlike everything else in nushell the output of `ansi
--list` has a column named `short name` instead of `short_name`, so I
changed it. While I was at it, I also added a shortname `rst` to `reset`
since it is often used.

# User-Facing Changes
Changed the column name of `ansi --list` from `short name` to
`short_name`
2025-06-13 16:24:40 -05:00
28a94048c5 feat(format number): add --no-prefix flag (#15960)
# Description
I have added a `--no-prefix` flag to the `format number` command to not
include the `0b`, `0x` and `0o` prefixes in the output. Also, I've
changed the order in which the formats are displayed to one I think makes
it easier to read, with the upper- and lowercase alternatives next to
each other.


![image](https://github.com/user-attachments/assets/cd50631d-1b27-40d4-84d9-f2ac125586d4)
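
For reference, the same prefix distinction in plain Rust formatting
(illustration only, unrelated to the command's implementation): the alternate
`#` flag adds the `0x`/`0o`/`0b` prefixes, and omitting it drops them.

```rust
fn main() {
    let n = 42u32;
    println!("{n:#x} {n:#o} {n:#b}"); // 0x2a 0o52 0b101010
    println!("{n:x} {n:o} {n:b}");    // 2a 52 101010
}
```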

# User-Facing Changes
The formatting of floats previously did not include prefixes while
integers did. Now prefixes are on by default for both, while including
the new flag removes them. Changing the order of the record shouldn't
have any effect on previous code.

# Tests + Formatting
I have added an additional example that tests this behavior.

# After Submitting

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2025-06-13 16:13:26 -05:00
fb691c0da5 Allow polars schema --datatype-list to be used without pipeline input (#15964)
# Description
Fixes the issue of listing allowed datatypes when not being used with
dataframe pipeline input.

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-06-13 12:38:50 -07:00
7972aea530 Make polars last consistent with polars first (#15963)
# Description
`polars last` will only return one row by default making it consistent
with `polars first`

# User-Facing Changes
- `polars last` will only return one row by default making it consistent
with `polars first`

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-06-13 12:35:26 -07:00
aa710eeb9a Add groupby support for polars last (#15953)
# Description
Allows `polars last` to be used with group-by
```nu
> ❯ : [[a b c d]; [1 0.5 true Apple] [2 0.5 true Orange] [2 4 true Apple] [3 10 false Apple] [4 13 false Banana] [5 14 true Banana]] | polars into-df -s {a: u8, b: f32, c: bool, d: str} | polars group-by d | polars last | polars sort-by [a] | polars collect
╭───┬────────┬───┬───────┬───────╮
│ # │   d    │ a │   b   │   c   │
├───┼────────┼───┼───────┼───────┤
│ 0 │ Orange │ 2 │  0.50 │ true  │
│ 1 │ Apple  │ 3 │ 10.00 │ false │
│ 2 │ Banana │ 5 │ 14.00 │ true  │
╰───┴────────┴───┴───────┴───────╯
```

# User-Facing Changes
- `polars last` can now be used with group-by expressions

Co-authored-by: Jack Wright <jack.wright@nike.com>
2025-06-13 12:10:29 -07:00
91e843a6d4 add like, not-like to help operators (#15959)
# Description

This PR adds `like` and `not-like` to the `help operators` command. Now
it at least lists them. I wasn't sure if I should say `=~ or like` so I
just separated them with a comma.

![image](https://github.com/user-attachments/assets/1165d900-80a2-4633-9b75-109fcb617c75)


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-13 08:59:37 -05:00
ebcb26f9d5 Promote clip from std-rfc to std (#15877)
# Description
Promotes the clip module from `std-rfc` to `std`. Whether we want to
promote other modules as well (probably?) is up for discussion but I
thought I would get the ball rolling with this one.

# User-Facing Changes
* The `clip` module has been promoted from `std-rfc` to `std`. Using the
`std-rfc` version of clip modules will give a deprecation warning
instructing you to switch to the `std` version.

# Tests + Formatting
N/A

# After Submitting
N/A
2025-06-13 07:26:48 +08:00
f8b0af70ff Don't make unquoted file/dir paths absolute (#15878)
# Description

Closes #15848. Currently, we expand unquoted strings to absolute paths
if they are of type `path` or `directory`. This PR makes this no longer
happen. `~`, `.`, and `..+` are still expanded, but a path like
`.../foo/bar/..` will only be turned into `../../foo`, rather than a
full absolute path.

This is mostly so that paths don't get modified before being sent to
known external commands (as in the linked issue). But also, it seems
unnecessary to make all unquoted paths absolute.

After feedback from @132ikl, this PR also makes it so that unquoted
paths are expanded at parse time, so that it matches the runtime
behavior. Previously, `path` expressions were turned into strings
verbatim, while `directory` expressions were treated as not being const.

API change: `nu_path::expansions::expand_path` is now exposed as
`nu_path::expand_path`.

# User-Facing Changes

This has the potential to silently break a lot of scripts. For example,
if someone has a command that expects an already-expanded absolute path,
changes the current working directory, and then passes the path
somewhere, they will now need to use `path expand` to expand the path
themselves before changing the current working directory.

# Tests + Formatting

Just added one test to make sure unquoted `path` arguments aren't made
absolute.

# After Submitting

This is a breaking change, so will need to be mentioned in the release
notes.
2025-06-13 07:26:01 +08:00
12465193a4 cli: Use latest specified flag value when repeated (#15919)
# Description

This PR makes the last specified CLI arguments take precedence over the
earlier ones.

Existing command line tools that align with the new behaviour include:
- `neovim`: `nvim -u a.lua -u b.lua` will use `b.lua`
- `ripgrep`: you can have `--smart-case` in your user config but
override it later with `--case-sensitive` or `--ignore-case` (not
exactly the same flag override as the one I'm talking about but I think
it's still a valid example of later flags taking precedence over the
first ones)

I think a flag defined last can be considered an override. This allows
having a `nu` alias that includes some default config (`alias nu="nu
--config something.nu"`) while still being able to override that default
config, as if using `nu` normally.
 
## Example

```sh
nu --config config1.nu --config config2.nu -c '$nu.config-path'
```
The current behavior would print `config1.nu`, and the new one would
print `config2.nu`

## Implementation

Just `.rev()` the iterator to search for arguments starting from the end
of the list. To support that I had to modify the return type of
`named_iter` (I couldn't find a more generic way than
`DoubleEndedIterator`).
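
A minimal sketch of the idea (hypothetical types, not the actual nushell
code): scanning the named arguments from the end means the last occurrence of
a repeated flag wins.

```rust
fn last_flag_value<'a>(args: &'a [(String, Option<String>)], name: &str) -> Option<&'a str> {
    args.iter()
        .rev() // DoubleEndedIterator: walk the arguments from the end
        .find(|(flag, _)| flag.as_str() == name)
        .and_then(|(_, value)| value.as_deref())
}

fn main() {
    let args = vec![
        ("config".to_string(), Some("config1.nu".to_string())),
        ("config".to_string(), Some("config2.nu".to_string())),
    ];
    // The later `--config` wins, matching the example above.
    assert_eq!(last_flag_value(&args, "config"), Some("config2.nu"));
}
```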

# User-Facing Changes

- Users passing repeated flags and relying on nushell using the first
value will experience breakage. Given that right now there's no point in
passing a flag multiple times, I guess not many users will be affected

# Tests + Formatting

I added a test that checks the new behavior with `--config` and
`--env-config`. I'm happy to add more cases if needed

# After Submitting
2025-06-13 07:23:38 +08:00
bd3930d00d Better error on spawn failure caused by null bytes (#15911)
# Description

When attempting to pass a null byte in a commandline argument, Nu
currently fails with:

```
> ^echo (char -i 0)
Error: nu:🐚:io::invalid_input

  × I/O error
  ╰─▶   × Could not spawn foreground child

   ╭────
 1 │ crates/nu-command/src/system/run_external.rs:284:17
   · ─────────────────────────┬─────────────────────────
   ·                          ╰── Invalid input parameter
   ╰────
```

This does not explain which input parameter is invalid, or why. Since Nu
does not typically seem to escape null bytes when printing values
containing them, this can make it a bit tricky to track down the
problem.

After this change, it fails with:

```
> ^echo (char -i 0)
Error: nu:🐚:io::invalid_input

  × I/O error
  ╰─▶   × Could not spawn foreground child: nul byte found in provided data

   ╭────
 1 │ crates/nu-command/src/system/run_external.rs:282:17
   · ─────────────────────────┬─────────────────────────
   ·                          ╰── Invalid input parameter
   ╰────

```

which is more useful. This could be improved further but this is niche
enough that it's probably not necessary.

This might make some other errors unnecessarily verbose but seems like
the better default. I did check that attempting to execute a
non-executable file still has a reasonable error: the error message for
that failure is not affected by this change.

It is still an "internal" error (referencing the Nu code triggering it,
not the user's input) because the `call.head` span available to this
code is for the command, not its arguments. Using it would result in

```
  × I/O error
  ╰─▶   × Could not spawn foreground child: nul byte found in provided data

   ╭─[entry #1:1:2]
 1 │ ^echo (char -i 0)
   ·  ──┬─
   ·    ╰── Invalid input parameter
   ╰────
```

which is actively misleading because "echo" does not contain the nul
byte.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Haven't tried to write a test yet: it's tricky because the better error
message comes from the Rust stdlib (so a straightforward integration
test checking for the specific message would be brittle)...
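
For reference, a standalone snippet showing that the extra text comes straight
from the stdlib's `io::Error` when an argument contains a nul byte
(illustration only, not nushell code):

```rust
use std::process::Command;

fn main() {
    match Command::new("echo").arg("\0").spawn() {
        Ok(_) => println!("spawned (unexpected)"),
        // Prints something like:
        // "Could not spawn foreground child: nul byte found in provided data"
        Err(err) => println!("Could not spawn foreground child: {err}"),
    }
}
```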

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-13 07:22:37 +08:00
81e86c40e1 Try to make hide-env respects overlays (#15904)
# Description
Closes: #15755
I think it's a good feature. To achieve this, we need to get all hidden
envs (the logic is defined in `get_hidden_env_vars`) and then restore these
envs back to the stack.

# User-Facing Changes
### Before
```nushell
> $env.foo = 'bar'
> overlay new xxx
> hide-env foo
> overlay hide xxx
> $env.foo
Error: nu:🐚:column_not_found

  × Cannot find column 'foo'
   ╭─[entry #21:5:1]
 4 │ overlay hide xxx
 5 │ $env.foo
   · ────┬───┬
   ·     │   ╰── value originates here
   ·     ╰── cannot find column 'foo'
   ╰────
```

### After
```nushell
> $env.foo = 'bar'
> overlay new xxx
> hide-env foo
> overlay hide xxx
> $env.foo
bar
```

## Note
But it doesn't work if the example code is run as a script:
`nu -c "$env.foo = 'bar'; overlay new xxx; hide-env foo; overlay hide
xxx; $env.foo"`
still raises an error saying `foo` isn't found. That's because when the
script runs all at once, the envs in the stack don't get a chance to merge
back into `engine_state`; that merge only happens in the repl.

It introduces some inconsistency, but I think users use overlays mostly in
the repl, so it's good to have this feature first.

# Tests + Formatting
Added 2 tests

# After Submitting
NaN
2025-06-13 07:22:23 +08:00
2fe25d6299 nu-table: (table -e) Reuse NuRecordsValue::width in some cases (#15902)
Just removes a few width calculations for values which will be inserted
anyhow, so it should be just a bit faster (in the base case).
2025-06-13 07:22:10 +08:00
4aeede2dd5 nu-table: Remove safety-net width check (#15901)
I think we can be relatively confident that, at this check point, we build
a correct table; there is no point in endlessly rechecking it.
2025-06-13 07:22:02 +08:00
0e46ef9769 build(deps): bump which from 7.0.0 to 7.0.3 (#15937)
Bumps [which](https://github.com/harryfei/which-rs) from 7.0.0 to 7.0.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/harryfei/which-rs/releases">which's
releases</a>.</em></p>
<blockquote>
<h2>7.0.3</h2>
<ul>
<li>Update rustix to version 1.0. Congrats to rustix on this milestone,
and thanks <a href="https://github.com/mhils"><code>@​mhils</code></a>
for this contribution to which!</li>
</ul>
<h2>7.0.2</h2>
<ul>
<li>Don't return paths containing the single dot <code>.</code>
reference to the current directory, even if the original request was
given in terms of the current directory. Thanks <a
href="https://github.com/jakobhellermann"><code>@​jakobhellermann</code></a>
for this contribution!</li>
</ul>
<h2>7.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Switch to <code>env_home</code> crate by <a
href="https://github.com/micolous"><code>@​micolous</code></a> in <a
href="https://redirect.github.com/harryfei/which-rs/pull/105">harryfei/which-rs#105</a></li>
<li>fixes <a
href="https://redirect.github.com/harryfei/which-rs/issues/106">#106</a>,
bump patch version by <a
href="https://github.com/Xaeroxe"><code>@​Xaeroxe</code></a> in <a
href="https://redirect.github.com/harryfei/which-rs/pull/107">harryfei/which-rs#107</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/micolous"><code>@​micolous</code></a>
made their first contribution in <a
href="https://redirect.github.com/harryfei/which-rs/pull/105">harryfei/which-rs#105</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/harryfei/which-rs/compare/7.0.0...7.0.1">https://github.com/harryfei/which-rs/compare/7.0.0...7.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/harryfei/which-rs/blob/master/CHANGELOG.md">which's
changelog</a>.</em></p>
<blockquote>
<h2>7.0.3</h2>
<ul>
<li>Update rustix to version 1.0. Congrats to rustix on this milestone,
and thanks <a href="https://github.com/mhils"><code>@​mhils</code></a>
for this contribution to which!</li>
</ul>
<h2>7.0.2</h2>
<ul>
<li>Don't return paths containing the single dot <code>.</code>
reference to the current directory, even if the original request was
given in
terms of the current directory. Thanks <a
href="https://github.com/jakobhellermann"><code>@​jakobhellermann</code></a>
for this contribution!</li>
</ul>
<h2>7.0.1</h2>
<ul>
<li>Get user home directory from <code>env_home</code> instead of
<code>home</code>. Thanks <a
href="https://github.com/micolous"><code>@​micolous</code></a> for this
contribution!</li>
<li>If home directory is unavailable, do not expand the tilde to an
empty string. Leave it as is.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1d145deef8"><code>1d145de</code></a>
release version 7.0.3</li>
<li><a
href="f5e5292234"><code>f5e5292</code></a>
fix unrelated lint error</li>
<li><a
href="4dcefa6fe9"><code>4dcefa6</code></a>
bump rustix</li>
<li><a
href="bd868818bd"><code>bd86881</code></a>
bump version, add to changelog</li>
<li><a
href="cf37760ea1"><code>cf37760</code></a>
don't run relative dot test on macos</li>
<li><a
href="f2c4bd6e8b"><code>f2c4bd6</code></a>
update target to new name for wasm32-wasip1</li>
<li><a
href="87acc088c1"><code>87acc08</code></a>
When searching for <code>./script.sh</code>, don't return
<code>/path/to/./script.sh</code></li>
<li><a
href="68acf2c456"><code>68acf2c</code></a>
Fix changelog to link to GitHub profile</li>
<li><a
href="b6754b2a56"><code>b6754b2</code></a>
Update CHANGELOG.md</li>
<li><a
href="0c63719129"><code>0c63719</code></a>
fixes <a
href="https://redirect.github.com/harryfei/which-rs/issues/106">#106</a>,
bump patch version</li>
<li>Additional commits viewable in <a
href="https://github.com/harryfei/which-rs/compare/7.0.0...7.0.3">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=which&package-manager=cargo&previous-version=7.0.0&new-version=7.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 07:21:37 +08:00
962467fdfd build(deps): bump titlecase from 3.5.0 to 3.6.0 (#15936)
Bumps [titlecase](https://github.com/wezm/titlecase) from 3.5.0 to
3.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/wezm/titlecase/releases">titlecase's
releases</a>.</em></p>
<blockquote>
<h2>Version 3.6.0</h2>
<ul>
<li>Support hyphenated words by <a
href="https://github.com/carlocorradini"><code>@​carlocorradini</code></a>
in <a
href="https://redirect.github.com/wezm/titlecase/pull/37">#37</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/wezm/titlecase/compare/v3.5.0...v3.6.0">https://github.com/wezm/titlecase/compare/v3.5.0...v3.6.0</a></p>
<h2>Binaries</h2>
<ul>
<li><a
href="https://releases.wezm.net/titlecase/v3.6.0/titlecase-v3.6.0-amd64-unknown-freebsd.tar.gz">FreeBSD
13+ amd64</a></li>
<li><a
href="https://releases.wezm.net/titlecase/v3.6.0/titlecase-v3.6.0-x86_64-unknown-linux-musl.tar.gz">Linux
x86_64</a></li>
<li><a
href="https://releases.wezm.net/titlecase/v3.6.0/titlecase-v3.6.0-universal-apple-darwin.tar.gz">MacOS
Universal</a></li>
<li><a
href="https://releases.wezm.net/titlecase/v3.6.0/titlecase-v3.6.0-x86_64-pc-windows-msvc.zip">Windows
x86_64</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/wezm/titlecase/blob/master/Changelog.md">titlecase's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/wezm/titlecase/releases/tag/v3.6.0">3.6.0</a></h2>
<ul>
<li>Support hyphenated words <a
href="https://redirect.github.com/wezm/titlecase/pull/37">#37</a>.
Thanks <a
href="https://github.com/carlocorradini"><code>@​carlocorradini</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="83265b43ba"><code>83265b4</code></a>
Version 3.6.0</li>
<li><a
href="e49b32b262"><code>e49b32b</code></a>
Use 'contains' to check for internal characters</li>
<li><a
href="736be39991"><code>736be39</code></a>
feat: hyphen</li>
<li>See full diff in <a
href="https://github.com/wezm/titlecase/compare/v3.5.0...v3.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=titlecase&package-manager=cargo&previous-version=3.5.0&new-version=3.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 07:21:09 +08:00
d27232df6e build(deps): bump ansi-str from 0.8.0 to 0.9.0 (#15935)
Bumps [ansi-str](https://github.com/zhiburt/ansi-str) from 0.8.0 to
0.9.0.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/zhiburt/ansi-str/commits">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ansi-str&package-manager=cargo&previous-version=0.8.0&new-version=0.9.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 07:19:47 +08:00
d7cec2088a Fix docs typo referring to non-existant Value::CustomValue (#15954)
# Description

I was messing around with custom types and noticed `nu-protocol`
referring to a `Value::CustomValue` variant that doesn't exist. Fixed it
to say `Value::Custom` instead.

# User-Facing Changes

Documentation mentions the correct variant of `Value`

# Tests + Formatting

No new tests necessary

# After Submitting
2025-06-12 13:34:52 -05:00
22d1fdcdf6 Improve precision in parsing of filesize values (#15950)
- improves rounding error reported in #15851
- my ~~discussion~~ monologue on how filesizes are parsed currently:
#15944

# Description

The issue linked above reported rounding errors when converting MiB to
GiB, which are mainly caused by the parsing of the literal.

Nushell tries to convert all filesize values to bytes, but currently
does so in 2 steps:
- first converting it to the next smaller unit in `nu-parser` (so `MiB`
to `KiB`, in this case), and truncating to an `i64` here
- then converting that to bytes in `nu-engine`, again truncating to
`i64`

In the specific example above (`95307.27MiB`), this causes 419 bytes of
rounding error. By instead directly converting to bytes while parsing,
the value is accurate (truncating those 0.52 bytes, or 4.16 bits).
Rounding error in the conversion to GiB is also several orders of magnitude
lower.

(Note that I haven't thoroughly tested this, so I can't say with
confidence that all values would be parsed accurate to the byte.)
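
A standalone sketch of the difference (illustrative arithmetic only, not the
actual parser code):

```rust
fn main() {
    let value = 95307.27_f64; // the literal from the example, in MiB

    // Old two-step conversion: MiB -> KiB (truncated to i64), then KiB -> bytes.
    let kib = (value * 1024.0) as i64;
    let two_step_bytes = kib * 1024; // ~99_936_915_456 B, several hundred bytes short

    // Direct conversion while parsing: MiB -> bytes in one go.
    let direct_bytes = (value * 1024.0 * 1024.0) as i64; // 99_936_915_947 B

    println!("two-step: {two_step_bytes} B");
    println!("direct:   {direct_bytes} B");
}
```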

# User-Facing Changes

More accurate filesize values, and lower accumulated rounding error in
calculations.

# Tests + Formatting

new test: `parse_filesize` in `nu-parser` - verifies that `95307.27MiB`
is parsed correctly as `99_936_915_947B`

# After Submitting
2025-06-12 07:58:21 -05:00
ba59f71f20 bump to dev version 0.105.2 (#15952)
# Description

Bump nushell to development version.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2025-06-12 07:57:01 -05:00
2352548467 Use NUSHELL_PAT for winget publish (#15934)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
Use NUSHELL_PAT for winget publish
2025-06-11 06:48:54 +08:00
3efbda63b8 Try to fix winget publish error (#15933)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Try to fix winget publish error here:
https://github.com/nushell/nushell/actions/runs/15567694761/job/43835718254#step:2:208
2025-06-11 05:27:27 +08:00
364 changed files with 5741 additions and 2773 deletions

View File

@ -0,0 +1,25 @@
name: Comment on changes to the config
on:
  pull_request_target:
    paths:
      - 'crates/nu-protocol/src/config/**'
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - name: Check if there is already a bot comment
        uses: peter-evans/find-comment@v3
        id: fc
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: 'github-actions[bot]'
          body-includes: Hey, just a bot checking in!
      - name: Create comment if there is not
        if: steps.fc.outputs.comment-id == ''
        uses: peter-evans/create-or-update-comment@v4
        with:
          issue-number: ${{ github.event.pull_request.number }}
          body: |
            Hey, just a bot checking in! You edited files related to the configuration.
            If you changed any of the default values or added a new config option, don't forget to update the [`doc_config.nu`](https://github.com/nushell/nushell/blob/main/crates/nu-utils/src/default_files/doc_config.nu) which documents the options for our users including the defaults provided by the Rust implementation.
            If you didn't make a change here, you can just ignore me.

View File

@ -46,7 +46,7 @@ jobs:
uses: hustcer/setup-nu@v3 uses: hustcer/setup-nu@v3
if: github.repository == 'nushell/nightly' if: github.repository == 'nushell/nightly'
with: with:
version: 0.103.0 version: 0.105.1
# Synchronize the main branch of nightly repo with the main branch of Nushell official repo # Synchronize the main branch of nightly repo with the main branch of Nushell official repo
- name: Prepare for Nightly Release - name: Prepare for Nightly Release
@ -127,6 +127,7 @@ jobs:
- armv7-unknown-linux-musleabihf - armv7-unknown-linux-musleabihf
- riscv64gc-unknown-linux-gnu - riscv64gc-unknown-linux-gnu
- loongarch64-unknown-linux-gnu - loongarch64-unknown-linux-gnu
- loongarch64-unknown-linux-musl
include: include:
- target: aarch64-apple-darwin - target: aarch64-apple-darwin
os: macos-latest os: macos-latest
@ -152,6 +153,8 @@ jobs:
os: ubuntu-22.04 os: ubuntu-22.04
- target: loongarch64-unknown-linux-gnu - target: loongarch64-unknown-linux-gnu
os: ubuntu-22.04 os: ubuntu-22.04
- target: loongarch64-unknown-linux-musl
os: ubuntu-22.04
runs-on: ${{matrix.os}} runs-on: ${{matrix.os}}
steps: steps:
@ -179,36 +182,22 @@ jobs:
uses: actions-rust-lang/setup-rust-toolchain@v1 uses: actions-rust-lang/setup-rust-toolchain@v1
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135` # WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
with: with:
cache: false
rustflags: '' rustflags: ''
- name: Setup Nushell - name: Setup Nushell
uses: hustcer/setup-nu@v3 uses: hustcer/setup-nu@v3
if: ${{ matrix.os != 'windows-11-arm' }}
with: with:
version: 0.103.0 version: 0.105.1
- name: Release Nu Binary - name: Release Nu Binary
id: nu id: nu
if: ${{ matrix.os != 'windows-11-arm' }}
run: nu .github/workflows/release-pkg.nu run: nu .github/workflows/release-pkg.nu
env: env:
OS: ${{ matrix.os }} OS: ${{ matrix.os }}
REF: ${{ github.ref }} REF: ${{ github.ref }}
TARGET: ${{ matrix.target }} TARGET: ${{ matrix.target }}
- name: Build Nu for Windows ARM64
id: nu0
shell: pwsh
if: ${{ matrix.os == 'windows-11-arm' }}
run: |
$env:OS = 'windows'
$env:REF = '${{ github.ref }}'
$env:TARGET = '${{ matrix.target }}'
cargo build --release --all --target aarch64-pc-windows-msvc
cp ./target/${{ matrix.target }}/release/nu.exe .
./nu.exe -c 'version'
./nu.exe ${{github.workspace}}/.github/workflows/release-pkg.nu
- name: Create an Issue for Release Failure - name: Create an Issue for Release Failure
if: ${{ failure() }} if: ${{ failure() }}
uses: JasonEtco/create-an-issue@v2 uses: JasonEtco/create-an-issue@v2
@ -228,9 +217,7 @@ jobs:
prerelease: true prerelease: true
files: | files: |
${{ steps.nu.outputs.msi }} ${{ steps.nu.outputs.msi }}
${{ steps.nu0.outputs.msi }}
${{ steps.nu.outputs.archive }} ${{ steps.nu.outputs.archive }}
${{ steps.nu0.outputs.archive }}
tag_name: ${{ needs.prepare.outputs.nightly_tag }} tag_name: ${{ needs.prepare.outputs.nightly_tag }}
name: ${{ needs.prepare.outputs.build_date }}-${{ needs.prepare.outputs.nightly_tag }} name: ${{ needs.prepare.outputs.build_date }}-${{ needs.prepare.outputs.nightly_tag }}
env: env:
@ -276,7 +263,7 @@ jobs:
- name: Setup Nushell - name: Setup Nushell
uses: hustcer/setup-nu@v3 uses: hustcer/setup-nu@v3
with: with:
version: 0.103.0 version: 0.105.1
# Keep the last a few releases # Keep the last a few releases
- name: Delete Older Releases - name: Delete Older Releases

View File

@ -58,7 +58,7 @@ jobs:
- name: Setup Nushell - name: Setup Nushell
uses: hustcer/setup-nu@v3 uses: hustcer/setup-nu@v3
with: with:
version: nightly version: 0.105.1
- name: Release MSI Packages - name: Release MSI Packages
id: nu id: nu

View File

@ -99,6 +99,14 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
$env.CARGO_TARGET_LOONGARCH64_UNKNOWN_LINUX_GNU_LINKER = 'loongarch64-unknown-linux-gnu-gcc' $env.CARGO_TARGET_LOONGARCH64_UNKNOWN_LINUX_GNU_LINKER = 'loongarch64-unknown-linux-gnu-gcc'
cargo-build-nu cargo-build-nu
} }
'loongarch64-unknown-linux-musl' => {
print $"(ansi g)Downloading LoongArch64 musl cross-compilation toolchain...(ansi reset)"
aria2c -q https://github.com/LoongsonLab/oscomp-toolchains-for-oskernel/releases/download/loongarch64-linux-musl-cross-gcc-13.2.0/loongarch64-linux-musl-cross.tgz
tar -xf loongarch64-linux-musl-cross.tgz
$env.PATH = ($env.PATH | split row (char esep) | prepend $'($env.PWD)/loongarch64-linux-musl-cross/bin')
$env.CARGO_TARGET_LOONGARCH64_UNKNOWN_LINUX_MUSL_LINKER = "loongarch64-linux-musl-gcc"
cargo-build-nu
}
_ => { _ => {
# musl-tools to fix 'Failed to find tool. Is `musl-gcc` installed?' # musl-tools to fix 'Failed to find tool. Is `musl-gcc` installed?'
# Actually just for x86_64-unknown-linux-musl target # Actually just for x86_64-unknown-linux-musl target

View File

@ -35,6 +35,7 @@ jobs:
- armv7-unknown-linux-musleabihf - armv7-unknown-linux-musleabihf
- riscv64gc-unknown-linux-gnu - riscv64gc-unknown-linux-gnu
- loongarch64-unknown-linux-gnu - loongarch64-unknown-linux-gnu
- loongarch64-unknown-linux-musl
include: include:
- target: aarch64-apple-darwin - target: aarch64-apple-darwin
os: macos-latest os: macos-latest
@ -60,6 +61,8 @@ jobs:
os: ubuntu-22.04 os: ubuntu-22.04
- target: loongarch64-unknown-linux-gnu - target: loongarch64-unknown-linux-gnu
os: ubuntu-22.04 os: ubuntu-22.04
- target: loongarch64-unknown-linux-musl
os: ubuntu-22.04
runs-on: ${{matrix.os}} runs-on: ${{matrix.os}}
@ -90,32 +93,17 @@ jobs:
- name: Setup Nushell - name: Setup Nushell
uses: hustcer/setup-nu@v3 uses: hustcer/setup-nu@v3
if: ${{ matrix.os != 'windows-11-arm' }}
with: with:
version: 0.103.0 version: 0.105.1
- name: Release Nu Binary - name: Release Nu Binary
id: nu id: nu
if: ${{ matrix.os != 'windows-11-arm' }}
run: nu .github/workflows/release-pkg.nu run: nu .github/workflows/release-pkg.nu
env: env:
OS: ${{ matrix.os }} OS: ${{ matrix.os }}
REF: ${{ github.ref }} REF: ${{ github.ref }}
TARGET: ${{ matrix.target }} TARGET: ${{ matrix.target }}
- name: Build Nu for Windows ARM64
id: nu0
shell: pwsh
if: ${{ matrix.os == 'windows-11-arm' }}
run: |
$env:OS = 'windows'
$env:REF = '${{ github.ref }}'
$env:TARGET = '${{ matrix.target }}'
cargo build --release --all --target aarch64-pc-windows-msvc
cp ./target/${{ matrix.target }}/release/nu.exe .
./nu.exe -c 'version'
./nu.exe ${{github.workspace}}/.github/workflows/release-pkg.nu
# WARN: Don't upgrade this action due to the release per asset issue. # WARN: Don't upgrade this action due to the release per asset issue.
# See: https://github.com/softprops/action-gh-release/issues/445 # See: https://github.com/softprops/action-gh-release/issues/445
- name: Publish Archive - name: Publish Archive
@ -125,9 +113,7 @@ jobs:
draft: true draft: true
files: | files: |
${{ steps.nu.outputs.msi }} ${{ steps.nu.outputs.msi }}
${{ steps.nu0.outputs.msi }}
${{ steps.nu.outputs.archive }} ${{ steps.nu.outputs.archive }}
${{ steps.nu0.outputs.archive }}
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@ -10,4 +10,4 @@ jobs:
uses: actions/checkout@v4.1.7 uses: actions/checkout@v4.1.7
- name: Check spelling - name: Check spelling
uses: crate-ci/typos@v1.33.1 uses: crate-ci/typos@v1.34.0

View File

@ -10,6 +10,11 @@ on:
required: true required: true
type: string type: string
permissions:
contents: write
packages: write
pull-requests: write
jobs: jobs:
winget: winget:
@ -25,5 +30,5 @@ jobs:
installers-regex: 'msvc\.msi$' installers-regex: 'msvc\.msi$'
version: ${{ inputs.tag_name || github.event.release.tag_name }} version: ${{ inputs.tag_name || github.event.release.tag_name }}
release-tag: ${{ inputs.tag_name || github.event.release.tag_name }} release-tag: ${{ inputs.tag_name || github.event.release.tag_name }}
token: ${{ secrets.GITHUB_TOKEN }} token: ${{ secrets.NUSHELL_PAT }}
fork-user: nushell fork-user: nushell

6
.gitignore vendored
View File

@ -32,11 +32,17 @@ unstable_cargo_features.txt
# Helix configuration folder # Helix configuration folder
.helix/* .helix/*
.helix .helix
wix/bin/
wix/obj/
wix/nu/
# Coverage tools # Coverage tools
lcov.info lcov.info
tarpaulin-report.html tarpaulin-report.html
# benchmarking
/tango
# Visual Studio # Visual Studio
.vs/* .vs/*
*.rsproj *.rsproj

680
Cargo.lock generated

File diff suppressed because it is too large Load Diff

View File

@ -10,8 +10,8 @@ homepage = "https://www.nushell.sh"
license = "MIT" license = "MIT"
name = "nu" name = "nu"
repository = "https://github.com/nushell/nushell" repository = "https://github.com/nushell/nushell"
rust-version = "1.85.1" rust-version = "1.86.0"
version = "0.105.1" version = "0.106.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -24,36 +24,37 @@ pkg-fmt = "zip"
[workspace] [workspace]
members = [ members = [
"crates/nu_plugin_custom_values",
"crates/nu_plugin_example",
"crates/nu_plugin_formats",
"crates/nu_plugin_gstat",
"crates/nu_plugin_inc",
"crates/nu_plugin_polars",
"crates/nu_plugin_query",
"crates/nu_plugin_stress_internals",
"crates/nu-cli", "crates/nu-cli",
"crates/nu-engine",
"crates/nu-parser",
"crates/nu-system",
"crates/nu-cmd-base", "crates/nu-cmd-base",
"crates/nu-cmd-extra", "crates/nu-cmd-extra",
"crates/nu-cmd-lang", "crates/nu-cmd-lang",
"crates/nu-cmd-plugin", "crates/nu-cmd-plugin",
"crates/nu-command",
"crates/nu-color-config", "crates/nu-color-config",
"crates/nu-command",
"crates/nu-derive-value",
"crates/nu-engine",
"crates/nu-experimental",
"crates/nu-explore", "crates/nu-explore",
"crates/nu-json", "crates/nu-json",
"crates/nu-lsp", "crates/nu-lsp",
"crates/nu-pretty-hex", "crates/nu-parser",
"crates/nu-protocol",
"crates/nu-derive-value",
"crates/nu-plugin",
"crates/nu-plugin-core", "crates/nu-plugin-core",
"crates/nu-plugin-engine", "crates/nu-plugin-engine",
"crates/nu-plugin-protocol", "crates/nu-plugin-protocol",
"crates/nu-plugin-test-support", "crates/nu-plugin-test-support",
"crates/nu_plugin_inc", "crates/nu-plugin",
"crates/nu_plugin_gstat", "crates/nu-pretty-hex",
"crates/nu_plugin_example", "crates/nu-protocol",
"crates/nu_plugin_query",
"crates/nu_plugin_custom_values",
"crates/nu_plugin_formats",
"crates/nu_plugin_polars",
"crates/nu_plugin_stress_internals",
"crates/nu-std", "crates/nu-std",
"crates/nu-system",
"crates/nu-table", "crates/nu-table",
"crates/nu-term-grid", "crates/nu-term-grid",
"crates/nu-test-support", "crates/nu-test-support",
@ -63,7 +64,7 @@ members = [
[workspace.dependencies] [workspace.dependencies]
alphanumeric-sort = "1.5" alphanumeric-sort = "1.5"
ansi-str = "0.8" ansi-str = "0.9"
anyhow = "1.0.82" anyhow = "1.0.82"
base64 = "0.22.1" base64 = "0.22.1"
bracoxide = "0.1.6" bracoxide = "0.1.6"
@ -71,7 +72,7 @@ brotli = "7.0"
byteorder = "1.5" byteorder = "1.5"
bytes = "1" bytes = "1"
bytesize = "1.3.3" bytesize = "1.3.3"
calamine = "0.26" calamine = "0.28"
chardetng = "0.1.17" chardetng = "0.1.17"
chrono = { default-features = false, version = "0.4.34" } chrono = { default-features = false, version = "0.4.34" }
chrono-humanize = "0.2.3" chrono-humanize = "0.2.3"
@ -92,7 +93,7 @@ filesize = "0.2"
filetime = "0.2" filetime = "0.2"
heck = "0.5.0" heck = "0.5.0"
human-date-parser = "0.3.0" human-date-parser = "0.3.0"
indexmap = "2.9" indexmap = "2.10"
indicatif = "0.17" indicatif = "0.17"
interprocess = "2.2.0" interprocess = "2.2.0"
is_executable = "1.0" is_executable = "1.0"
@ -131,7 +132,7 @@ proc-macro-error2 = "2.0"
proc-macro2 = "1.0" proc-macro2 = "1.0"
procfs = "0.17.0" procfs = "0.17.0"
pwd = "1.3" pwd = "1.3"
quick-xml = "0.37.0" quick-xml = "0.37.5"
quickcheck = "1.0" quickcheck = "1.0"
quickcheck_macros = "1.1" quickcheck_macros = "1.1"
quote = "1.0" quote = "1.0"
@ -140,7 +141,7 @@ getrandom = "0.2" # pick same version that rand requires
rand_chacha = "0.9" rand_chacha = "0.9"
ratatui = "0.29" ratatui = "0.29"
rayon = "1.10" rayon = "1.10"
reedline = "0.40.0" reedline = "0.41.0"
rmp = "0.8" rmp = "0.8"
rmp-serde = "1.3" rmp-serde = "1.3"
roxmltree = "0.20" roxmltree = "0.20"
@ -156,17 +157,18 @@ serde_json = "1.0.97"
serde_urlencoded = "0.7.1" serde_urlencoded = "0.7.1"
serde_yaml = "0.9.33" serde_yaml = "0.9.33"
sha2 = "0.10" sha2 = "0.10"
strip-ansi-escapes = "0.2.0" strip-ansi-escapes = "0.2.1"
strum = "0.26" strum = "0.26"
strum_macros = "0.26" strum_macros = "0.26"
syn = "2.0" syn = "2.0"
sysinfo = "0.33" sysinfo = "0.36"
tabled = { version = "0.20", default-features = false } tabled = { version = "0.20", default-features = false }
tempfile = "3.20" tempfile = "3.20"
titlecase = "3.5" thiserror = "2.0.12"
titlecase = "3.6"
toml = "0.8" toml = "0.8"
trash = "5.2" trash = "5.2"
update-informer = { version = "1.2.0", default-features = false, features = ["github", "ureq"] } update-informer = { version = "1.3.0", default-features = false, features = ["github", "ureq"] }
umask = "2.1" umask = "2.1"
unicode-segmentation = "1.12" unicode-segmentation = "1.12"
unicode-width = "0.2" unicode-width = "0.2"
@ -184,7 +186,7 @@ uuid = "1.16.0"
v_htmlescape = "0.15.0" v_htmlescape = "0.15.0"
wax = "0.6" wax = "0.6"
web-time = "1.1.0" web-time = "1.1.0"
which = "7.0.0" which = "8.0.0"
windows = "0.56" windows = "0.56"
windows-sys = "0.48" windows-sys = "0.48"
winreg = "0.52" winreg = "0.52"
@ -195,27 +197,29 @@ webpki-roots = "1.0"
# Warning: workspace lints affect library code as well as tests, so don't enable lints that would be too noisy in tests like that. # Warning: workspace lints affect library code as well as tests, so don't enable lints that would be too noisy in tests like that.
# todo = "warn" # todo = "warn"
unchecked_duration_subtraction = "warn" unchecked_duration_subtraction = "warn"
used_underscore_binding = "warn"
[lints] [lints]
workspace = true workspace = true
[dependencies] [dependencies]
nu-cli = { path = "./crates/nu-cli", version = "0.105.1" } nu-cli = { path = "./crates/nu-cli", version = "0.106.1" }
nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.105.1" } nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.106.1" }
nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.105.1" } nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.106.1" }
nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.105.1", optional = true } nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.106.1" }
nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.105.1" } nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.106.1", optional = true }
nu-command = { path = "./crates/nu-command", version = "0.105.1", default-features = false, features = ["os"] } nu-command = { path = "./crates/nu-command", version = "0.106.1", default-features = false, features = ["os"] }
nu-engine = { path = "./crates/nu-engine", version = "0.105.1" } nu-engine = { path = "./crates/nu-engine", version = "0.106.1" }
nu-explore = { path = "./crates/nu-explore", version = "0.105.1" } nu-experimental = { path = "./crates/nu-experimental", version = "0.106.1" }
nu-lsp = { path = "./crates/nu-lsp/", version = "0.105.1" } nu-explore = { path = "./crates/nu-explore", version = "0.106.1" }
nu-parser = { path = "./crates/nu-parser", version = "0.105.1" } nu-lsp = { path = "./crates/nu-lsp/", version = "0.106.1" }
nu-path = { path = "./crates/nu-path", version = "0.105.1" } nu-parser = { path = "./crates/nu-parser", version = "0.106.1" }
nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.105.1" } nu-path = { path = "./crates/nu-path", version = "0.106.1" }
nu-protocol = { path = "./crates/nu-protocol", version = "0.105.1" } nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.106.1" }
nu-std = { path = "./crates/nu-std", version = "0.105.1" } nu-protocol = { path = "./crates/nu-protocol", version = "0.106.1" }
nu-system = { path = "./crates/nu-system", version = "0.105.1" } nu-std = { path = "./crates/nu-std", version = "0.106.1" }
nu-utils = { path = "./crates/nu-utils", version = "0.105.1" } nu-system = { path = "./crates/nu-system", version = "0.106.1" }
nu-utils = { path = "./crates/nu-utils", version = "0.106.1" }
reedline = { workspace = true, features = ["bashisms", "sqlite"] } reedline = { workspace = true, features = ["bashisms", "sqlite"] }
crossterm = { workspace = true } crossterm = { workspace = true }
@ -244,9 +248,9 @@ nix = { workspace = true, default-features = false, features = [
] } ] }
[dev-dependencies] [dev-dependencies]
nu-test-support = { path = "./crates/nu-test-support", version = "0.105.1" } nu-test-support = { path = "./crates/nu-test-support", version = "0.106.1" }
nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.105.1" } nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.106.1" }
nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.105.1" } nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.106.1" }
assert_cmd = "2.0" assert_cmd = "2.0"
dirs = { workspace = true } dirs = { workspace = true }
tango-bench = "0.6" tango-bench = "0.6"
@ -257,10 +261,14 @@ serial_test = "3.2"
tempfile = { workspace = true } tempfile = { workspace = true }
[features] [features]
# Enable all features while still avoiding mutually exclusive features.
# Use this if `--all-features` fails.
full = ["plugin", "rustls-tls", "system-clipboard", "trash-support", "sqlite"]
plugin = [ plugin = [
# crates # crates
"nu-cmd-plugin", "dep:nu-cmd-plugin",
"nu-plugin-engine", "dep:nu-plugin-engine",
# features # features
"nu-cli/plugin", "nu-cli/plugin",
@ -286,21 +294,20 @@ stable = ["default"]
# Enable to statically link OpenSSL (perl is required, to build OpenSSL https://docs.rs/openssl/latest/openssl/); # Enable to statically link OpenSSL (perl is required, to build OpenSSL https://docs.rs/openssl/latest/openssl/);
# otherwise the system version will be used. Not enabled by default because it takes a while to build # otherwise the system version will be used. Not enabled by default because it takes a while to build
static-link-openssl = ["dep:openssl", "nu-cmd-lang/static-link-openssl"] static-link-openssl = ["dep:openssl"]
# Optional system clipboard support in `reedline`, this behavior has problematic compatibility with some systems. # Optional system clipboard support in `reedline`, this behavior has problematic compatibility with some systems.
# Missing X server/ Wayland can cause issues # Missing X server/ Wayland can cause issues
system-clipboard = [ system-clipboard = [
"reedline/system_clipboard", "reedline/system_clipboard",
"nu-cli/system-clipboard", "nu-cli/system-clipboard",
"nu-cmd-lang/system-clipboard",
] ]
# Stable (Default) # Stable (Default)
trash-support = ["nu-command/trash-support", "nu-cmd-lang/trash-support"] trash-support = ["nu-command/trash-support"]
# SQLite commands for nushell # SQLite commands for nushell
sqlite = ["nu-command/sqlite", "nu-cmd-lang/sqlite", "nu-std/sqlite"] sqlite = ["nu-command/sqlite", "nu-std/sqlite"]
[profile.release] [profile.release]
opt-level = "s" # Optimize for size opt-level = "s" # Optimize for size

View File

@ -199,7 +199,7 @@ fn bench_record_nested_access(n: usize) -> impl IntoBenchmarks {
let nested_access = ".col".repeat(n); let nested_access = ".col".repeat(n);
bench_command( bench_command(
format!("record_nested_access_{n}"), format!("record_nested_access_{n}"),
format!("$record{} | ignore", nested_access), format!("$record{nested_access} | ignore"),
stack, stack,
engine, engine,
) )
@ -319,7 +319,7 @@ fn bench_eval_par_each(n: usize) -> impl IntoBenchmarks {
let stack = Stack::new(); let stack = Stack::new();
bench_command( bench_command(
format!("eval_par_each_{n}"), format!("eval_par_each_{n}"),
format!("(1..{}) | par-each -t 2 {{|_| 1 }} | ignore", n), format!("(1..{n}) | par-each -t 2 {{|_| 1 }} | ignore"),
stack, stack,
engine, engine,
) )
@ -357,7 +357,7 @@ fn encode_json(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let encoder = Rc::new(EncodingType::try_from_bytes(b"json").unwrap()); let encoder = Rc::new(EncodingType::try_from_bytes(b"json").unwrap());
[benchmark_fn( [benchmark_fn(
format!("encode_json_{}_{}", row_cnt, col_cnt), format!("encode_json_{row_cnt}_{col_cnt}"),
move |b| { move |b| {
let encoder = encoder.clone(); let encoder = encoder.clone();
let test_data = test_data.clone(); let test_data = test_data.clone();
@ -377,7 +377,7 @@ fn encode_msgpack(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let encoder = Rc::new(EncodingType::try_from_bytes(b"msgpack").unwrap()); let encoder = Rc::new(EncodingType::try_from_bytes(b"msgpack").unwrap());
[benchmark_fn( [benchmark_fn(
format!("encode_msgpack_{}_{}", row_cnt, col_cnt), format!("encode_msgpack_{row_cnt}_{col_cnt}"),
move |b| { move |b| {
let encoder = encoder.clone(); let encoder = encoder.clone();
let test_data = test_data.clone(); let test_data = test_data.clone();
@ -399,7 +399,7 @@ fn decode_json(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
encoder.encode(&test_data, &mut res).unwrap(); encoder.encode(&test_data, &mut res).unwrap();
[benchmark_fn( [benchmark_fn(
format!("decode_json_{}_{}", row_cnt, col_cnt), format!("decode_json_{row_cnt}_{col_cnt}"),
move |b| { move |b| {
let res = res.clone(); let res = res.clone();
b.iter(move || { b.iter(move || {
@ -422,7 +422,7 @@ fn decode_msgpack(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
encoder.encode(&test_data, &mut res).unwrap(); encoder.encode(&test_data, &mut res).unwrap();
[benchmark_fn( [benchmark_fn(
format!("decode_msgpack_{}_{}", row_cnt, col_cnt), format!("decode_msgpack_{row_cnt}_{col_cnt}"),
move |b| { move |b| {
let res = res.clone(); let res = res.clone();
b.iter(move || { b.iter(move || {

View File

@ -5,29 +5,29 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cli"
edition = "2024" edition = "2024"
license = "MIT" license = "MIT"
name = "nu-cli" name = "nu-cli"
version = "0.105.1" version = "0.106.1"
[lib] [lib]
bench = false bench = false
[dev-dependencies] [dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.105.1" } nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.106.1" }
nu-command = { path = "../nu-command", version = "0.105.1" } nu-command = { path = "../nu-command", version = "0.106.1" }
nu-std = { path = "../nu-std", version = "0.105.1" } nu-std = { path = "../nu-std", version = "0.106.1" }
nu-test-support = { path = "../nu-test-support", version = "0.105.1" } nu-test-support = { path = "../nu-test-support", version = "0.106.1" }
rstest = { workspace = true, default-features = false } rstest = { workspace = true, default-features = false }
tempfile = { workspace = true } tempfile = { workspace = true }
[dependencies] [dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.105.1" } nu-cmd-base = { path = "../nu-cmd-base", version = "0.106.1" }
nu-engine = { path = "../nu-engine", version = "0.105.1", features = ["os"] } nu-engine = { path = "../nu-engine", version = "0.106.1", features = ["os"] }
nu-glob = { path = "../nu-glob", version = "0.105.1" } nu-glob = { path = "../nu-glob", version = "0.106.1" }
nu-path = { path = "../nu-path", version = "0.105.1" } nu-path = { path = "../nu-path", version = "0.106.1" }
nu-parser = { path = "../nu-parser", version = "0.105.1" } nu-parser = { path = "../nu-parser", version = "0.106.1" }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.105.1", optional = true } nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.106.1", optional = true }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", features = ["os"] } nu-protocol = { path = "../nu-protocol", version = "0.106.1", features = ["os"] }
nu-utils = { path = "../nu-utils", version = "0.105.1" } nu-utils = { path = "../nu-utils", version = "0.106.1" }
nu-color-config = { path = "../nu-color-config", version = "0.105.1" } nu-color-config = { path = "../nu-color-config", version = "0.106.1" }
nu-ansi-term = { workspace = true } nu-ansi-term = { workspace = true }
reedline = { workspace = true, features = ["bashisms", "sqlite"] } reedline = { workspace = true, features = ["bashisms", "sqlite"] }

View File

@ -57,7 +57,7 @@ Note that history item IDs are ignored when importing from file."#
result: None, result: None,
}, },
Example { Example {
example: "[[ command_line cwd ]; [ foo /home ]] | history import", example: "[[ command cwd ]; [ foo /home ]] | history import",
description: "Append `foo` ran from `/home` to the current history", description: "Append `foo` ran from `/home` to the current history",
result: None, result: None,
}, },

View File

@ -118,7 +118,7 @@ fn get_suggestions_by_value(
|| s.chars() || s.chars()
.any(|c: char| !(c.is_ascii_alphabetic() || ['_', '-'].contains(&c))) .any(|c: char| !(c.is_ascii_alphabetic() || ['_', '-'].contains(&c)))
{ {
format!("{:?}", s) format!("{s:?}")
} else { } else {
s s
}; };

View File

@ -52,7 +52,7 @@ impl CommandCompletion {
continue; continue;
}; };
let value = if matched_internal(&name) { let value = if matched_internal(&name) {
format!("^{}", name) format!("^{name}")
} else { } else {
name.clone() name.clone()
}; };

View File

@ -176,7 +176,7 @@ impl NuCompleter {
&mut working_set, &mut working_set,
Some("completer"), Some("completer"),
// Add a placeholder `a` to the end // Add a placeholder `a` to the end
format!("{}a", line).as_bytes(), format!("{line}a").as_bytes(),
false, false,
); );
self.fetch_completions_by_block(block, &working_set, pos, offset, line, true) self.fetch_completions_by_block(block, &working_set, pos, offset, line, true)
@@ -370,7 +370,8 @@ impl NuCompleter {
                         FileCompletion,
                     );
-                    suggestions.extend(self.process_completion(&mut completer, &ctx));
+                    // Prioritize argument completions over (sub)commands
+                    suggestions.splice(0..0, self.process_completion(&mut completer, &ctx));
                     break;
                 }
@@ -384,33 +385,39 @@
                    };
                    self.process_completion(&mut flag_completions, &ctx)
                };
-               suggestions.extend(match arg {
-                   // flags
-                   Argument::Named(_) | Argument::Unknown(_)
-                       if prefix.starts_with(b"-") =>
-                   {
-                       flag_completion_helper()
-                   }
-                   // only when `strip` == false
-                   Argument::Positional(_) if prefix == b"-" => flag_completion_helper(),
-                   // complete according to expression type and command head
-                   Argument::Positional(expr) => {
-                       let command_head = working_set.get_decl(call.decl_id).name();
-                       positional_arg_indices.push(arg_idx);
-                       self.argument_completion_helper(
-                           PositionalArguments {
-                               command_head,
-                               positional_arg_indices,
-                               arguments: &call.arguments,
-                               expr,
-                           },
-                           pos,
-                           &ctx,
-                           suggestions.is_empty(),
-                       )
-                   }
-                   _ => vec![],
-               });
+               // Prioritize argument completions over (sub)commands
+               suggestions.splice(
+                   0..0,
+                   match arg {
+                       // flags
+                       Argument::Named(_) | Argument::Unknown(_)
+                           if prefix.starts_with(b"-") =>
+                       {
+                           flag_completion_helper()
+                       }
+                       // only when `strip` == false
+                       Argument::Positional(_) if prefix == b"-" => {
+                           flag_completion_helper()
+                       }
+                       // complete according to expression type and command head
+                       Argument::Positional(expr) => {
+                           let command_head = working_set.get_decl(call.decl_id).name();
+                           positional_arg_indices.push(arg_idx);
+                           self.argument_completion_helper(
+                               PositionalArguments {
+                                   command_head,
+                                   positional_arg_indices,
+                                   arguments: &call.arguments,
+                                   expr,
+                               },
+                               pos,
+                               &ctx,
+                               suggestions.is_empty(),
+                           )
+                       }
+                       _ => vec![],
+                   },
+               );
                break;
            } else if !matches!(arg, Argument::Named(_)) {
                positional_arg_indices.push(arg_idx);
@@ -462,10 +469,18 @@ impl NuCompleter {
                    if let Some(external_result) =
                        self.external_completion(closure, &text_spans, offset, new_span)
                    {
-                       suggestions.extend(external_result);
+                       // Prioritize external results over (sub)commands
+                       suggestions.splice(0..0, external_result);
                        return suggestions;
                    }
                }
+               // for external path arguments with spaces, please check issue #15790
+               if suggestions.is_empty() {
+                   let (new_span, prefix) =
+                       strip_placeholder_if_any(working_set, &span, strip);
+                   let ctx = Context::new(working_set, new_span, prefix, offset);
+                   return self.process_completion(&mut FileCompletion, &ctx);
+               }
                break;
            }
} }
@ -842,7 +857,7 @@ mod completer_tests {
for (line, has_result, begins_with, expected_values) in dataset { for (line, has_result, begins_with, expected_values) in dataset {
let result = completer.fetch_completions_at(line, line.len()); let result = completer.fetch_completions_at(line, line.len());
// Test whether the result is empty or not // Test whether the result is empty or not
assert_eq!(!result.is_empty(), has_result, "line: {}", line); assert_eq!(!result.is_empty(), has_result, "line: {line}");
// Test whether the result begins with the expected value // Test whether the result begins with the expected value
result result
@ -857,8 +872,7 @@ mod completer_tests {
.filter(|x| *x) .filter(|x| *x)
.count(), .count(),
expected_values.len(), expected_values.len(),
"line: {}", "line: {line}"
line
); );
} }
} }

View File

@ -314,7 +314,7 @@ pub fn escape_path(path: String) -> String {
if path.contains('\'') { if path.contains('\'') {
// decide to use double quotes // decide to use double quotes
// Path as Debug will do the escaping for `"`, `\` // Path as Debug will do the escaping for `"`, `\`
format!("{:?}", path) format!("{path:?}")
} else { } else {
format!("'{path}'") format!("'{path}'")
} }

View File

@@ -102,7 +102,11 @@ impl<T> NuMatcher<'_, T> {
            options,
            needle: needle.to_owned(),
            state: State::Fuzzy {
-               matcher: Matcher::new(Config::DEFAULT),
+               matcher: Matcher::new({
+                   let mut cfg = Config::DEFAULT;
+                   cfg.prefer_prefix = true;
+                   cfg
+               }),
                atom,
                items: Vec::new(),
            },
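The matcher above now prefers prefix matches when ranking fuzzy candidates. A minimal sketch of how to exercise it from the shell, using the existing completion setting seen elsewhere in this diff:

```nushell
# enable fuzzy matching so the prefer_prefix behavior above applies to completions
$env.config.completions.algorithm = "fuzzy"
```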

View File

@ -129,7 +129,7 @@ impl Completer for DotNuCompletion {
.take_while(|c| "`'\"".contains(*c)) .take_while(|c| "`'\"".contains(*c))
.collect::<String>(); .collect::<String>();
for path in ["std", "std-rfc"] { for path in ["std", "std-rfc"] {
let path = format!("{}{}", surround_prefix, path); let path = format!("{surround_prefix}{path}");
matcher.add( matcher.add(
path.clone(), path.clone(),
FileSuggestion { FileSuggestion {
@ -146,7 +146,7 @@ impl Completer for DotNuCompletion {
for sub_vp_id in sub_paths { for sub_vp_id in sub_paths {
let (path, sub_vp) = working_set.get_virtual_path(*sub_vp_id); let (path, sub_vp) = working_set.get_virtual_path(*sub_vp_id);
let path = path let path = path
.strip_prefix(&format!("{}/", base_dir)) .strip_prefix(&format!("{base_dir}/"))
.unwrap_or(path) .unwrap_or(path)
.to_string(); .to_string();
matcher.add( matcher.add(

View File

@ -3,9 +3,9 @@ use nu_engine::eval_block;
use nu_parser::parse; use nu_parser::parse;
use nu_protocol::{ use nu_protocol::{
PipelineData, ShellError, Spanned, Value, PipelineData, ShellError, Spanned, Value,
cli_error::report_compile_error,
debugger::WithoutDebug, debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet}, engine::{EngineState, Stack, StateWorkingSet},
report_error::report_compile_error,
report_parse_error, report_parse_warning, report_parse_error, report_parse_warning,
}; };
use std::sync::Arc; use std::sync::Arc;

View File

@ -5,9 +5,9 @@ use nu_parser::parse;
use nu_path::canonicalize_with; use nu_path::canonicalize_with;
use nu_protocol::{ use nu_protocol::{
PipelineData, ShellError, Span, Value, PipelineData, ShellError, Span, Value,
cli_error::report_compile_error,
debugger::WithoutDebug, debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet}, engine::{EngineState, Stack, StateWorkingSet},
report_error::report_compile_error,
report_parse_error, report_parse_warning, report_parse_error, report_parse_warning,
shell_error::io::*, shell_error::io::*,
}; };

View File

@ -1,4 +1,4 @@
use nu_engine::documentation::{HelpStyle, get_flags_section}; use nu_engine::documentation::{FormatterValue, HelpStyle, get_flags_section};
use nu_protocol::{Config, engine::EngineState, levenshtein_distance}; use nu_protocol::{Config, engine::EngineState, levenshtein_distance};
use nu_utils::IgnoreCaseExt; use nu_utils::IgnoreCaseExt;
use reedline::{Completer, Suggestion}; use reedline::{Completer, Suggestion};
@@ -66,8 +66,11 @@ impl NuHelpCompleter {
        let _ = write!(long_desc, "Usage:\r\n > {}\r\n", sig.call_signature());
        if !sig.named.is_empty() {
-           long_desc.push_str(&get_flags_section(&sig, &help_style, |v| {
-               v.to_parsable_string(", ", &self.config)
+           long_desc.push_str(&get_flags_section(&sig, &help_style, |v| match v {
+               FormatterValue::DefaultValue(value) => {
+                   value.to_parsable_string(", ", &self.config)
+               }
+               FormatterValue::CodeString(text) => text.to_string(),
            }))
        }

View File

@ -3,6 +3,8 @@ use std::sync::Arc;
use nu_engine::command_prelude::*; use nu_engine::command_prelude::*;
use reedline::{Highlighter, StyledText}; use reedline::{Highlighter, StyledText};
use crate::syntax_highlight::highlight_syntax;
#[derive(Clone)] #[derive(Clone)]
pub struct NuHighlight; pub struct NuHighlight;
@ -14,6 +16,11 @@ impl Command for NuHighlight {
fn signature(&self) -> Signature { fn signature(&self) -> Signature {
Signature::build("nu-highlight") Signature::build("nu-highlight")
.category(Category::Strings) .category(Category::Strings)
.switch(
"reject-garbage",
"Return an error if invalid syntax (garbage) was encountered",
Some('r'),
)
.input_output_types(vec![(Type::String, Type::String)]) .input_output_types(vec![(Type::String, Type::String)])
} }
@@ -32,19 +39,33 @@ impl Command for NuHighlight {
        call: &Call,
        input: PipelineData,
    ) -> Result<PipelineData, ShellError> {
+       let reject_garbage = call.has_flag(engine_state, stack, "reject-garbage")?;
        let head = call.head;
        let signals = engine_state.signals();
-       let highlighter = crate::NuHighlighter {
-           engine_state: Arc::new(engine_state.clone()),
-           stack: Arc::new(stack.clone()),
-       };
+       let engine_state = Arc::new(engine_state.clone());
+       let stack = Arc::new(stack.clone());
        input.map(
            move |x| match x.coerce_into_string() {
                Ok(line) => {
-                   let highlights = highlighter.highlight(&line, line.len());
+                   let result = highlight_syntax(&engine_state, &stack, &line, line.len());
+                   let highlights = match (reject_garbage, result.found_garbage) {
+                       (false, _) => result.text,
+                       (true, None) => result.text,
+                       (true, Some(span)) => {
+                           let error = ShellError::OutsideSpannedLabeledError {
+                               src: line,
+                               error: "encountered invalid syntax while highlighting".into(),
+                               msg: "invalid syntax".into(),
+                               span,
+                           };
+                           return Value::error(error, head);
+                       }
+                   };
                    Value::string(highlights.render_simple(), head)
                }
                Err(err) => Value::error(err, head),
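A rough usage sketch of the new flag: without `--reject-garbage` invalid syntax is still returned (colored as garbage), with it the command errors instead.

```nushell
# valid input highlights as before
'ls | where size > 1kb' | nu-highlight | ansi strip

# with the new flag, invalid syntax produces an error instead of garbage-colored text
'ls | where a b 12345(' | nu-highlight --reject-garbage
```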

View File

@ -1172,6 +1172,7 @@ fn edit_from_record(
"cutfromlinestart" => EditCommand::CutFromLineStart, "cutfromlinestart" => EditCommand::CutFromLineStart,
"cuttoend" => EditCommand::CutToEnd, "cuttoend" => EditCommand::CutToEnd,
"cuttolineend" => EditCommand::CutToLineEnd, "cuttolineend" => EditCommand::CutToLineEnd,
"killline" => EditCommand::KillLine,
"cutwordleft" => EditCommand::CutWordLeft, "cutwordleft" => EditCommand::CutWordLeft,
"cutbigwordleft" => EditCommand::CutBigWordLeft, "cutbigwordleft" => EditCommand::CutBigWordLeft,
"cutwordright" => EditCommand::CutWordRight, "cutwordright" => EditCommand::CutWordRight,

View File

@ -22,8 +22,8 @@ use nu_color_config::StyleComputer;
use nu_engine::env_to_strings; use nu_engine::env_to_strings;
use nu_engine::exit::cleanup_exit; use nu_engine::exit::cleanup_exit;
use nu_parser::{lex, parse, trim_quotes_str}; use nu_parser::{lex, parse, trim_quotes_str};
use nu_protocol::shell_error;
use nu_protocol::shell_error::io::IoError; use nu_protocol::shell_error::io::IoError;
use nu_protocol::{BannerKind, shell_error};
use nu_protocol::{ use nu_protocol::{
HistoryConfig, HistoryFileFormat, PipelineData, ShellError, Span, Spanned, Value, HistoryConfig, HistoryFileFormat, PipelineData, ShellError, Span, Spanned, Value,
config::NuCursorShape, config::NuCursorShape,
@@ -145,8 +145,8 @@ pub fn evaluate_repl(
    if load_std_lib.is_none() {
        match engine_state.get_config().show_banner {
-           Value::Bool { val: false, .. } => {}
-           Value::String { ref val, .. } if val == "short" => {
+           BannerKind::None => {}
+           BannerKind::Short => {
                eval_source(
                    engine_state,
                    &mut unique_stack,
@@ -156,7 +156,7 @@
                    false,
                );
            }
-           _ => {
+           BannerKind::Full => {
                eval_source(
                    engine_state,
                    &mut unique_stack,
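The `BannerKind` variants map onto the existing `show_banner` config values; a quick sketch of the two non-default settings handled above:

```nushell
# suppress the startup banner entirely, or show the short one-line variant
$env.config.show_banner = false
$env.config.show_banner = "short"
```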
@ -239,7 +239,7 @@ fn escape_special_vscode_bytes(input: &str) -> Result<String, ShellError> {
match byte { match byte {
// Escape bytes below 0x20 // Escape bytes below 0x20
b if b < 0x20 => format!("\\x{:02X}", byte).into_bytes(), b if b < 0x20 => format!("\\x{byte:02X}").into_bytes(),
// Escape semicolon as \x3B // Escape semicolon as \x3B
b';' => "\\x3B".to_string().into_bytes(), b';' => "\\x3B".to_string().into_bytes(),
// Escape backslash as \\ // Escape backslash as \\
@@ -1097,8 +1097,7 @@ fn run_shell_integration_osc633(
            // If we're in vscode, run their specific ansi escape sequence.
            // This is helpful for ctrl+g to change directories in the terminal.
            run_ansi_sequence(&format!(
-               "{}{}{}",
-               VSCODE_CWD_PROPERTY_MARKER_PREFIX, path, VSCODE_CWD_PROPERTY_MARKER_SUFFIX
+               "{VSCODE_CWD_PROPERTY_MARKER_PREFIX}{path}{VSCODE_CWD_PROPERTY_MARKER_SUFFIX}"
            ));
            perf!(
@@ -1114,10 +1113,7 @@
                //OSC 633 ; E ; <commandline> [; <nonce] ST - Explicitly set the command line with an optional nonce.
                run_ansi_sequence(&format!(
-                   "{}{}{}",
-                   VSCODE_COMMANDLINE_MARKER_PREFIX,
-                   replaced_cmd_text,
-                   VSCODE_COMMANDLINE_MARKER_SUFFIX
+                   "{VSCODE_COMMANDLINE_MARKER_PREFIX}{replaced_cmd_text}{VSCODE_COMMANDLINE_MARKER_SUFFIX}"
                ));
} }
} }
@ -1493,7 +1489,7 @@ mod test_auto_cd {
// Parse the input. It must be an auto-cd operation. // Parse the input. It must be an auto-cd operation.
let op = parse_operation(input.to_string(), &engine_state, &stack).unwrap(); let op = parse_operation(input.to_string(), &engine_state, &stack).unwrap();
let ReplOperation::AutoCd { cwd, target, span } = op else { let ReplOperation::AutoCd { cwd, target, span } = op else {
panic!("'{}' was not parsed into an auto-cd operation", input) panic!("'{input}' was not parsed into an auto-cd operation")
}; };
// Perform the auto-cd operation. // Perform the auto-cd operation.

View File

@ -17,147 +17,173 @@ pub struct NuHighlighter {
} }
impl Highlighter for NuHighlighter { impl Highlighter for NuHighlighter {
fn highlight(&self, line: &str, _cursor: usize) -> StyledText { fn highlight(&self, line: &str, cursor: usize) -> StyledText {
trace!("highlighting: {}", line); let result = highlight_syntax(&self.engine_state, &self.stack, line, cursor);
result.text
}
}
let config = self.stack.get_config(&self.engine_state); /// Result of a syntax highlight operation
let highlight_resolved_externals = config.highlight_resolved_externals; #[derive(Default)]
let mut working_set = StateWorkingSet::new(&self.engine_state); pub(crate) struct HighlightResult {
let block = parse(&mut working_set, None, line.as_bytes(), false); /// The highlighted text
let (shapes, global_span_offset) = { pub(crate) text: StyledText,
let mut shapes = flatten_block(&working_set, &block); /// The span of any garbage that was highlighted
// Highlighting externals has a config point because of concerns that using which to resolve pub(crate) found_garbage: Option<Span>,
// externals may slow down things too much. }
if highlight_resolved_externals {
for (span, shape) in shapes.iter_mut() {
if *shape == FlatShape::External {
let str_contents =
working_set.get_span_contents(Span::new(span.start, span.end));
let str_word = String::from_utf8_lossy(str_contents).to_string(); pub(crate) fn highlight_syntax(
let paths = env::path_str(&self.engine_state, &self.stack, *span).ok(); engine_state: &EngineState,
#[allow(deprecated)] stack: &Stack,
let res = if let Ok(cwd) = line: &str,
env::current_dir_str(&self.engine_state, &self.stack) cursor: usize,
{ ) -> HighlightResult {
which::which_in(str_word, paths.as_ref(), cwd).ok() trace!("highlighting: {}", line);
} else {
which::which_in_global(str_word, paths.as_ref()) let config = stack.get_config(engine_state);
.ok() let highlight_resolved_externals = config.highlight_resolved_externals;
.and_then(|mut i| i.next()) let mut working_set = StateWorkingSet::new(engine_state);
}; let block = parse(&mut working_set, None, line.as_bytes(), false);
if res.is_some() { let (shapes, global_span_offset) = {
*shape = FlatShape::ExternalResolved; let mut shapes = flatten_block(&working_set, &block);
} // Highlighting externals has a config point because of concerns that using which to resolve
// externals may slow down things too much.
if highlight_resolved_externals {
for (span, shape) in shapes.iter_mut() {
if *shape == FlatShape::External {
let str_contents =
working_set.get_span_contents(Span::new(span.start, span.end));
let str_word = String::from_utf8_lossy(str_contents).to_string();
let paths = env::path_str(engine_state, stack, *span).ok();
let res = if let Ok(cwd) = engine_state.cwd(Some(stack)) {
which::which_in(str_word, paths.as_ref(), cwd).ok()
} else {
which::which_in_global(str_word, paths.as_ref())
.ok()
.and_then(|mut i| i.next())
};
if res.is_some() {
*shape = FlatShape::ExternalResolved;
} }
} }
} }
(shapes, self.engine_state.next_span_start()) }
(shapes, engine_state.next_span_start())
};
let mut result = HighlightResult::default();
let mut last_seen_span = global_span_offset;
let global_cursor_offset = cursor + global_span_offset;
let matching_brackets_pos = find_matching_brackets(
line,
&working_set,
&block,
global_span_offset,
global_cursor_offset,
);
for shape in &shapes {
if shape.0.end <= last_seen_span
|| last_seen_span < global_span_offset
|| shape.0.start < global_span_offset
{
// We've already output something for this span
// so just skip this one
continue;
}
if shape.0.start > last_seen_span {
let gap = line
[(last_seen_span - global_span_offset)..(shape.0.start - global_span_offset)]
.to_string();
result.text.push((Style::new(), gap));
}
let next_token = line
[(shape.0.start - global_span_offset)..(shape.0.end - global_span_offset)]
.to_string();
let mut add_colored_token = |shape: &FlatShape, text: String| {
result
.text
.push((get_shape_color(shape.as_str(), &config), text));
}; };
let mut output = StyledText::default(); match shape.1 {
let mut last_seen_span = global_span_offset; FlatShape::Garbage => {
result.found_garbage.get_or_insert_with(|| {
let global_cursor_offset = _cursor + global_span_offset; Span::new(
let matching_brackets_pos = find_matching_brackets( shape.0.start - global_span_offset,
line, shape.0.end - global_span_offset,
&working_set, )
&block, });
global_span_offset, add_colored_token(&shape.1, next_token)
global_cursor_offset,
);
for shape in &shapes {
if shape.0.end <= last_seen_span
|| last_seen_span < global_span_offset
|| shape.0.start < global_span_offset
{
// We've already output something for this span
// so just skip this one
continue;
} }
if shape.0.start > last_seen_span { FlatShape::Nothing => add_colored_token(&shape.1, next_token),
let gap = line FlatShape::Binary => add_colored_token(&shape.1, next_token),
[(last_seen_span - global_span_offset)..(shape.0.start - global_span_offset)] FlatShape::Bool => add_colored_token(&shape.1, next_token),
.to_string(); FlatShape::Int => add_colored_token(&shape.1, next_token),
output.push((Style::new(), gap)); FlatShape::Float => add_colored_token(&shape.1, next_token),
} FlatShape::Range => add_colored_token(&shape.1, next_token),
let next_token = line FlatShape::InternalCall(_) => add_colored_token(&shape.1, next_token),
[(shape.0.start - global_span_offset)..(shape.0.end - global_span_offset)] FlatShape::External => add_colored_token(&shape.1, next_token),
.to_string(); FlatShape::ExternalArg => add_colored_token(&shape.1, next_token),
FlatShape::ExternalResolved => add_colored_token(&shape.1, next_token),
let mut add_colored_token = |shape: &FlatShape, text: String| { FlatShape::Keyword => add_colored_token(&shape.1, next_token),
output.push((get_shape_color(shape.as_str(), &config), text)); FlatShape::Literal => add_colored_token(&shape.1, next_token),
}; FlatShape::Operator => add_colored_token(&shape.1, next_token),
FlatShape::Signature => add_colored_token(&shape.1, next_token),
match shape.1 { FlatShape::String => add_colored_token(&shape.1, next_token),
FlatShape::Garbage => add_colored_token(&shape.1, next_token), FlatShape::RawString => add_colored_token(&shape.1, next_token),
FlatShape::Nothing => add_colored_token(&shape.1, next_token), FlatShape::StringInterpolation => add_colored_token(&shape.1, next_token),
FlatShape::Binary => add_colored_token(&shape.1, next_token), FlatShape::DateTime => add_colored_token(&shape.1, next_token),
FlatShape::Bool => add_colored_token(&shape.1, next_token), FlatShape::List
FlatShape::Int => add_colored_token(&shape.1, next_token), | FlatShape::Table
FlatShape::Float => add_colored_token(&shape.1, next_token), | FlatShape::Record
FlatShape::Range => add_colored_token(&shape.1, next_token), | FlatShape::Block
FlatShape::InternalCall(_) => add_colored_token(&shape.1, next_token), | FlatShape::Closure => {
FlatShape::External => add_colored_token(&shape.1, next_token), let span = shape.0;
FlatShape::ExternalArg => add_colored_token(&shape.1, next_token), let shape = &shape.1;
FlatShape::ExternalResolved => add_colored_token(&shape.1, next_token), let spans = split_span_by_highlight_positions(
FlatShape::Keyword => add_colored_token(&shape.1, next_token), line,
FlatShape::Literal => add_colored_token(&shape.1, next_token), span,
FlatShape::Operator => add_colored_token(&shape.1, next_token), &matching_brackets_pos,
FlatShape::Signature => add_colored_token(&shape.1, next_token), global_span_offset,
FlatShape::String => add_colored_token(&shape.1, next_token), );
FlatShape::RawString => add_colored_token(&shape.1, next_token), for (part, highlight) in spans {
FlatShape::StringInterpolation => add_colored_token(&shape.1, next_token), let start = part.start - span.start;
FlatShape::DateTime => add_colored_token(&shape.1, next_token), let end = part.end - span.start;
FlatShape::List let text = next_token[start..end].to_string();
| FlatShape::Table let mut style = get_shape_color(shape.as_str(), &config);
| FlatShape::Record if highlight {
| FlatShape::Block style = get_matching_brackets_style(style, &config);
| FlatShape::Closure => {
let span = shape.0;
let shape = &shape.1;
let spans = split_span_by_highlight_positions(
line,
span,
&matching_brackets_pos,
global_span_offset,
);
for (part, highlight) in spans {
let start = part.start - span.start;
let end = part.end - span.start;
let text = next_token[start..end].to_string();
let mut style = get_shape_color(shape.as_str(), &config);
if highlight {
style = get_matching_brackets_style(style, &config);
}
output.push((style, text));
} }
result.text.push((style, text));
} }
FlatShape::Filepath => add_colored_token(&shape.1, next_token),
FlatShape::Directory => add_colored_token(&shape.1, next_token),
FlatShape::GlobInterpolation => add_colored_token(&shape.1, next_token),
FlatShape::GlobPattern => add_colored_token(&shape.1, next_token),
FlatShape::Variable(_) | FlatShape::VarDecl(_) => {
add_colored_token(&shape.1, next_token)
}
FlatShape::Flag => add_colored_token(&shape.1, next_token),
FlatShape::Pipe => add_colored_token(&shape.1, next_token),
FlatShape::Redirection => add_colored_token(&shape.1, next_token),
FlatShape::Custom(..) => add_colored_token(&shape.1, next_token),
FlatShape::MatchPattern => add_colored_token(&shape.1, next_token),
} }
last_seen_span = shape.0.end;
}
let remainder = line[(last_seen_span - global_span_offset)..].to_string(); FlatShape::Filepath => add_colored_token(&shape.1, next_token),
if !remainder.is_empty() { FlatShape::Directory => add_colored_token(&shape.1, next_token),
output.push((Style::new(), remainder)); FlatShape::GlobInterpolation => add_colored_token(&shape.1, next_token),
FlatShape::GlobPattern => add_colored_token(&shape.1, next_token),
FlatShape::Variable(_) | FlatShape::VarDecl(_) => {
add_colored_token(&shape.1, next_token)
}
FlatShape::Flag => add_colored_token(&shape.1, next_token),
FlatShape::Pipe => add_colored_token(&shape.1, next_token),
FlatShape::Redirection => add_colored_token(&shape.1, next_token),
FlatShape::Custom(..) => add_colored_token(&shape.1, next_token),
FlatShape::MatchPattern => add_colored_token(&shape.1, next_token),
} }
last_seen_span = shape.0.end;
output
} }
let remainder = line[(last_seen_span - global_span_offset)..].to_string();
if !remainder.is_empty() {
result.text.push((Style::new(), remainder));
}
result
} }
fn split_span_by_highlight_positions( fn split_span_by_highlight_positions(

View File

@ -5,9 +5,9 @@ use nu_engine::{eval_block, eval_block_with_early_return};
use nu_parser::{Token, TokenContents, lex, parse, unescape_unquote_string}; use nu_parser::{Token, TokenContents, lex, parse, unescape_unquote_string};
use nu_protocol::{ use nu_protocol::{
PipelineData, ShellError, Span, Value, PipelineData, ShellError, Span, Value,
cli_error::report_compile_error,
debugger::WithoutDebug, debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet}, engine::{EngineState, Stack, StateWorkingSet},
report_error::report_compile_error,
report_parse_error, report_parse_warning, report_shell_error, report_parse_error, report_parse_warning, report_shell_error,
}; };
#[cfg(windows)] #[cfg(windows)]

View File

@ -5,3 +5,9 @@ fn nu_highlight_not_expr() {
let actual = nu!("'not false' | nu-highlight | ansi strip"); let actual = nu!("'not false' | nu-highlight | ansi strip");
assert_eq!(actual.out, "not false"); assert_eq!(actual.out, "not false");
} }
#[test]
fn nu_highlight_where_row_condition() {
let actual = nu!("'ls | where a b 12345(' | nu-highlight | ansi strip");
assert_eq!(actual.out, "ls | where a b 12345(");
}

View File

@ -9,14 +9,16 @@ use std::{
use nu_cli::NuCompleter; use nu_cli::NuCompleter;
use nu_engine::eval_block; use nu_engine::eval_block;
use nu_parser::parse; use nu_parser::parse;
use nu_path::expand_tilde; use nu_path::{AbsolutePathBuf, expand_tilde};
use nu_protocol::{Config, PipelineData, debugger::WithoutDebug, engine::StateWorkingSet}; use nu_protocol::{Config, PipelineData, debugger::WithoutDebug, engine::StateWorkingSet};
use nu_std::load_standard_library; use nu_std::load_standard_library;
use nu_test_support::fs;
use reedline::{Completer, Suggestion}; use reedline::{Completer, Suggestion};
use rstest::{fixture, rstest}; use rstest::{fixture, rstest};
use support::{ use support::{
completions_helpers::{ completions_helpers::{
new_dotnu_engine, new_external_engine, new_partial_engine, new_quote_engine, new_dotnu_engine, new_engine_helper, new_external_engine, new_partial_engine,
new_quote_engine,
}, },
file, folder, match_suggestions, match_suggestions_by_string, new_engine, file, folder, match_suggestions, match_suggestions_by_string, new_engine,
}; };
@ -123,7 +125,7 @@ fn custom_completer_with_options(
global_opts, global_opts,
completions completions
.iter() .iter()
.map(|comp| format!("'{}'", comp)) .map(|comp| format!("'{comp}'"))
.collect::<Vec<_>>() .collect::<Vec<_>>()
.join(", "), .join(", "),
completer_opts, completer_opts,
@ -307,10 +309,10 @@ fn custom_arguments_and_subcommands() {
let suggestions = completer.complete(completion_str, completion_str.len()); let suggestions = completer.complete(completion_str, completion_str.len());
// including both subcommand and directory completions // including both subcommand and directory completions
let expected = [ let expected = [
"foo test bar".into(),
folder("test_a"), folder("test_a"),
file("test_a_symlink"), file("test_a_symlink"),
folder("test_b"), folder("test_b"),
"foo test bar".into(),
]; ];
match_suggestions_by_string(&expected, &suggestions); match_suggestions_by_string(&expected, &suggestions);
} }
@ -328,7 +330,7 @@ fn custom_flags_and_subcommands() {
let completion_str = "foo --test"; let completion_str = "foo --test";
let suggestions = completer.complete(completion_str, completion_str.len()); let suggestions = completer.complete(completion_str, completion_str.len());
// including both flag and directory completions // including both flag and directory completions
let expected: Vec<_> = vec!["foo --test bar", "--test"]; let expected: Vec<_> = vec!["--test", "foo --test bar"];
match_suggestions(&expected, &suggestions); match_suggestions(&expected, &suggestions);
} }
@ -716,6 +718,16 @@ fn external_completer_fallback() {
let expected = [folder("test_a"), file("test_a_symlink"), folder("test_b")]; let expected = [folder("test_a"), file("test_a_symlink"), folder("test_b")];
let suggestions = run_external_completion(block, input); let suggestions = run_external_completion(block, input);
match_suggestions_by_string(&expected, &suggestions); match_suggestions_by_string(&expected, &suggestions);
// issue #15790
let input = "foo `dir with space/`";
let expected = vec!["`dir with space/bar baz`", "`dir with space/foo`"];
let suggestions = run_external_completion_within_pwd(
block,
input,
fs::fixtures().join("external_completions"),
);
match_suggestions(&expected, &suggestions);
} }
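For reference, the tests below wire up a stub external completer; a real configuration follows the same shape (sketch):

```nushell
# any closure taking the command-line spans and returning suggestions works here;
# this stub just returns fixed entries, like the one used in the test
$env.config.completions.external.completer = {|spans| ["foo", "bar"] }
```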
/// Fallback to external completions for flags of `sudo` /// Fallback to external completions for flags of `sudo`
@ -1468,11 +1480,12 @@ fn command_watch_with_filecompletion() {
match_suggestions(&expected_paths, &suggestions) match_suggestions(&expected_paths, &suggestions)
} }
#[rstest] #[test]
fn subcommand_completions() { fn subcommand_vs_external_completer() {
let (_, _, mut engine, mut stack) = new_engine(); let (_, _, mut engine, mut stack) = new_engine();
let commands = r#" let commands = r#"
$env.config.completions.algorithm = "fuzzy" $env.config.completions.algorithm = "fuzzy"
$env.config.completions.external.completer = {|spans| ["external"]}
def foo-test-command [] {} def foo-test-command [] {}
def "foo-test-command bar" [] {} def "foo-test-command bar" [] {}
def "foo-test-command aagap bcr" [] {} def "foo-test-command aagap bcr" [] {}
@ -1485,6 +1498,7 @@ fn subcommand_completions() {
let suggestions = subcommand_completer.complete(prefix, prefix.len()); let suggestions = subcommand_completer.complete(prefix, prefix.len());
match_suggestions( match_suggestions(
&vec![ &vec![
"external",
"food bar", "food bar",
"foo-test-command bar", "foo-test-command bar",
"foo-test-command aagap bcr", "foo-test-command aagap bcr",
@ -1494,7 +1508,7 @@ fn subcommand_completions() {
let prefix = "foot bar"; let prefix = "foot bar";
let suggestions = subcommand_completer.complete(prefix, prefix.len()); let suggestions = subcommand_completer.complete(prefix, prefix.len());
match_suggestions(&vec!["foo-test-command bar"], &suggestions); match_suggestions(&vec!["external", "foo-test-command bar"], &suggestions);
} }
#[test] #[test]
@ -2103,11 +2117,15 @@ fn alias_of_another_alias() {
match_suggestions(&expected_paths, &suggestions) match_suggestions(&expected_paths, &suggestions)
} }
fn run_external_completion(completer: &str, input: &str) -> Vec<Suggestion> { fn run_external_completion_within_pwd(
completer: &str,
input: &str,
pwd: AbsolutePathBuf,
) -> Vec<Suggestion> {
let completer = format!("$env.config.completions.external.completer = {completer}"); let completer = format!("$env.config.completions.external.completer = {completer}");
// Create a new engine // Create a new engine
let (_, _, mut engine_state, mut stack) = new_engine(); let (_, _, mut engine_state, mut stack) = new_engine_helper(pwd);
let (block, delta) = { let (block, delta) = {
let mut working_set = StateWorkingSet::new(&engine_state); let mut working_set = StateWorkingSet::new(&engine_state);
let block = parse(&mut working_set, None, completer.as_bytes(), false); let block = parse(&mut working_set, None, completer.as_bytes(), false);
@ -2131,6 +2149,10 @@ fn run_external_completion(completer: &str, input: &str) -> Vec<Suggestion> {
completer.complete(input, input.len()) completer.complete(input, input.len())
} }
fn run_external_completion(completer: &str, input: &str) -> Vec<Suggestion> {
run_external_completion_within_pwd(completer, input, fs::fixtures().join("completions"))
}
#[test] #[test]
fn unknown_command_completion() { fn unknown_command_completion() {
let (_, _, engine, stack) = new_engine(); let (_, _, engine, stack) = new_engine();

View File

@ -5,7 +5,7 @@ edition = "2024"
license = "MIT" license = "MIT"
name = "nu-cmd-base" name = "nu-cmd-base"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-base" repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-base"
version = "0.105.1" version = "0.106.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -13,10 +13,10 @@ version = "0.105.1"
workspace = true workspace = true
[dependencies] [dependencies]
nu-engine = { path = "../nu-engine", version = "0.105.1", default-features = false } nu-engine = { path = "../nu-engine", version = "0.106.1", default-features = false }
nu-parser = { path = "../nu-parser", version = "0.105.1" } nu-parser = { path = "../nu-parser", version = "0.106.1" }
nu-path = { path = "../nu-path", version = "0.105.1" } nu-path = { path = "../nu-path", version = "0.106.1" }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", default-features = false } nu-protocol = { path = "../nu-protocol", version = "0.106.1", default-features = false }
indexmap = { workspace = true } indexmap = { workspace = true }
miette = { workspace = true } miette = { workspace = true }

View File

@ -1,11 +1,11 @@
use miette::Result; use miette::Result;
use nu_engine::{eval_block, eval_block_with_early_return}; use nu_engine::{eval_block, eval_block_with_early_return, redirect_env};
use nu_parser::parse; use nu_parser::parse;
use nu_protocol::{ use nu_protocol::{
PipelineData, PositionalArg, ShellError, Span, Type, Value, VarId, PipelineData, PositionalArg, ShellError, Span, Type, Value, VarId,
cli_error::{report_parse_error, report_shell_error},
debugger::WithoutDebug, debugger::WithoutDebug,
engine::{Closure, EngineState, Stack, StateWorkingSet}, engine::{Closure, EngineState, Stack, StateWorkingSet},
report_error::{report_parse_error, report_shell_error},
}; };
use std::{collections::HashMap, sync::Arc}; use std::{collections::HashMap, sync::Arc};
@ -325,19 +325,7 @@ fn run_hook(
} }
// If all went fine, preserve the environment of the called block // If all went fine, preserve the environment of the called block
let caller_env_vars = stack.get_env_var_names(engine_state); redirect_env(engine_state, stack, &callee_stack);
// remove env vars that are present in the caller but not in the callee
// (the callee hid them)
for var in caller_env_vars.iter() {
if !callee_stack.has_env_var(engine_state, var) {
stack.remove_env_var(engine_state, var);
}
}
// add new env vars from callee to caller
for (var, value) in callee_stack.get_stack_env_vars() {
stack.add_env_var(var, value);
}
Ok(pipeline_data) Ok(pipeline_data)
} }

View File

@ -5,7 +5,7 @@ edition = "2024"
license = "MIT" license = "MIT"
name = "nu-cmd-extra" name = "nu-cmd-extra"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-extra" repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-extra"
version = "0.105.1" version = "0.106.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -16,13 +16,13 @@ bench = false
workspace = true workspace = true
[dependencies] [dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.105.1" } nu-cmd-base = { path = "../nu-cmd-base", version = "0.106.1" }
nu-engine = { path = "../nu-engine", version = "0.105.1", default-features = false } nu-engine = { path = "../nu-engine", version = "0.106.1", default-features = false }
nu-json = { version = "0.105.1", path = "../nu-json" } nu-json = { version = "0.106.1", path = "../nu-json" }
nu-parser = { path = "../nu-parser", version = "0.105.1" } nu-parser = { path = "../nu-parser", version = "0.106.1" }
nu-pretty-hex = { version = "0.105.1", path = "../nu-pretty-hex" } nu-pretty-hex = { version = "0.106.1", path = "../nu-pretty-hex" }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", default-features = false } nu-protocol = { path = "../nu-protocol", version = "0.106.1", default-features = false }
nu-utils = { path = "../nu-utils", version = "0.105.1", default-features = false } nu-utils = { path = "../nu-utils", version = "0.106.1", default-features = false }
# Potential dependencies for extras # Potential dependencies for extras
heck = { workspace = true } heck = { workspace = true }
@ -37,6 +37,6 @@ itertools = { workspace = true }
mime = { workspace = true } mime = { workspace = true }
[dev-dependencies] [dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.105.1" } nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.106.1" }
nu-command = { path = "../nu-command", version = "0.105.1" } nu-command = { path = "../nu-command", version = "0.106.1" }
nu-test-support = { path = "../nu-test-support", version = "0.105.1" } nu-test-support = { path = "../nu-test-support", version = "0.106.1" }

View File

@@ -12,7 +12,10 @@ impl Command for UpdateCells {
    fn signature(&self) -> Signature {
        Signature::build("update cells")
-           .input_output_types(vec![(Type::table(), Type::table())])
+           .input_output_types(vec![
+               (Type::table(), Type::table()),
+               (Type::record(), Type::record()),
+           ])
            .required(
                "closure",
                SyntaxShape::Closure(Some(vec![SyntaxShape::Any])),
@ -77,6 +80,15 @@ impl Command for UpdateCells {
"2021-11-18" => Value::test_string(""), "2021-11-18" => Value::test_string(""),
})])), })])),
}, },
Example {
example: r#"{a: 1, b: 2, c: 3} | update cells { $in + 10 }"#,
description: "Update each value in a record.",
result: Some(Value::test_record(record! {
"a" => Value::test_int(11),
"b" => Value::test_int(12),
"c" => Value::test_int(13),
})),
},
] ]
} }
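Since records are now accepted as input, column filtering should combine with them as well; a hedged sketch, assuming the same `--columns` flag that already exists for table input:

```nushell
# only touch the listed keys of the record, leave the rest alone
# (--columns here is the existing flag, applied to record input)
{a: 1, b: 2, c: 3} | update cells --columns [a c] { $in * 100 }
# => {a: 100, b: 2, c: 300}
```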
@ -85,7 +97,7 @@ impl Command for UpdateCells {
engine_state: &EngineState, engine_state: &EngineState,
stack: &mut Stack, stack: &mut Stack,
call: &Call, call: &Call,
input: PipelineData, mut input: PipelineData,
) -> Result<PipelineData, ShellError> { ) -> Result<PipelineData, ShellError> {
let head = call.head; let head = call.head;
let closure: Closure = call.req(engine_state, stack, 0)?; let closure: Closure = call.req(engine_state, stack, 0)?;
@@ -102,14 +114,51 @@ impl Command for UpdateCells {
        let metadata = input.metadata();
-       Ok(UpdateCellIterator {
-           iter: input.into_iter(),
-           closure: ClosureEval::new(engine_state, stack, closure),
-           columns,
-           span: head,
-       }
-       .into_pipeline_data(head, engine_state.signals().clone())
-       .set_metadata(metadata))
+       match input {
+           PipelineData::Value(
+               Value::Record {
+                   ref mut val,
+                   internal_span,
+               },
+               ..,
+           ) => {
+               let val = val.to_mut();
+               update_record(
+                   val,
+                   &mut ClosureEval::new(engine_state, stack, closure),
+                   internal_span,
+                   columns.as_ref(),
+               );
+               Ok(input)
+           }
+           _ => Ok(UpdateCellIterator {
+               iter: input.into_iter(),
+               closure: ClosureEval::new(engine_state, stack, closure),
+               columns,
+               span: head,
+           }
+           .into_pipeline_data(head, engine_state.signals().clone())
+           .set_metadata(metadata)),
+       }
    }
 }
+
+fn update_record(
+   record: &mut Record,
+   closure: &mut ClosureEval,
+   span: Span,
+   cols: Option<&HashSet<String>>,
+) {
+   if let Some(columns) = cols {
+       for (col, val) in record.iter_mut() {
+           if columns.contains(col) {
+               *val = eval_value(closure, span, std::mem::take(val));
+           }
+       }
+   } else {
+       for (_, val) in record.iter_mut() {
+           *val = eval_value(closure, span, std::mem::take(val))
+       }
+   }
+}
@@ -128,18 +177,7 @@ impl Iterator for UpdateCellIterator {
            let value = if let Value::Record { val, .. } = &mut value {
                let val = val.to_mut();
-               if let Some(columns) = &self.columns {
-                   for (col, val) in val.iter_mut() {
-                       if columns.contains(col) {
-                           *val = eval_value(&mut self.closure, self.span, std::mem::take(val));
-                       }
-                   }
-               } else {
-                   for (_, val) in val.iter_mut() {
-                       *val = eval_value(&mut self.closure, self.span, std::mem::take(val))
-                   }
-               }
+               update_record(val, &mut self.closure, self.span, self.columns.as_ref());
                value
            } else {
                eval_value(&mut self.closure, self.span, value)

View File

@ -188,7 +188,7 @@ fn get_theme_from_asset_file(
Some(t) => t, Some(t) => t,
None => { None => {
return Err(ShellError::TypeMismatch { return Err(ShellError::TypeMismatch {
err_message: format!("Unknown HTML theme '{}'", theme_name), err_message: format!("Unknown HTML theme '{theme_name}'"),
span: theme_span, span: theme_span,
}); });
} }
@ -774,8 +774,7 @@ mod tests {
for key in required_keys { for key in required_keys {
assert!( assert!(
theme_map.contains_key(key), theme_map.contains_key(key),
"Expected theme to contain key '{}'", "Expected theme to contain key '{key}'"
key
); );
} }
} }
@ -792,15 +791,13 @@ mod tests {
if let Err(err) = result { if let Err(err) = result {
assert!( assert!(
matches!(err, ShellError::TypeMismatch { .. }), matches!(err, ShellError::TypeMismatch { .. }),
"Expected TypeMismatch error, got: {:?}", "Expected TypeMismatch error, got: {err:?}"
err
); );
if let ShellError::TypeMismatch { err_message, span } = err { if let ShellError::TypeMismatch { err_message, span } = err {
assert!( assert!(
err_message.contains("doesnt-exist"), err_message.contains("doesnt-exist"),
"Error message should mention theme name, got: {}", "Error message should mention theme name, got: {err_message}"
err_message
); );
assert_eq!(span.start, 0); assert_eq!(span.start, 0);
assert_eq!(span.end, 13); assert_eq!(span.end, 13);

View File

@ -161,28 +161,28 @@ fn convert_to_smallest_number_type(num: i64, span: Span) -> Value {
let bytes = v.to_ne_bytes(); let bytes = v.to_ne_bytes();
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in bytes { for ch in bytes {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} else if let Some(v) = num.to_i16() { } else if let Some(v) = num.to_i16() {
let bytes = v.to_ne_bytes(); let bytes = v.to_ne_bytes();
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in bytes { for ch in bytes {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} else if let Some(v) = num.to_i32() { } else if let Some(v) = num.to_i32() {
let bytes = v.to_ne_bytes(); let bytes = v.to_ne_bytes();
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in bytes { for ch in bytes {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} else { } else {
let bytes = num.to_ne_bytes(); let bytes = num.to_ne_bytes();
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in bytes { for ch in bytes {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} }
@ -193,7 +193,7 @@ fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
Value::Binary { val, .. } => { Value::Binary { val, .. } => {
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in val { for ch in val {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} }
@ -204,7 +204,7 @@ fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
let raw_bytes = val.as_bytes(); let raw_bytes = val.as_bytes();
let mut raw_string = "".to_string(); let mut raw_string = "".to_string();
for ch in raw_bytes { for ch in raw_bytes {
raw_string.push_str(&format!("{:08b} ", ch)); raw_string.push_str(&format!("{ch:08b} "));
} }
Value::string(raw_string.trim(), span) Value::string(raw_string.trim(), span)
} }

View File

@ -16,6 +16,11 @@ impl Command for FormatNumber {
fn signature(&self) -> nu_protocol::Signature { fn signature(&self) -> nu_protocol::Signature {
Signature::build("format number") Signature::build("format number")
.input_output_types(vec![(Type::Number, Type::record())]) .input_output_types(vec![(Type::Number, Type::record())])
.switch(
"no-prefix",
"don't include the binary, hex or octal prefixes",
Some('n'),
)
.category(Category::Conversions) .category(Category::Conversions)
} }
@ -24,20 +29,36 @@ impl Command for FormatNumber {
} }
fn examples(&self) -> Vec<Example> { fn examples(&self) -> Vec<Example> {
vec![Example { vec![
description: "Get a record containing multiple formats for the number 42", Example {
example: "42 | format number", description: "Get a record containing multiple formats for the number 42",
result: Some(Value::test_record(record! { example: "42 | format number",
"binary" => Value::test_string("0b101010"), result: Some(Value::test_record(record! {
"debug" => Value::test_string("42"), "debug" => Value::test_string("42"),
"display" => Value::test_string("42"), "display" => Value::test_string("42"),
"lowerexp" => Value::test_string("4.2e1"), "binary" => Value::test_string("0b101010"),
"lowerhex" => Value::test_string("0x2a"), "lowerexp" => Value::test_string("4.2e1"),
"octal" => Value::test_string("0o52"), "upperexp" => Value::test_string("4.2E1"),
"upperexp" => Value::test_string("4.2E1"), "lowerhex" => Value::test_string("0x2a"),
"upperhex" => Value::test_string("0x2A"), "upperhex" => Value::test_string("0x2A"),
})), "octal" => Value::test_string("0o52"),
}] })),
},
Example {
description: "Format float without prefixes",
example: "3.14 | format number --no-prefix",
result: Some(Value::test_record(record! {
"debug" => Value::test_string("3.14"),
"display" => Value::test_string("3.14"),
"binary" => Value::test_string("100000000001001000111101011100001010001111010111000010100011111"),
"lowerexp" => Value::test_string("3.14e0"),
"upperexp" => Value::test_string("3.14E0"),
"lowerhex" => Value::test_string("40091eb851eb851f"),
"upperhex" => Value::test_string("40091EB851EB851F"),
"octal" => Value::test_string("400110753412172702437"),
})),
},
]
} }
fn run( fn run(
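With the prefixes dropped, individual fields of the result record can be used directly; a small usage sketch:

```nushell
# grab just the bare hex digits for 255, without the 0x prefix
(255 | format number --no-prefix).lowerhex
# => ff
```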
@ -59,14 +80,24 @@ pub(crate) fn format_number(
) -> Result<PipelineData, ShellError> { ) -> Result<PipelineData, ShellError> {
let cell_paths: Vec<CellPath> = call.rest(engine_state, stack, 0)?; let cell_paths: Vec<CellPath> = call.rest(engine_state, stack, 0)?;
let args = CellPathOnlyArgs::from(cell_paths); let args = CellPathOnlyArgs::from(cell_paths);
operate(action, args, input, call.head, engine_state.signals()) if call.has_flag(engine_state, stack, "no-prefix")? {
operate(
action_no_prefix,
args,
input,
call.head,
engine_state.signals(),
)
} else {
operate(action, args, input, call.head, engine_state.signals())
}
} }
fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value { fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
match input { match input {
Value::Float { val, .. } => format_f64(*val, span), Value::Float { val, .. } => format_f64(*val, false, span),
Value::Int { val, .. } => format_i64(*val, span), Value::Int { val, .. } => format_i64(*val, false, span),
Value::Filesize { val, .. } => format_i64(val.get(), span), Value::Filesize { val, .. } => format_i64(val.get(), false, span),
// Propagate errors by explicitly matching them before the final case. // Propagate errors by explicitly matching them before the final case.
Value::Error { .. } => input.clone(), Value::Error { .. } => input.clone(),
other => Value::error( other => Value::error(
@ -81,33 +112,80 @@ fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
} }
} }
fn format_i64(num: i64, span: Span) -> Value { fn action_no_prefix(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
match input {
Value::Float { val, .. } => format_f64(*val, true, span),
Value::Int { val, .. } => format_i64(*val, true, span),
Value::Filesize { val, .. } => format_i64(val.get(), true, span),
// Propagate errors by explicitly matching them before the final case.
Value::Error { .. } => input.clone(),
other => Value::error(
ShellError::OnlySupportsThisInputType {
exp_input_type: "float, int, or filesize".into(),
wrong_type: other.get_type().to_string(),
dst_span: span,
src_span: other.span(),
},
span,
),
}
}
fn format_i64(num: i64, no_prefix: bool, span: Span) -> Value {
Value::record( Value::record(
record! { record! {
"binary" => Value::string(format!("{num:#b}"), span),
"debug" => Value::string(format!("{num:#?}"), span), "debug" => Value::string(format!("{num:#?}"), span),
"display" => Value::string(format!("{num}"), span), "display" => Value::string(format!("{num}"), span),
"binary" => Value::string(
if no_prefix { format!("{num:b}") } else { format!("{num:#b}") },
span,
),
"lowerexp" => Value::string(format!("{num:#e}"), span), "lowerexp" => Value::string(format!("{num:#e}"), span),
"lowerhex" => Value::string(format!("{num:#x}"), span),
"octal" => Value::string(format!("{num:#o}"), span),
"upperexp" => Value::string(format!("{num:#E}"), span), "upperexp" => Value::string(format!("{num:#E}"), span),
"upperhex" => Value::string(format!("{num:#X}"), span), "lowerhex" => Value::string(
if no_prefix { format!("{num:x}") } else { format!("{num:#x}") },
span,
),
"upperhex" => Value::string(
if no_prefix { format!("{num:X}") } else { format!("{num:#X}") },
span,
),
"octal" => Value::string(
if no_prefix { format!("{num:o}") } else { format!("{num:#o}") },
span,
)
}, },
span, span,
) )
} }
fn format_f64(num: f64, span: Span) -> Value { fn format_f64(num: f64, no_prefix: bool, span: Span) -> Value {
Value::record( Value::record(
record! { record! {
"binary" => Value::string(format!("{:b}", num.to_bits()), span),
"debug" => Value::string(format!("{num:#?}"), span), "debug" => Value::string(format!("{num:#?}"), span),
"display" => Value::string(format!("{num}"), span), "display" => Value::string(format!("{num}"), span),
"binary" => Value::string(
if no_prefix {
format!("{:b}", num.to_bits())
} else {
format!("{:#b}", num.to_bits())
},
span,
),
"lowerexp" => Value::string(format!("{num:#e}"), span), "lowerexp" => Value::string(format!("{num:#e}"), span),
"lowerhex" => Value::string(format!("{:0x}", num.to_bits()), span),
"octal" => Value::string(format!("{:0o}", num.to_bits()), span),
"upperexp" => Value::string(format!("{num:#E}"), span), "upperexp" => Value::string(format!("{num:#E}"), span),
"upperhex" => Value::string(format!("{:0X}", num.to_bits()), span), "lowerhex" => Value::string(
if no_prefix { format!("{:x}", num.to_bits()) } else { format!("{:#x}", num.to_bits()) },
span,
),
"upperhex" => Value::string(
if no_prefix { format!("{:X}", num.to_bits()) } else { format!("{:#X}", num.to_bits()) },
span,
),
"octal" => Value::string(
if no_prefix { format!("{:o}", num.to_bits()) } else { format!("{:#o}", num.to_bits()) },
span,
)
}, },
span, span,
) )

View File

@ -6,7 +6,7 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-lang"
edition = "2024" edition = "2024"
license = "MIT" license = "MIT"
name = "nu-cmd-lang" name = "nu-cmd-lang"
version = "0.105.1" version = "0.106.1"
[lib] [lib]
bench = false bench = false
@ -15,17 +15,18 @@ bench = false
workspace = true workspace = true
[dependencies] [dependencies]
nu-engine = { path = "../nu-engine", version = "0.105.1", default-features = false } nu-engine = { path = "../nu-engine", version = "0.106.1", default-features = false }
nu-parser = { path = "../nu-parser", version = "0.105.1" } nu-experimental = { path = "../nu-experimental", version = "0.106.1" }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", default-features = false } nu-parser = { path = "../nu-parser", version = "0.106.1" }
nu-utils = { path = "../nu-utils", version = "0.105.1", default-features = false } nu-protocol = { path = "../nu-protocol", version = "0.106.1", default-features = false }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.105.1" } nu-utils = { path = "../nu-utils", version = "0.106.1", default-features = false }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.106.1" }
itertools = { workspace = true } itertools = { workspace = true }
shadow-rs = { version = "1.1", default-features = false } shadow-rs = { version = "1.2", default-features = false }
[build-dependencies] [build-dependencies]
shadow-rs = { version = "1.1", default-features = false, features = ["build"] } shadow-rs = { version = "1.2", default-features = false, features = ["build"] }
[dev-dependencies] [dev-dependencies]
quickcheck = { workspace = true } quickcheck = { workspace = true }
@ -43,8 +44,3 @@ plugin = [
"nu-protocol/plugin", "nu-protocol/plugin",
"os", "os",
] ]
trash-support = []
sqlite = []
static-link-openssl = []
system-clipboard = []

View File

@ -296,7 +296,7 @@ fn run(
} else { } else {
let value = stream.into_value(); let value = stream.into_value();
let base_description = value.get_type().to_string(); let base_description = value.get_type().to_string();
Value::string(format!("{} (stream)", base_description), head) Value::string(format!("{base_description} (stream)"), head)
} }
} }
PipelineData::Value(value, ..) => { PipelineData::Value(value, ..) => {

View File

@ -264,7 +264,7 @@ fn bind_args_to(
.expect("internal error: all custom parameters must have var_ids"); .expect("internal error: all custom parameters must have var_ids");
if let Some(result) = val_iter.next() { if let Some(result) = val_iter.next() {
let param_type = param.shape.to_type(); let param_type = param.shape.to_type();
if required && !result.is_subtype_of(&param_type) { if !result.is_subtype_of(&param_type) {
return Err(ShellError::CantConvert { return Err(ShellError::CantConvert {
to_type: param.shape.to_type().to_string(), to_type: param.shape.to_type().to_string(),
from_type: result.get_type().to_string(), from_type: result.get_type().to_string(),

View File

@ -229,7 +229,7 @@ fn make_other_error(value: &Value, throw_span: Option<Span>) -> ShellError {
error: "invalid error format.".into(), error: "invalid error format.".into(),
msg: "`$.label.start` should be smaller than `$.label.end`".into(), msg: "`$.label.start` should be smaller than `$.label.end`".into(),
span: Some(label_span), span: Some(label_span),
help: Some(format!("{} > {}", span_start, span_end)), help: Some(format!("{span_start} > {span_end}")),
inner: vec![], inner: vec![],
}; };
} }

View File

@@ -60,11 +60,13 @@ impl Command for If {
    ) -> Result<PipelineData, ShellError> {
        let call = call.assert_ast_call()?;
        let cond = call.positional_nth(0).expect("checked through parser");
-       let then_block = call
-           .positional_nth(1)
-           .expect("checked through parser")
+       let then_expr = call.positional_nth(1).expect("checked through parser");
+       let then_block = then_expr
            .as_block()
-           .expect("internal error: missing block");
+           .ok_or_else(|| ShellError::TypeMismatch {
+               err_message: "expected block".into(),
+               span: then_expr.span,
+           })?;
let else_case = call.positional_nth(2); let else_case = call.positional_nth(2);
if eval_constant(working_set, cond)?.as_bool()? { if eval_constant(working_set, cond)?.as_bool()? {

View File

@ -69,5 +69,5 @@ pub use return_::Return;
pub use scope::*; pub use scope::*;
pub use try_::Try; pub use try_::Try;
pub use use_::Use; pub use use_::Use;
pub use version::Version; pub use version::{VERSION_NU_FEATURES, Version};
pub use while_::While; pub use while_::While;

View File

@ -86,12 +86,17 @@ impl Command for OverlayHide {
vec![] vec![]
}; };
// also restore env vars which has been hidden
let env_vars_to_restore = stack.get_hidden_env_vars(&overlay_name.item, engine_state);
stack.remove_overlay(&overlay_name.item); stack.remove_overlay(&overlay_name.item);
for (name, val) in env_vars_to_restore {
stack.add_env_var(name, val);
}
for (name, val) in env_vars_to_keep { for (name, val) in env_vars_to_keep {
stack.add_env_var(name, val); stack.add_env_var(name, val);
} }
stack.update_config(engine_state)?;
Ok(PipelineData::empty()) Ok(PipelineData::empty())
} }

View File

@ -158,7 +158,7 @@ impl Command for OverlayUse {
} }
let eval_block = get_eval_block(engine_state); let eval_block = get_eval_block(engine_state);
let _ = eval_block(engine_state, &mut callee_stack, block, input); let _ = eval_block(engine_state, &mut callee_stack, block, input)?;
// The export-env block should see the env vars *before* activating this overlay // The export-env block should see the env vars *before* activating this overlay
caller_stack.add_overlay(overlay_name); caller_stack.add_overlay(overlay_name);
@ -178,6 +178,7 @@ impl Command for OverlayUse {
} }
} else { } else {
caller_stack.add_overlay(overlay_name); caller_stack.add_overlay(overlay_name);
caller_stack.update_config(engine_state)?;
} }
Ok(PipelineData::empty()) Ok(PipelineData::empty())

View File

@ -1,11 +1,48 @@
use std::sync::OnceLock; use std::{borrow::Cow, sync::OnceLock};
use itertools::Itertools;
use nu_engine::command_prelude::*; use nu_engine::command_prelude::*;
use nu_protocol::engine::StateWorkingSet; use nu_protocol::engine::StateWorkingSet;
use shadow_rs::shadow; use shadow_rs::shadow;
shadow!(build); shadow!(build);
/// Static container for the cargo features used by the `version` command.
///
/// This `OnceLock` holds the features from `nu`.
/// When you build `nu_cmd_lang`, Cargo doesn't pass along the same features that `nu` itself uses.
/// By setting this static before calling `version`, you make it show `nu`'s features instead
/// of `nu_cmd_lang`'s.
///
/// Embedders can set this to any feature list they need, but in most cases you'll probably want to
/// pass the cargo features of your host binary.
///
/// # How to get cargo features in your build script
///
/// In your binary's build script:
/// ```rust,ignore
/// // Re-export CARGO_CFG_FEATURE to the main binary.
/// // It holds all the features that cargo sets for your binary as a comma-separated list.
/// println!(
/// "cargo:rustc-env=NU_FEATURES={}",
/// std::env::var("CARGO_CFG_FEATURE").expect("set by cargo")
/// );
/// ```
///
/// Then, before you call `version`:
/// ```rust,ignore
/// // This uses static strings, but since we're using `Cow`, you can also pass owned strings.
/// let features = env!("NU_FEATURES")
/// .split(',')
/// .map(Cow::Borrowed)
/// .collect();
///
/// nu_cmd_lang::VERSION_NU_FEATURES
/// .set(features)
/// .expect("couldn't set VERSION_NU_FEATURES");
/// ```
pub static VERSION_NU_FEATURES: OnceLock<Vec<Cow<'static, str>>> = OnceLock::new();
#[derive(Clone)] #[derive(Clone)]
pub struct Version; pub struct Version;
@ -113,7 +150,17 @@ pub fn version(engine_state: &EngineState, span: Span) -> Result<PipelineData, S
record.push( record.push(
"features", "features",
Value::string(features_enabled().join(", "), span), Value::string(
VERSION_NU_FEATURES
.get()
.as_ref()
.map(|v| v.as_slice())
.unwrap_or_default()
.iter()
.filter(|f| !f.starts_with("dep:"))
.join(", "),
span,
),
); );
#[cfg(not(feature = "plugin"))] #[cfg(not(feature = "plugin"))]
@ -141,6 +188,17 @@ pub fn version(engine_state: &EngineState, span: Span) -> Result<PipelineData, S
); );
} }
record.push(
"experimental_options",
Value::string(
nu_experimental::ALL
.iter()
.map(|option| format!("{}={}", option.identifier(), option.get()))
.join(", "),
span,
),
);
Ok(Value::record(record, span).into_pipeline_data()) Ok(Value::record(record, span).into_pipeline_data())
} }
@ -164,42 +222,12 @@ fn global_allocator() -> &'static str {
"standard" "standard"
} }
fn features_enabled() -> Vec<String> {
let mut names = vec!["default".to_string()];
// NOTE: There should be another way to know features on.
#[cfg(feature = "trash-support")]
{
names.push("trash".to_string());
}
#[cfg(feature = "sqlite")]
{
names.push("sqlite".to_string());
}
#[cfg(feature = "static-link-openssl")]
{
names.push("static-link-openssl".to_string());
}
#[cfg(feature = "system-clipboard")]
{
names.push("system-clipboard".to_string());
}
names.sort();
names
}
#[cfg(test)] #[cfg(test)]
mod test { mod test {
#[test] #[test]
fn test_examples() { fn test_examples() {
use super::Version; use super::Version;
use crate::test_examples; use crate::test_examples;
test_examples(Version {}) test_examples(Version)
} }
} }

View File
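As a quick illustration of the embedding flow described in the `VERSION_NU_FEATURES` doc comment above, the two snippets it shows can be stitched together on the binary side like this. This is a minimal sketch, assuming the `NU_FEATURES` env var is re-exported by the build script exactly as the doc comment describes; the function name is hypothetical.

```rust
// Hypothetical embedder main.rs: wire the feature list exported by build.rs
// (see the doc comment above) into the `version` command before it can run.
use std::borrow::Cow;

fn init_version_features() {
    // `NU_FEATURES` is the env var the build script re-exported from CARGO_CFG_FEATURE.
    let features: Vec<Cow<'static, str>> = env!("NU_FEATURES")
        .split(',')
        .map(Cow::Borrowed)
        .collect();
    // `set` only fails if the static was already populated; ignoring that is fine here.
    let _ = nu_cmd_lang::VERSION_NU_FEATURES.set(features);
}
```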

@ -221,23 +221,23 @@ impl std::fmt::Debug for DebuggableValue<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self.0 { match self.0 {
Value::Bool { val, .. } => { Value::Bool { val, .. } => {
write!(f, "{:?}", val) write!(f, "{val:?}")
} }
Value::Int { val, .. } => { Value::Int { val, .. } => {
write!(f, "{:?}", val) write!(f, "{val:?}")
} }
Value::Float { val, .. } => { Value::Float { val, .. } => {
write!(f, "{:?}f", val) write!(f, "{val:?}f")
} }
Value::Filesize { val, .. } => { Value::Filesize { val, .. } => {
write!(f, "Filesize({:?})", val) write!(f, "Filesize({val:?})")
} }
Value::Duration { val, .. } => { Value::Duration { val, .. } => {
let duration = std::time::Duration::from_nanos(*val as u64); let duration = std::time::Duration::from_nanos(*val as u64);
write!(f, "Duration({:?})", duration) write!(f, "Duration({duration:?})")
} }
Value::Date { val, .. } => { Value::Date { val, .. } => {
write!(f, "Date({:?})", val) write!(f, "Date({val:?})")
} }
Value::Range { val, .. } => match **val { Value::Range { val, .. } => match **val {
Range::IntRange(range) => match range.end() { Range::IntRange(range) => match range.end() {
@ -280,7 +280,7 @@ impl std::fmt::Debug for DebuggableValue<'_> {
}, },
}, },
Value::String { val, .. } | Value::Glob { val, .. } => { Value::String { val, .. } | Value::Glob { val, .. } => {
write!(f, "{:?}", val) write!(f, "{val:?}")
} }
Value::Record { val, .. } => { Value::Record { val, .. } => {
write!(f, "{{")?; write!(f, "{{")?;
@ -305,22 +305,22 @@ impl std::fmt::Debug for DebuggableValue<'_> {
write!(f, "]") write!(f, "]")
} }
Value::Closure { val, .. } => { Value::Closure { val, .. } => {
write!(f, "Closure({:?})", val) write!(f, "Closure({val:?})")
} }
Value::Nothing { .. } => { Value::Nothing { .. } => {
write!(f, "Nothing") write!(f, "Nothing")
} }
Value::Error { error, .. } => { Value::Error { error, .. } => {
write!(f, "Error({:?})", error) write!(f, "Error({error:?})")
} }
Value::Binary { val, .. } => { Value::Binary { val, .. } => {
write!(f, "Binary({:?})", val) write!(f, "Binary({val:?})")
} }
Value::CellPath { val, .. } => { Value::CellPath { val, .. } => {
write!(f, "CellPath({:?})", val.to_string()) write!(f, "CellPath({:?})", val.to_string())
} }
Value::Custom { val, .. } => { Value::Custom { val, .. } => {
write!(f, "CustomValue({:?})", val) write!(f, "CustomValue({val:?})")
} }
} }
} }

View File

@ -5,7 +5,7 @@ edition = "2024"
license = "MIT" license = "MIT"
name = "nu-cmd-plugin" name = "nu-cmd-plugin"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-plugin" repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-plugin"
version = "0.105.1" version = "0.106.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -13,10 +13,10 @@ version = "0.105.1"
workspace = true workspace = true
[dependencies] [dependencies]
nu-engine = { path = "../nu-engine", version = "0.105.1" } nu-engine = { path = "../nu-engine", version = "0.106.1" }
nu-path = { path = "../nu-path", version = "0.105.1" } nu-path = { path = "../nu-path", version = "0.106.1" }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", features = ["plugin"] } nu-protocol = { path = "../nu-protocol", version = "0.106.1", features = ["plugin"] }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.105.1" } nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.106.1" }
itertools = { workspace = true } itertools = { workspace = true }

View File

@ -5,7 +5,7 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-color-confi
edition = "2024" edition = "2024"
license = "MIT" license = "MIT"
name = "nu-color-config" name = "nu-color-config"
version = "0.105.1" version = "0.106.1"
[lib] [lib]
bench = false bench = false
@ -14,12 +14,12 @@ bench = false
workspace = true workspace = true
[dependencies] [dependencies]
nu-protocol = { path = "../nu-protocol", version = "0.105.1", default-features = false } nu-protocol = { path = "../nu-protocol", version = "0.106.1", default-features = false }
nu-engine = { path = "../nu-engine", version = "0.105.1", default-features = false } nu-engine = { path = "../nu-engine", version = "0.106.1", default-features = false }
nu-json = { path = "../nu-json", version = "0.105.1" } nu-json = { path = "../nu-json", version = "0.106.1" }
nu-ansi-term = { workspace = true } nu-ansi-term = { workspace = true }
serde = { workspace = true, features = ["derive"] } serde = { workspace = true, features = ["derive"] }
[dev-dependencies] [dev-dependencies]
nu-test-support = { path = "../nu-test-support", version = "0.105.1" } nu-test-support = { path = "../nu-test-support", version = "0.106.1" }

View File

@ -5,7 +5,7 @@ edition = "2024"
license = "MIT" license = "MIT"
name = "nu-command" name = "nu-command"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-command" repository = "https://github.com/nushell/nushell/tree/main/crates/nu-command"
version = "0.105.1" version = "0.106.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -16,21 +16,22 @@ bench = false
workspace = true workspace = true
[dependencies] [dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.105.1" }
nu-color-config = { path = "../nu-color-config", version = "0.105.1" }
nu-engine = { path = "../nu-engine", version = "0.105.1", default-features = false }
nu-glob = { path = "../nu-glob", version = "0.105.1" }
nu-json = { path = "../nu-json", version = "0.105.1" }
nu-parser = { path = "../nu-parser", version = "0.105.1" }
nu-path = { path = "../nu-path", version = "0.105.1" }
nu-pretty-hex = { path = "../nu-pretty-hex", version = "0.105.1" }
nu-protocol = { path = "../nu-protocol", version = "0.105.1", default-features = false }
nu-system = { path = "../nu-system", version = "0.105.1" }
nu-table = { path = "../nu-table", version = "0.105.1" }
nu-term-grid = { path = "../nu-term-grid", version = "0.105.1" }
nu-utils = { path = "../nu-utils", version = "0.105.1", default-features = false }
nu-ansi-term = { workspace = true } nu-ansi-term = { workspace = true }
nuon = { path = "../nuon", version = "0.105.1" } nu-cmd-base = { path = "../nu-cmd-base", version = "0.106.1" }
nu-color-config = { path = "../nu-color-config", version = "0.106.1" }
nu-engine = { path = "../nu-engine", version = "0.106.1", default-features = false }
nu-experimental = { path = "../nu-experimental", version = "0.106.1" }
nu-glob = { path = "../nu-glob", version = "0.106.1" }
nu-json = { path = "../nu-json", version = "0.106.1" }
nu-parser = { path = "../nu-parser", version = "0.106.1" }
nu-path = { path = "../nu-path", version = "0.106.1" }
nu-pretty-hex = { path = "../nu-pretty-hex", version = "0.106.1" }
nu-protocol = { path = "../nu-protocol", version = "0.106.1", default-features = false }
nu-system = { path = "../nu-system", version = "0.106.1" }
nu-table = { path = "../nu-table", version = "0.106.1" }
nu-term-grid = { path = "../nu-term-grid", version = "0.106.1" }
nu-utils = { path = "../nu-utils", version = "0.106.1", default-features = false }
nuon = { path = "../nuon", version = "0.106.1" }
alphanumeric-sort = { workspace = true } alphanumeric-sort = { workspace = true }
base64 = { workspace = true } base64 = { workspace = true }
@ -226,8 +227,8 @@ sqlite = ["rusqlite"]
trash-support = ["trash"] trash-support = ["trash"]
[dev-dependencies] [dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.105.1" } nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.106.1" }
nu-test-support = { path = "../nu-test-support", version = "0.105.1" } nu-test-support = { path = "../nu-test-support", version = "0.106.1" }
dirs = { workspace = true } dirs = { workspace = true }
mockito = { workspace = true, default-features = false } mockito = { workspace = true, default-features = false }

View File

@ -89,7 +89,7 @@ impl Command for Histogram {
"frequency-column-name can't be {}", "frequency-column-name can't be {}",
forbidden_column_names forbidden_column_names
.iter() .iter()
.map(|val| format!("'{}'", val)) .map(|val| format!("'{val}'"))
.collect::<Vec<_>>() .collect::<Vec<_>>()
.join(", ") .join(", ")
), ),

View File

@ -142,7 +142,7 @@ fn into_binary(
} }
} }
fn action(input: &Value, _args: &Arguments, span: Span) -> Value { fn action(input: &Value, args: &Arguments, span: Span) -> Value {
let value = match input { let value = match input {
Value::Binary { .. } => input.clone(), Value::Binary { .. } => input.clone(),
Value::Int { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span), Value::Int { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span),
@ -168,7 +168,7 @@ fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
), ),
}; };
if _args.compact { if args.compact {
let val_span = value.span(); let val_span = value.span();
if let Value::Binary { val, .. } = value { if let Value::Binary { val, .. } = value {
let val = if cfg!(target_endian = "little") { let val = if cfg!(target_endian = "little") {

View File

@ -678,7 +678,7 @@ fn parse_value_from_record_as_u32(
Value::Int { val, .. } => { Value::Int { val, .. } => {
if *val < 0 || *val > u32::MAX as i64 { if *val < 0 || *val > u32::MAX as i64 {
return Err(ShellError::IncorrectValue { return Err(ShellError::IncorrectValue {
msg: format!("incorrect value for {}", col), msg: format!("incorrect value for {col}"),
val_span: *head, val_span: *head,
call_span: *span, call_span: *span,
}); });

View File

@ -368,8 +368,7 @@ fn merge_record(record: &Record, head: Span, span: Span) -> Result<Value, ShellE
if !ALLOWED_SIGNS.contains(&val.as_str()) { if !ALLOWED_SIGNS.contains(&val.as_str()) {
let allowed_signs = ALLOWED_SIGNS.join(", "); let allowed_signs = ALLOWED_SIGNS.join(", ");
return Err(ShellError::IncorrectValue { return Err(ShellError::IncorrectValue {
msg: format!("Invalid sign. Allowed signs are {}", allowed_signs) msg: format!("Invalid sign. Allowed signs are {allowed_signs}").to_string(),
.to_string(),
val_span: sign.span(), val_span: sign.span(),
call_span: head, call_span: head,
}); });

View File

@ -122,8 +122,8 @@ impl Table {
.conn .conn
.query_row(&table_exists_query, [], |row| row.get(0)) .query_row(&table_exists_query, [], |row| row.get(0))
.map_err(|err| ShellError::GenericError { .map_err(|err| ShellError::GenericError {
error: format!("{:#?}", err), error: format!("{err:#?}"),
msg: format!("{:#?}", err), msg: format!("{err:#?}"),
span: None, span: None,
help: None, help: None,
inner: Vec::new(), inner: Vec::new(),
@ -241,7 +241,7 @@ fn insert_in_transaction(
let tx = table.try_init(&first_val)?; let tx = table.try_init(&first_val)?;
for stream_value in stream { for stream_value in stream {
if let Err(err) = signals.check(span) { if let Err(err) = signals.check(&span) {
tx.rollback().map_err(|e| ShellError::GenericError { tx.rollback().map_err(|e| ShellError::GenericError {
error: "Failed to rollback SQLite transaction".into(), error: "Failed to rollback SQLite transaction".into(),
msg: e.to_string(), msg: e.to_string(),
@ -257,7 +257,7 @@ fn insert_in_transaction(
let insert_statement = format!( let insert_statement = format!(
"INSERT INTO [{}] ({}) VALUES ({})", "INSERT INTO [{}] ({}) VALUES ({})",
table_name, table_name,
Itertools::intersperse(val.columns().map(|c| format!("`{}`", c)), ", ".to_string()) Itertools::intersperse(val.columns().map(|c| format!("`{c}`")), ", ".to_string())
.collect::<String>(), .collect::<String>(),
Itertools::intersperse(itertools::repeat_n("?", val.len()), ", ").collect::<String>(), Itertools::intersperse(itertools::repeat_n("?", val.len()), ", ").collect::<String>(),
); );
@ -353,6 +353,7 @@ fn nu_value_to_sqlite_type(val: &Value) -> Result<&'static str, ShellError> {
// intentionally enumerated so that any future types get handled // intentionally enumerated so that any future types get handled
Type::Any Type::Any
| Type::Block
| Type::CellPath | Type::CellPath
| Type::Closure | Type::Closure
| Type::Custom(_) | Type::Custom(_)
@ -381,7 +382,7 @@ fn get_columns_with_sqlite_types(
.map(|name| (format!("`{}`", name.0), name.1)) .map(|name| (format!("`{}`", name.0), name.1))
.any(|(name, _)| name == *c) .any(|(name, _)| name == *c)
{ {
columns.push((format!("`{}`", c), nu_value_to_sqlite_type(v)?)); columns.push((format!("`{c}`"), nu_value_to_sqlite_type(v)?));
} }
} }

View File

@ -112,16 +112,31 @@ impl SQLiteDatabase {
if self.path == PathBuf::from(MEMORY_DB) { if self.path == PathBuf::from(MEMORY_DB) {
open_connection_in_memory_custom() open_connection_in_memory_custom()
} else { } else {
Connection::open(&self.path).map_err(|e| ShellError::GenericError { let conn = Connection::open(&self.path).map_err(|e| ShellError::GenericError {
error: "Failed to open SQLite database from open_connection".into(), error: "Failed to open SQLite database from open_connection".into(),
msg: e.to_string(), msg: e.to_string(),
span: None, span: None,
help: None, help: None,
inner: vec![], inner: vec![],
}) })?;
conn.busy_handler(Some(SQLiteDatabase::sleeper))
.map_err(|e| ShellError::GenericError {
error: "Failed to set busy handler for SQLite database".into(),
msg: e.to_string(),
span: None,
help: None,
inner: vec![],
})?;
Ok(conn)
} }
} }
fn sleeper(attempts: i32) -> bool {
log::warn!("SQLITE_BUSY, retrying after 250ms (attempt {})", attempts);
std::thread::sleep(std::time::Duration::from_millis(250));
true
}
pub fn get_tables(&self, conn: &Connection) -> Result<Vec<DbTable>, SqliteError> { pub fn get_tables(&self, conn: &Connection) -> Result<Vec<DbTable>, SqliteError> {
let mut table_names = let mut table_names =
conn.prepare("SELECT name FROM sqlite_master WHERE type = 'table'")?; conn.prepare("SELECT name FROM sqlite_master WHERE type = 'table'")?;
@ -158,7 +173,7 @@ impl SQLiteDatabase {
filename: String, filename: String,
) -> Result<(), SqliteError> { ) -> Result<(), SqliteError> {
//vacuum main into 'c:\\temp\\foo.db' //vacuum main into 'c:\\temp\\foo.db'
conn.execute(&format!("vacuum main into '{}'", filename), [])?; conn.execute(&format!("vacuum main into '{filename}'"), [])?;
Ok(()) Ok(())
} }
@ -573,7 +588,7 @@ fn prepared_statement_to_nu_list(
let mut row_values = vec![]; let mut row_values = vec![];
for row_result in row_results { for row_result in row_results {
signals.check(call_span)?; signals.check(&call_span)?;
if let Ok(row_value) = row_result { if let Ok(row_value) = row_result {
row_values.push(row_value); row_values.push(row_value);
} }
@ -599,7 +614,7 @@ fn prepared_statement_to_nu_list(
let mut row_values = vec![]; let mut row_values = vec![];
for row_result in row_results { for row_result in row_results {
signals.check(call_span)?; signals.check(&call_span)?;
if let Ok(row_value) = row_result { if let Ok(row_value) = row_result {
row_values.push(row_value); row_values.push(row_value);
} }
@ -668,13 +683,23 @@ pub fn convert_sqlite_value_to_nu_value(value: ValueRef, span: Span) -> Value {
pub fn open_connection_in_memory_custom() -> Result<Connection, ShellError> { pub fn open_connection_in_memory_custom() -> Result<Connection, ShellError> {
let flags = OpenFlags::default(); let flags = OpenFlags::default();
Connection::open_with_flags(MEMORY_DB, flags).map_err(|e| ShellError::GenericError { let conn =
error: "Failed to open SQLite custom connection in memory".into(), Connection::open_with_flags(MEMORY_DB, flags).map_err(|e| ShellError::GenericError {
msg: e.to_string(), error: "Failed to open SQLite custom connection in memory".into(),
span: Some(Span::test_data()), msg: e.to_string(),
help: None, span: Some(Span::test_data()),
inner: vec![], help: None,
}) inner: vec![],
})?;
conn.busy_handler(Some(SQLiteDatabase::sleeper))
.map_err(|e| ShellError::GenericError {
error: "Failed to set busy handler for SQLite custom connection in memory".into(),
msg: e.to_string(),
span: Some(Span::test_data()),
help: None,
inner: vec![],
})?;
Ok(conn)
} }
pub fn open_connection_in_memory() -> Result<Connection, ShellError> { pub fn open_connection_in_memory() -> Result<Connection, ShellError> {

View File
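For context on the busy handler added above: rusqlite lets you register a callback that SQLite invokes each time a statement hits a locked database; returning `true` asks SQLite to retry, returning `false` surfaces `SQLITE_BUSY` to the caller. A small standalone sketch of the same pattern (illustrative only, not the project's code):

```rust
use rusqlite::Connection;

// A busy handler in the same shape as `sleeper` above: log, back off, retry.
fn sleeper(attempts: i32) -> bool {
    eprintln!("SQLITE_BUSY, retrying after 250ms (attempt {attempts})");
    std::thread::sleep(std::time::Duration::from_millis(250));
    true
}

fn open_with_retry(path: &str) -> rusqlite::Result<Connection> {
    let conn = Connection::open(path)?;
    // Install the handler once per connection; all subsequent statements use it.
    conn.busy_handler(Some(sleeper))?;
    Ok(conn)
}
```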

@ -0,0 +1,64 @@
use nu_engine::command_prelude::*;
use nu_experimental::Status;
#[derive(Clone)]
pub struct DebugExperimentalOptions;
impl Command for DebugExperimentalOptions {
fn name(&self) -> &str {
"debug experimental-options"
}
fn signature(&self) -> Signature {
Signature::new(self.name())
.input_output_type(
Type::Nothing,
Type::Table(Box::from([
(String::from("identifier"), Type::String),
(String::from("enabled"), Type::Bool),
(String::from("status"), Type::String),
(String::from("description"), Type::String),
])),
)
.add_help()
.category(Category::Debug)
}
fn description(&self) -> &str {
"Show all experimental options."
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
Ok(PipelineData::Value(
Value::list(
nu_experimental::ALL
.iter()
.map(|option| {
Value::record(
nu_protocol::record! {
"identifier" => Value::string(option.identifier(), call.head),
"enabled" => Value::bool(option.get(), call.head),
"status" => Value::string(match option.status() {
Status::OptIn => "opt-in",
Status::OptOut => "opt-out",
Status::DeprecatedDiscard => "deprecated-discard",
Status::DeprecatedDefault => "deprecated-default"
}, call.head),
"description" => Value::string(option.description(), call.head),
},
call.head,
)
})
.collect(),
call.head,
),
None,
))
}
}

View File

@ -43,6 +43,17 @@ impl Command for Metadata {
let arg = call.positional_nth(stack, 0); let arg = call.positional_nth(stack, 0);
let head = call.head; let head = call.head;
if !matches!(input, PipelineData::Empty) {
if let Some(arg_expr) = arg {
return Err(ShellError::IncompatibleParameters {
left_message: "pipeline input was provided".into(),
left_span: head,
right_message: "but a positional metadata expression was also given".into(),
right_span: arg_expr.span,
});
}
}
match arg { match arg {
Some(Expression { Some(Expression {
expr: Expr::FullCellPath(full_cell_path), expr: Expr::FullCellPath(full_cell_path),
@ -56,7 +67,6 @@ impl Command for Metadata {
.. ..
} => { } => {
let origin = stack.get_var_with_origin(*var_id, *span)?; let origin = stack.get_var_with_origin(*var_id, *span)?;
Ok(build_metadata_record_value( Ok(build_metadata_record_value(
&origin, &origin,
input.metadata().as_ref(), input.metadata().as_ref(),
@ -87,10 +97,9 @@ impl Command for Metadata {
.into_pipeline_data(), .into_pipeline_data(),
) )
} }
None => Ok( None => {
Value::record(build_metadata_record(input.metadata().as_ref(), head), head) Ok(Value::record(build_metadata_record(&input, head), head).into_pipeline_data())
.into_pipeline_data(), }
),
} }
} }
@ -116,19 +125,7 @@ fn build_metadata_record_value(
head: Span, head: Span,
) -> Value { ) -> Value {
let mut record = Record::new(); let mut record = Record::new();
record.push("span", arg.span().into_value(head));
let span = arg.span();
record.push(
"span",
Value::record(
record! {
"start" => Value::int(span.start as i64,span),
"end" => Value::int(span.end as i64, span),
},
head,
),
);
Value::record(extend_record_with_metadata(record, metadata, head), head) Value::record(extend_record_with_metadata(record, metadata, head), head)
} }

View File

@ -42,10 +42,7 @@ impl Command for MetadataAccess {
// `ClosureEvalOnce` is not used as it uses `Stack::captures_to_stack` rather than // `ClosureEvalOnce` is not used as it uses `Stack::captures_to_stack` rather than
// `Stack::captures_to_stack_preserve_out_dest`. This command shouldn't collect streams // `Stack::captures_to_stack_preserve_out_dest`. This command shouldn't collect streams
let mut callee_stack = caller_stack.captures_to_stack_preserve_out_dest(closure.captures); let mut callee_stack = caller_stack.captures_to_stack_preserve_out_dest(closure.captures);
let metadata_record = Value::record( let metadata_record = Value::record(build_metadata_record(&input, call.head), call.head);
build_metadata_record(input.metadata().as_ref(), call.head),
call.head,
);
if let Some(var_id) = block.signature.get_positional(0).and_then(|var| var.var_id) { if let Some(var_id) = block.signature.get_positional(0).and_then(|var| var.var_id) {
callee_stack.add_var(var_id, metadata_record) callee_stack.add_var(var_id, metadata_record)
@ -58,12 +55,10 @@ impl Command for MetadataAccess {
fn examples(&self) -> Vec<Example> { fn examples(&self) -> Vec<Example> {
vec![Example { vec![Example {
description: "Access metadata and data from a stream together", description: "Access metadata and data from a stream together",
example: r#"{foo: bar} | to json --raw | metadata access {|meta| {in: $in, meta: $meta}}"#, example: r#"{foo: bar} | to json --raw | metadata access {|meta| {in: $in, content: $meta.content_type}}"#,
result: Some(Value::test_record(record! { result: Some(Value::test_record(record! {
"in" => Value::test_string(r#"{"foo":"bar"}"#), "in" => Value::test_string(r#"{"foo":"bar"}"#),
"meta" => Value::test_record(record! { "content" => Value::test_string(r#"application/json"#)
"content_type" => Value::test_string(r#"application/json"#)
})
})), })),
}] }]
} }

View File

@ -63,7 +63,18 @@ impl Command for MetadataSet {
match (ds_fp, ds_ls) { match (ds_fp, ds_ls) {
(Some(path), false) => metadata.data_source = DataSource::FilePath(path.into()), (Some(path), false) => metadata.data_source = DataSource::FilePath(path.into()),
(None, true) => metadata.data_source = DataSource::Ls, (None, true) => metadata.data_source = DataSource::Ls,
(Some(_), true) => (), // TODO: error here (Some(_), true) => {
return Err(ShellError::IncompatibleParameters {
left_message: "cannot use `--datasource-filepath`".into(),
left_span: call
.get_flag_span(stack, "datasource-filepath")
.expect("has flag"),
right_message: "with `--datasource-ls`".into(),
right_span: call
.get_flag_span(stack, "datasource-ls")
.expect("has flag"),
});
}
(None, false) => (), (None, false) => (),
} }
@ -79,15 +90,13 @@ impl Command for MetadataSet {
}, },
Example { Example {
description: "Set the metadata of a file path", description: "Set the metadata of a file path",
example: "'crates' | metadata set --datasource-filepath $'(pwd)/crates' | metadata", example: "'crates' | metadata set --datasource-filepath $'(pwd)/crates'",
result: None, result: None,
}, },
Example { Example {
description: "Set the metadata of a file path", description: "Set the metadata of a file path",
example: "'crates' | metadata set --content-type text/plain | metadata", example: "'crates' | metadata set --content-type text/plain | metadata | get content_type",
result: Some(Value::test_record(record! { result: Some(Value::test_string("text/plain")),
"content_type" => Value::test_string("text/plain"),
})),
}, },
] ]
} }

View File

@ -1,6 +1,7 @@
mod ast; mod ast;
mod debug_; mod debug_;
mod env; mod env;
mod experimental_options;
mod explain; mod explain;
mod info; mod info;
mod inspect; mod inspect;
@ -21,6 +22,7 @@ mod view_span;
pub use ast::Ast; pub use ast::Ast;
pub use debug_::Debug; pub use debug_::Debug;
pub use env::DebugEnv; pub use env::DebugEnv;
pub use experimental_options::DebugExperimentalOptions;
pub use explain::Explain; pub use explain::Explain;
pub use info::DebugInfo; pub use info::DebugInfo;
pub use inspect::Inspect; pub use inspect::Inspect;

View File

@ -1,4 +1,4 @@
use nu_protocol::{DataSource, PipelineMetadata, Record, Span, Value}; use nu_protocol::{DataSource, IntoValue, PipelineData, PipelineMetadata, Record, Span, Value};
pub fn extend_record_with_metadata( pub fn extend_record_with_metadata(
mut record: Record, mut record: Record,
@ -29,6 +29,10 @@ pub fn extend_record_with_metadata(
record record
} }
pub fn build_metadata_record(metadata: Option<&PipelineMetadata>, head: Span) -> Record { pub fn build_metadata_record(pipeline: &PipelineData, head: Span) -> Record {
extend_record_with_metadata(Record::new(), metadata, head) let mut record = Record::new();
if let Some(span) = pipeline.span() {
record.insert("span", span.into_value(head));
}
extend_record_with_metadata(record, pipeline.metadata().as_ref(), head)
} }

View File

@ -126,10 +126,10 @@ impl Command for ViewSource {
} }
let _ = write!(&mut final_contents, "--{}", n.long); let _ = write!(&mut final_contents, "--{}", n.long);
if let Some(short) = n.short { if let Some(short) = n.short {
let _ = write!(&mut final_contents, "(-{})", short); let _ = write!(&mut final_contents, "(-{short})");
} }
if let Some(arg) = &n.arg { if let Some(arg) = &n.arg {
let _ = write!(&mut final_contents, ": {}", arg); let _ = write!(&mut final_contents, ": {arg}");
} }
final_contents.push(' '); final_contents.push(' ');
} }
@ -146,7 +146,7 @@ impl Command for ViewSource {
let mut c = 0; let mut c = 0;
for (insig, outsig) in type_signatures { for (insig, outsig) in type_signatures {
c += 1; c += 1;
let s = format!("{} -> {}", insig, outsig); let s = format!("{insig} -> {outsig}");
final_contents.push_str(&s); final_contents.push_str(&s);
if c != len { if c != len {
final_contents.push_str(", ") final_contents.push_str(", ")

View File

@ -153,6 +153,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
Ast, Ast,
Debug, Debug,
DebugEnv, DebugEnv,
DebugExperimentalOptions,
DebugInfo, DebugInfo,
DebugProfile, DebugProfile,
Explain, Explain,
@ -188,6 +189,9 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
// Strings // Strings
bind_command! { bind_command! {
Ansi,
AnsiLink,
AnsiStrip,
Char, Char,
Decode, Decode,
Encode, Encode,
@ -250,9 +254,6 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
// Platform // Platform
#[cfg(feature = "os")] #[cfg(feature = "os")]
bind_command! { bind_command! {
Ansi,
AnsiLink,
AnsiStrip,
Clear, Clear,
Du, Du,
Input, Input,

View File

@ -85,7 +85,7 @@ impl Command for Glob {
result: None, result: None,
}, },
Example { Example {
description: "Search for files for folders that do not begin with c, C, b, M, or s", description: "Search for files or folders that do not begin with c, C, b, M, or s",
example: r#"glob "[!cCbMs]*""#, example: r#"glob "[!cCbMs]*""#,
result: None, result: None,
}, },
@ -329,7 +329,7 @@ fn glob_to_value(
) -> ListStream { ) -> ListStream {
let map_signals = signals.clone(); let map_signals = signals.clone();
let result = glob_results.filter_map(move |entry| { let result = glob_results.filter_map(move |entry| {
if let Err(err) = map_signals.check(span) { if let Err(err) = map_signals.check(&span) {
return Some(Value::error(err, span)); return Some(Value::error(err, span));
}; };
let file_type = entry.file_type(); let file_type = entry.file_type();

View File

@ -341,7 +341,7 @@ fn ls_for_one_pattern(
let mut paths_peek = paths.peekable(); let mut paths_peek = paths.peekable();
let no_matches = paths_peek.peek().is_none(); let no_matches = paths_peek.peek().is_none();
signals.check(call_span)?; signals.check(&call_span)?;
if no_matches { if no_matches {
return Err(ShellError::GenericError { return Err(ShellError::GenericError {
error: format!("No matches found for {:?}", path.item), error: format!("No matches found for {:?}", path.item),
@ -979,14 +979,14 @@ fn read_dir(
.read_dir() .read_dir()
.map_err(|err| IoError::new(err, span, f.clone()))? .map_err(|err| IoError::new(err, span, f.clone()))?
.map(move |d| { .map(move |d| {
signals_clone.check(span)?; signals_clone.check(&span)?;
d.map(|r| r.path()) d.map(|r| r.path())
.map_err(|err| IoError::new(err, span, f.clone())) .map_err(|err| IoError::new(err, span, f.clone()))
.map_err(ShellError::from) .map_err(ShellError::from)
}); });
if !use_threads { if !use_threads {
let mut collected = items.collect::<Vec<_>>(); let mut collected = items.collect::<Vec<_>>();
signals.check(span)?; signals.check(&span)?;
collected.sort_by(|a, b| match (a, b) { collected.sort_by(|a, b| match (a, b) {
(Ok(a), Ok(b)) => a.cmp(b), (Ok(a), Ok(b)) => a.cmp(b),
(Ok(_), Err(_)) => Ordering::Greater, (Ok(_), Err(_)) => Ordering::Greater,

View File

@ -112,8 +112,8 @@ impl Command for Mktemp {
.map_err(|_| ShellError::NonUtf8 { span })?, .map_err(|_| ShellError::NonUtf8 { span })?,
Err(e) => { Err(e) => {
return Err(ShellError::GenericError { return Err(ShellError::GenericError {
error: format!("{}", e), error: format!("{e}"),
msg: format!("{}", e), msg: format!("{e}"),
span: None, span: None,
help: None, help: None,
inner: vec![], inner: vec![],

View File

@ -198,7 +198,7 @@ impl Command for Open {
let converter = exts_opt.and_then(|exts| { let converter = exts_opt.and_then(|exts| {
exts.iter().find_map(|ext| { exts.iter().find_map(|ext| {
engine_state engine_state
.find_decl(format!("from {}", ext).as_bytes(), &[]) .find_decl(format!("from {ext}").as_bytes(), &[])
.map(|id| (id, ext.to_string())) .map(|id| (id, ext.to_string()))
}) })
}); });
@ -314,7 +314,7 @@ fn extract_extensions(filename: &str) -> Vec<String> {
if current_extension.is_empty() { if current_extension.is_empty() {
current_extension.push_str(part); current_extension.push_str(part);
} else { } else {
current_extension = format!("{}.{}", part, current_extension); current_extension = format!("{part}.{current_extension}");
} }
extensions.push(current_extension.clone()); extensions.push(current_extension.clone());
} }

View File

@ -454,7 +454,7 @@ fn rm(
}); });
for result in iter { for result in iter {
engine_state.signals().check(call.head)?; engine_state.signals().check(&call.head)?;
match result { match result {
Ok(None) => {} Ok(None) => {}
Ok(Some(msg)) => eprintln!("{msg}"), Ok(Some(msg)) => eprintln!("{msg}"),

View File

@ -91,7 +91,8 @@ impl Command for Save {
PipelineData::ByteStream(stream, metadata) => { PipelineData::ByteStream(stream, metadata) => {
check_saving_to_source_file(metadata.as_ref(), &path, stderr_path.as_ref())?; check_saving_to_source_file(metadata.as_ref(), &path, stderr_path.as_ref())?;
let (file, stderr_file) = get_files(&path, stderr_path.as_ref(), append, force)?; let (file, stderr_file) =
get_files(engine_state, &path, stderr_path.as_ref(), append, force)?;
let size = stream.known_size(); let size = stream.known_size();
let signals = engine_state.signals(); let signals = engine_state.signals();
@ -201,7 +202,8 @@ impl Command for Save {
stderr_path.as_ref(), stderr_path.as_ref(),
)?; )?;
let (mut file, _) = get_files(&path, stderr_path.as_ref(), append, force)?; let (mut file, _) =
get_files(engine_state, &path, stderr_path.as_ref(), append, force)?;
for val in ls { for val in ls {
file.write_all(&value_to_bytes(val)?) file.write_all(&value_to_bytes(val)?)
.map_err(&from_io_error)?; .map_err(&from_io_error)?;
@ -226,7 +228,8 @@ impl Command for Save {
input_to_bytes(input, Path::new(&path.item), raw, engine_state, stack, span)?; input_to_bytes(input, Path::new(&path.item), raw, engine_state, stack, span)?;
// Only open file after successful conversion // Only open file after successful conversion
let (mut file, _) = get_files(&path, stderr_path.as_ref(), append, force)?; let (mut file, _) =
get_files(engine_state, &path, stderr_path.as_ref(), append, force)?;
file.write_all(&bytes).map_err(&from_io_error)?; file.write_all(&bytes).map_err(&from_io_error)?;
file.flush().map_err(&from_io_error)?; file.flush().map_err(&from_io_error)?;
@ -422,13 +425,14 @@ fn prepare_path(
} }
} }
fn open_file(path: &Path, span: Span, append: bool) -> Result<File, ShellError> { fn open_file(
let file: Result<File, nu_protocol::shell_error::io::ErrorKind> = match (append, path.exists()) engine_state: &EngineState,
{ path: &Path,
(true, true) => std::fs::OpenOptions::new() span: Span,
.append(true) append: bool,
.open(path) ) -> Result<File, ShellError> {
.map_err(|err| err.into()), let file: std::io::Result<File> = match (append, path.exists()) {
(true, true) => std::fs::OpenOptions::new().append(true).open(path),
_ => { _ => {
// This is a temporary solution until `std::fs::File::create` is fixed on Windows (rust-lang/rust#134893) // This is a temporary solution until `std::fs::File::create` is fixed on Windows (rust-lang/rust#134893)
// A TOCTOU problem exists here, which may cause wrong error message to be shown // A TOCTOU problem exists here, which may cause wrong error message to be shown
@ -438,22 +442,51 @@ fn open_file(path: &Path, span: Span, append: bool) -> Result<File, ShellError>
deprecated, deprecated,
reason = "we don't get a IsADirectory error, so we need to provide it" reason = "we don't get a IsADirectory error, so we need to provide it"
)] )]
Err(nu_protocol::shell_error::io::ErrorKind::from_std( Err(std::io::ErrorKind::IsADirectory.into())
std::io::ErrorKind::IsADirectory,
))
} else { } else {
std::fs::File::create(path).map_err(|err| err.into()) std::fs::File::create(path)
} }
#[cfg(not(target_os = "windows"))] #[cfg(not(target_os = "windows"))]
std::fs::File::create(path).map_err(|err| err.into()) std::fs::File::create(path)
} }
}; };
file.map_err(|err_kind| ShellError::Io(IoError::new(err_kind, span, PathBuf::from(path)))) match file {
Ok(file) => Ok(file),
Err(err) => {
// In case of NotFound, search for the missing parent directory.
// This also presents a TOCTOU (or TOUTOC, technically?)
if err.kind() == std::io::ErrorKind::NotFound {
if let Some(missing_component) =
path.ancestors().skip(1).filter(|dir| !dir.exists()).last()
{
// By looking at the postfix to remove, rather than the prefix
// to keep, we are able to handle relative paths too.
let components_to_remove = path
.strip_prefix(missing_component)
.expect("Stripping ancestor from a path should never fail")
.as_os_str()
.as_encoded_bytes();
return Err(ShellError::Io(IoError::new(
ErrorKind::DirectoryNotFound,
engine_state
.span_match_postfix(span, components_to_remove)
.map(|(pre, _post)| pre)
.unwrap_or(span),
PathBuf::from(missing_component),
)));
}
}
Err(ShellError::Io(IoError::new(err, span, PathBuf::from(path))))
}
}
} }
/// Get output file and optional stderr file /// Get output file and optional stderr file
fn get_files( fn get_files(
engine_state: &EngineState,
path: &Spanned<PathBuf>, path: &Spanned<PathBuf>,
stderr_path: Option<&Spanned<PathBuf>>, stderr_path: Option<&Spanned<PathBuf>>,
append: bool, append: bool,
@ -467,7 +500,7 @@ fn get_files(
.transpose()?; .transpose()?;
// Only if both files can be used open and possibly truncate them // Only if both files can be used open and possibly truncate them
let file = open_file(path, path_span, append)?; let file = open_file(engine_state, path, path_span, append)?;
let stderr_file = stderr_path_and_span let stderr_file = stderr_path_and_span
.map(|(stderr_path, stderr_path_span)| { .map(|(stderr_path, stderr_path_span)| {
@ -480,7 +513,7 @@ fn get_files(
inner: vec![], inner: vec![],
}) })
} else { } else {
open_file(stderr_path, stderr_path_span, append) open_file(engine_state, stderr_path, stderr_path_span, append)
} }
}) })
.transpose()?; .transpose()?;
@ -510,7 +543,7 @@ fn stream_to_file(
let mut reader = BufReader::new(source); let mut reader = BufReader::new(source);
let res = loop { let res = loop {
if let Err(err) = signals.check(span) { if let Err(err) = signals.check(&span) {
bar.abandoned_msg("# Cancelled #".to_owned()); bar.abandoned_msg("# Cancelled #".to_owned());
return Err(err); return Err(err);
} }

View File
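The `NotFound` branch added to `open_file` above walks the path's ancestors to find the directory that is actually missing, so the error can point at it instead of at the full file path. A self-contained sketch of that walk (function name hypothetical):

```rust
use std::path::Path;

// Walk from the full path toward the root and keep the last missing ancestor,
// i.e. the outermost directory that does not exist yet.
fn first_missing_dir(path: &Path) -> Option<&Path> {
    // `ancestors()` yields the path itself first, so skip it.
    path.ancestors().skip(1).filter(|dir| !dir.exists()).last()
}

fn main() {
    let p = Path::new("/tmp/does/not/exist/file.txt");
    if let Some(missing) = first_missing_dir(p) {
        eprintln!("directory not found: {}", missing.display());
    }
}
```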

@ -49,7 +49,8 @@ impl Command for Start {
} }
// If it's not a URL, treat it as a file path // If it's not a URL, treat it as a file path
let cwd = engine_state.cwd(Some(stack))?; let cwd = engine_state.cwd(Some(stack))?;
let full_path = cwd.join(path_no_whitespace); let full_path = nu_path::expand_path_with(path_no_whitespace, &cwd, true);
// Check if the path exists or if it's a valid file/directory // Check if the path exists or if it's a valid file/directory
if full_path.exists() { if full_path.exists() {
open_path(full_path, engine_state, stack, path.span)?; open_path(full_path, engine_state, stack, path.span)?;

View File
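The `start` fix above replaces `cwd.join(...)` with `nu_path::expand_path_with(..., true)`, whose third argument enables tilde expansion, so a path like `~/notes.txt` resolves to the user's home directory instead of keeping a literal `~`. A minimal sketch of the call as used in the diff (the surrounding function is hypothetical):

```rust
use std::path::{Path, PathBuf};

// Resolve a user-supplied path the way `start` now does: relative paths resolve
// against `cwd`, and a leading `~` is expanded rather than joined verbatim.
fn resolve_arg(arg: &str, cwd: &Path) -> PathBuf {
    nu_path::expand_path_with(arg, cwd, true)
}
```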

@ -272,8 +272,8 @@ impl Command for UCp {
uu_cp::Error::NotAllFilesCopied => {} uu_cp::Error::NotAllFilesCopied => {}
_ => { _ => {
return Err(ShellError::GenericError { return Err(ShellError::GenericError {
error: format!("{}", error), error: format!("{error}"),
msg: format!("{}", error), msg: format!("{error}"),
span: None, span: None,
help: None, help: None,
inner: vec![], inner: vec![],
@ -373,7 +373,7 @@ fn parse_and_set_attribute(
"xattr" => &mut attribute.xattr, "xattr" => &mut attribute.xattr,
_ => { _ => {
return Err(ShellError::IncompatibleParametersSingle { return Err(ShellError::IncompatibleParametersSingle {
msg: format!("--preserve flag got an unexpected attribute \"{}\"", val), msg: format!("--preserve flag got an unexpected attribute \"{val}\""),
span: value.span(), span: value.span(),
}); });
} }

View File

@ -77,8 +77,8 @@ impl Command for UMkdir {
for dir in directories { for dir in directories {
if let Err(error) = mkdir(&dir, IS_RECURSIVE, get_mode(), is_verbose) { if let Err(error) = mkdir(&dir, IS_RECURSIVE, get_mode(), is_verbose) {
return Err(ShellError::GenericError { return Err(ShellError::GenericError {
error: format!("{}", error), error: format!("{error}"),
msg: format!("{}", error), msg: format!("{error}"),
span: None, span: None,
help: None, help: None,
inner: vec![], inner: vec![],

View File

@ -195,8 +195,8 @@ impl Command for UMv {
}; };
if let Err(error) = uu_mv::mv(&files, &options) { if let Err(error) = uu_mv::mv(&files, &options) {
return Err(ShellError::GenericError { return Err(ShellError::GenericError {
error: format!("{}", error), error: format!("{error}"),
msg: format!("{}", error), msg: format!("{error}"),
span: None, span: None,
help: None, help: None,
inner: Vec::new(), inner: Vec::new(),

View File

@ -220,7 +220,7 @@ impl Command for UTouch {
inner: Vec::new(), inner: Vec::new(),
}, },
TouchError::InvalidDateFormat(date) => ShellError::IncorrectValue { TouchError::InvalidDateFormat(date) => ShellError::IncorrectValue {
msg: format!("Invalid date: {}", date), msg: format!("Invalid date: {date}"),
val_span: date_span.expect("touch should've been given a date"), val_span: date_span.expect("touch should've been given a date"),
call_span: call.head, call_span: call.head,
}, },

View File

@ -6,11 +6,7 @@ use notify_debouncer_full::{
}, },
}; };
use nu_engine::{ClosureEval, command_prelude::*}; use nu_engine::{ClosureEval, command_prelude::*};
use nu_protocol::{ use nu_protocol::{engine::Closure, report_shell_error, shell_error::io::IoError};
engine::{Closure, StateWorkingSet},
format_shell_error,
shell_error::io::IoError,
};
use std::{ use std::{
path::PathBuf, path::PathBuf,
sync::mpsc::{RecvTimeoutError, channel}, sync::mpsc::{RecvTimeoutError, channel},
@ -203,14 +199,9 @@ impl Command for Watch {
.run_with_input(PipelineData::Empty); .run_with_input(PipelineData::Empty);
match result { match result {
Ok(val) => { Ok(val) => val.print_table(engine_state, stack, false, false)?,
val.print_table(engine_state, stack, false, false)?; Err(err) => report_shell_error(engine_state, &err),
} };
Err(err) => {
let working_set = StateWorkingSet::new(engine_state);
eprintln!("{}", format_shell_error(&working_set, &err));
}
}
} }
Ok(()) Ok(())

View File

@ -2,7 +2,7 @@ use std::{borrow::Cow, ops::Deref};
use nu_engine::{ClosureEval, command_prelude::*}; use nu_engine::{ClosureEval, command_prelude::*};
use nu_protocol::{ use nu_protocol::{
ListStream, Signals, ListStream, ReportMode, ShellWarning, Signals,
ast::{Expr, Expression}, ast::{Expr, Expression},
report_shell_warning, report_shell_warning,
}; };
@ -82,7 +82,7 @@ impl Command for Default {
}, },
Example { Example {
description: "Get the env value of `MY_ENV` with a default value 'abc' if not present", description: "Get the env value of `MY_ENV` with a default value 'abc' if not present",
example: "$env | get --ignore-errors MY_ENV | default 'abc'", example: "$env | get --optional MY_ENV | default 'abc'",
result: Some(Value::test_string("abc")), result: Some(Value::test_string("abc")),
}, },
Example { Example {
@ -213,7 +213,7 @@ fn default(
|| (default_when_empty || (default_when_empty
&& matches!(input, PipelineData::Value(ref value, _) if value.is_empty())) && matches!(input, PipelineData::Value(ref value, _) if value.is_empty()))
{ {
default_value.pipeline_data() default_value.single_run_pipeline_data()
} else if default_when_empty && matches!(input, PipelineData::ListStream(..)) { } else if default_when_empty && matches!(input, PipelineData::ListStream(..)) {
let PipelineData::ListStream(ls, metadata) = input else { let PipelineData::ListStream(ls, metadata) = input else {
unreachable!() unreachable!()
@ -221,7 +221,7 @@ fn default(
let span = ls.span(); let span = ls.span();
let mut stream = ls.into_inner().peekable(); let mut stream = ls.into_inner().peekable();
if stream.peek().is_none() { if stream.peek().is_none() {
return default_value.pipeline_data(); return default_value.single_run_pipeline_data();
} }
// stream's internal state already preserves the original signals config, so if this // stream's internal state already preserves the original signals config, so if this
@ -278,8 +278,14 @@ impl DefaultValue {
} }
} }
fn pipeline_data(&mut self) -> Result<PipelineData, ShellError> { /// Used when we know the value won't need to be cached to allow streaming.
self.value().map(|x| x.into_pipeline_data()) fn single_run_pipeline_data(self) -> Result<PipelineData, ShellError> {
match self {
DefaultValue::Uncalculated(mut closure) => {
closure.item.run_with_input(PipelineData::Empty)
}
DefaultValue::Calculated(val) => Ok(val.into_pipeline_data()),
}
} }
} }
@ -323,24 +329,25 @@ fn closure_variable_warning(
(Value::Closure { .. }, true) => { (Value::Closure { .. }, true) => {
let span_contents = String::from_utf8_lossy(engine_state.get_span_contents(span)); let span_contents = String::from_utf8_lossy(engine_state.get_span_contents(span));
let carapace_suggestion = "re-run carapace init with version v1.3.3 or later\nor, change this to `{ $carapace_completer }`"; let carapace_suggestion = "re-run carapace init with version v1.3.3 or later\nor, change this to `{ $carapace_completer }`";
let suggestion = match span_contents { let label = match span_contents {
Cow::Borrowed("$carapace_completer") => carapace_suggestion.to_string(), Cow::Borrowed("$carapace_completer") => carapace_suggestion.to_string(),
Cow::Owned(s) if s.deref() == "$carapace_completer" => { Cow::Owned(s) if s.deref() == "$carapace_completer" => {
carapace_suggestion.to_string() carapace_suggestion.to_string()
} }
_ => format!("change this to {{ {} }}", span_contents).to_string(), _ => format!("change this to {{ {span_contents} }}").to_string(),
}; };
report_shell_warning( report_shell_warning(
engine_state, engine_state,
&ShellError::DeprecationWarning { &ShellWarning::Deprecated {
deprecation_type: "Behavior", dep_type: "Behavior".to_string(),
suggestion, label,
span, span,
help: Some( help: Some(
r"Since 0.105.0, closure literals passed to default are lazily evaluated, rather than returned as a value. r"Since 0.105.0, closure literals passed to default are lazily evaluated, rather than returned as a value.
In a future release, closures passed by variable will also be lazily evaluated.", In a future release, closures passed by variable will also be lazily evaluated.".to_string(),
), ),
report_mode: ReportMode::FirstUse,
}, },
); );

View File

@ -1,6 +1,6 @@
use itertools::Either;
use nu_engine::command_prelude::*; use nu_engine::command_prelude::*;
use nu_protocol::{PipelineIterator, Range}; use nu_protocol::{PipelineIterator, Range};
use std::collections::VecDeque;
use std::ops::Bound; use std::ops::Bound;
#[derive(Clone)] #[derive(Clone)]
@ -18,13 +18,11 @@ impl Command for DropNth {
(Type::list(Type::Any), Type::list(Type::Any)), (Type::list(Type::Any), Type::list(Type::Any)),
]) ])
.allow_variants_without_examples(true) .allow_variants_without_examples(true)
.required( .rest(
"row number or row range", "rest",
// FIXME: we can make this accept either Int or Range when we can compose SyntaxShapes
SyntaxShape::Any, SyntaxShape::Any,
"The number of the row to drop or a range to drop consecutive rows.", "The row numbers or ranges to drop.",
) )
.rest("rest", SyntaxShape::Any, "The number of the row to drop.")
.category(Category::Filters) .category(Category::Filters)
} }
@ -103,110 +101,125 @@ impl Command for DropNth {
) -> Result<PipelineData, ShellError> { ) -> Result<PipelineData, ShellError> {
let head = call.head; let head = call.head;
let metadata = input.metadata(); let metadata = input.metadata();
let number_or_range = extract_int_or_range(engine_state, stack, call)?;
let rows = match number_or_range.item { let args: Vec<Value> = call.rest(engine_state, stack, 0)?;
Either::Left(row_number) => { if args.is_empty() {
let and_rows: Vec<Spanned<i64>> = call.rest(engine_state, stack, 1)?; return Ok(input);
let mut rows: Vec<_> = and_rows.into_iter().map(|x| x.item as usize).collect(); }
rows.push(row_number as usize);
rows.sort_unstable();
rows
}
Either::Right(Range::FloatRange(_)) => {
return Err(ShellError::UnsupportedInput {
msg: "float range".into(),
input: "value originates from here".into(),
msg_span: head,
input_span: number_or_range.span,
});
}
Either::Right(Range::IntRange(range)) => {
// check for negative range inputs, e.g., (2..-5)
let end_negative = match range.end() {
Bound::Included(end) | Bound::Excluded(end) => end < 0,
Bound::Unbounded => false,
};
if range.start().is_negative() || end_negative {
return Err(ShellError::UnsupportedInput {
msg: "drop nth accepts only positive ints".into(),
input: "value originates from here".into(),
msg_span: head,
input_span: number_or_range.span,
});
}
// check if the upper bound is smaller than the lower bound, e.g., do not accept 4..2
if range.step() < 0 {
return Err(ShellError::UnsupportedInput {
msg: "The upper bound needs to be equal or larger to the lower bound"
.into(),
input: "value originates from here".into(),
msg_span: head,
input_span: number_or_range.span,
});
}
let start = range.start() as usize; let (rows_to_drop, min_unbounded_start) = get_rows_to_drop(&args, head)?;
let end = match range.end() { let input = if let Some(cutoff) = min_unbounded_start {
Bound::Included(end) => end as usize, input
Bound::Excluded(end) => (end - 1) as usize, .into_iter()
Bound::Unbounded => { .take(cutoff)
return Ok(input .into_pipeline_data_with_metadata(
.into_iter() head,
.take(start) engine_state.signals().clone(),
.into_pipeline_data_with_metadata( metadata.clone(),
head, )
engine_state.signals().clone(), } else {
metadata, input
));
}
};
let end = if let PipelineData::Value(Value::List { vals, .. }, _) = &input {
end.min(vals.len() - 1)
} else {
end
};
(start..=end).collect()
}
}; };
Ok(DropNthIterator { Ok(DropNthIterator {
input: input.into_iter(), input: input.into_iter(),
rows, rows: rows_to_drop,
current: 0, current: 0,
} }
.into_pipeline_data_with_metadata(head, engine_state.signals().clone(), metadata)) .into_pipeline_data_with_metadata(head, engine_state.signals().clone(), metadata))
} }
} }
fn extract_int_or_range( fn get_rows_to_drop(
engine_state: &EngineState, args: &[Value],
stack: &mut Stack, head: Span,
call: &Call, ) -> Result<(VecDeque<usize>, Option<usize>), ShellError> {
) -> Result<Spanned<Either<i64, Range>>, ShellError> { let mut rows_to_drop = Vec::new();
let value: Value = call.req(engine_state, stack, 0)?; let mut min_unbounded_start: Option<usize> = None;
let int_opt = value.as_int().map(Either::Left).ok(); for value in args {
let range_opt = value.as_range().map(Either::Right).ok(); if let Ok(i) = value.as_int() {
if i < 0 {
return Err(ShellError::UnsupportedInput {
msg: "drop nth accepts only positive ints".into(),
input: "value originates from here".into(),
msg_span: head,
input_span: value.span(),
});
}
rows_to_drop.push(i as usize);
} else if let Ok(range) = value.as_range() {
match range {
Range::IntRange(range) => {
let start = range.start();
if start < 0 {
return Err(ShellError::UnsupportedInput {
msg: "drop nth accepts only positive ints".into(),
input: "value originates from here".into(),
msg_span: head,
input_span: value.span(),
});
}
int_opt match range.end() {
.or(range_opt) Bound::Included(end) => {
.ok_or_else(|| ShellError::TypeMismatch { if end < start {
err_message: "int or range".into(), return Err(ShellError::UnsupportedInput {
span: value.span(), msg: "The upper bound must be greater than or equal to the lower bound".into(),
}) input: "value originates from here".into(),
.map(|either| Spanned { msg_span: head,
item: either, input_span: value.span(),
span: value.span(), });
}) }
rows_to_drop.extend((start as usize)..=(end as usize));
}
Bound::Excluded(end) => {
if end <= start {
return Err(ShellError::UnsupportedInput {
msg: "The upper bound must be greater than the lower bound"
.into(),
input: "value originates from here".into(),
msg_span: head,
input_span: value.span(),
});
}
rows_to_drop.extend((start as usize)..(end as usize));
}
Bound::Unbounded => {
let start_usize = start as usize;
min_unbounded_start = Some(
min_unbounded_start.map_or(start_usize, |s| s.min(start_usize)),
);
}
}
}
Range::FloatRange(_) => {
return Err(ShellError::UnsupportedInput {
msg: "float range not supported".into(),
input: "value originates from here".into(),
msg_span: head,
input_span: value.span(),
});
}
}
} else {
return Err(ShellError::TypeMismatch {
err_message: "Expected int or range".into(),
span: value.span(),
});
}
}
rows_to_drop.sort_unstable();
rows_to_drop.dedup();
Ok((VecDeque::from(rows_to_drop), min_unbounded_start))
} }
struct DropNthIterator { struct DropNthIterator {
input: PipelineIterator, input: PipelineIterator,
rows: Vec<usize>, rows: VecDeque<usize>,
current: usize, current: usize,
} }
@ -215,9 +228,9 @@ impl Iterator for DropNthIterator {
fn next(&mut self) -> Option<Self::Item> { fn next(&mut self) -> Option<Self::Item> {
loop { loop {
if let Some(row) = self.rows.first() { if let Some(row) = self.rows.front() {
if self.current == *row { if self.current == *row {
self.rows.remove(0); self.rows.pop_front();
self.current += 1; self.current += 1;
let _ = self.input.next(); let _ = self.input.next();
continue; continue;

View File
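To make the control flow of the rewritten `drop nth` concrete: the indices collected by `get_rows_to_drop` are kept sorted and deduplicated in a `VecDeque` and consumed front-to-back while walking the input, while an unbounded range (`n..`) is handled separately by truncating the stream at the smallest unbounded start. A simplified, self-contained sketch of that combination (not the command's actual iterator):

```rust
use std::collections::VecDeque;

// `rows` must be sorted and deduplicated, mirroring `get_rows_to_drop` above;
// `cutoff` plays the role of the smallest unbounded range start.
fn drop_rows<T>(
    items: impl IntoIterator<Item = T>,
    mut rows: VecDeque<usize>,
    cutoff: Option<usize>,
) -> Vec<T> {
    items
        .into_iter()
        .take(cutoff.unwrap_or(usize::MAX))
        .enumerate()
        .filter_map(|(i, item)| {
            if rows.front() == Some(&i) {
                rows.pop_front();
                None
            } else {
                Some(item)
            }
        })
        .collect()
}

fn main() {
    // Drop rows 1 and 3, and everything from index 6 onward.
    let kept = drop_rows(0..10, VecDeque::from(vec![1, 3]), Some(6));
    assert_eq!(kept, vec![0, 2, 4, 5]);
}
```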

@@ -47,7 +47,7 @@ impl Command for Find {
             .named(
                 "columns",
                 SyntaxShape::List(Box::new(SyntaxShape::String)),
-                "column names to be searched (with rest parameter, not regex yet)",
+                "column names to be searched",
                 Some('c'),
             )
             .switch(
@@ -163,7 +163,12 @@ impl Command for Find {
                 example: r#"[["Larry", "Moe"], ["Victor", "Marina"]] | find --regex "rr""#,
                 result: Some(Value::list(
                     vec![Value::list(
-                        vec![Value::test_string("Larry"), Value::test_string("Moe")],
+                        vec![
+                            Value::test_string(
+                                "\u{1b}[37mLa\u{1b}[0m\u{1b}[41;37mrr\u{1b}[0m\u{1b}[37my\u{1b}[0m",
+                            ),
+                            Value::test_string("Moe"),
+                        ],
                         Span::test_data(),
                     )],
                     Span::test_data(),
@@ -344,7 +349,10 @@ fn get_match_pattern_from_arguments(
 // map functions
 fn highlight_matches_in_string(pattern: &MatchPattern, val: String) -> String {
-    // strip haystack to remove existing ansi style
+    if !pattern.regex.is_match(&val).unwrap_or(false) {
+        return val;
+    }
     let stripped_val = nu_utils::strip_ansi_string_unlikely(val);
     let mut last_match_end = 0;
     let mut highlighted = String::new();
@@ -390,7 +398,7 @@ fn highlight_matches_in_string(pattern: &MatchPattern, val: String) -> String {
     highlighted
 }

-fn highlight_matches_in_record_or_value(
+fn highlight_matches_in_value(
     pattern: &MatchPattern,
     value: Value,
     columns_to_search: &[String],
@@ -412,16 +420,16 @@ fn highlight_matches_in_record_or_value(
                     continue;
                 }

-                if let Value::String { val: val_str, .. } = val {
-                    if pattern.regex.is_match(val_str).unwrap_or(false) {
-                        let val_str = std::mem::take(val_str);
-                        *val = highlight_matches_in_string(pattern, val_str).into_value(span)
-                    }
-                }
+                *val = highlight_matches_in_value(pattern, std::mem::take(val), &[]);
             }

             Value::record(record, span)
         }
+        Value::List { vals, .. } => vals
+            .into_iter()
+            .map(|item| highlight_matches_in_value(pattern, item, &[]))
+            .collect::<Vec<Value>>()
+            .into_value(span),
         Value::String { val, .. } => highlight_matches_in_string(pattern, val).into_value(span),
         _ => value,
     }
@@ -444,24 +452,22 @@ fn find_in_pipelinedata(
         PipelineData::Value(_, _) => input
             .filter(
                 move |value| {
-                    record_or_value_should_be_printed(&pattern, value, &columns_to_search, &config)
+                    value_should_be_printed(&pattern, value, &columns_to_search, &config)
+                        != pattern.invert
                 },
                 engine_state.signals(),
             )?
             .map(
-                move |x| {
-                    highlight_matches_in_record_or_value(&map_pattern, x, &map_columns_to_search)
-                },
+                move |x| highlight_matches_in_value(&map_pattern, x, &map_columns_to_search),
                 engine_state.signals(),
             ),
         PipelineData::ListStream(stream, metadata) => {
             let stream = stream.modify(|iter| {
                 iter.filter(move |value| {
-                    record_or_value_should_be_printed(&pattern, value, &columns_to_search, &config)
-                })
-                .map(move |x| {
-                    highlight_matches_in_record_or_value(&map_pattern, x, &map_columns_to_search)
-                })
+                    value_should_be_printed(&pattern, value, &columns_to_search, &config)
+                        != pattern.invert
+                })
+                .map(move |x| highlight_matches_in_value(&map_pattern, x, &map_columns_to_search))
             });

             Ok(PipelineData::ListStream(stream, metadata))
@@ -495,7 +501,12 @@ fn string_should_be_printed(pattern: &MatchPattern, value: &str) -> bool {
     pattern.regex.is_match(value).unwrap_or(false)
 }

-fn value_should_be_printed(pattern: &MatchPattern, value: &Value, config: &Config) -> bool {
+fn value_should_be_printed(
+    pattern: &MatchPattern,
+    value: &Value,
+    columns_to_search: &[String],
+    config: &Config,
+) -> bool {
     let lower_value = value.to_expanded_string("", config).to_lowercase();

     match value {
@@ -507,8 +518,7 @@ fn value_should_be_printed(pattern: &MatchPattern, value: &Value, config: &Confi
         | Value::Range { .. }
         | Value::Float { .. }
         | Value::Closure { .. }
-        | Value::Nothing { .. }
-        | Value::Error { .. } => {
+        | Value::Nothing { .. } => {
             if !pattern.lower_terms.is_empty() {
                 // look for exact match when searching with terms
                 pattern
@@ -519,37 +529,25 @@ fn value_should_be_printed(pattern: &MatchPattern, value: &Value, config: &Confi
                 string_should_be_printed(pattern, &lower_value)
             }
         }
-        Value::Glob { .. }
-        | Value::List { .. }
-        | Value::CellPath { .. }
-        | Value::Record { .. }
-        | Value::Custom { .. } => string_should_be_printed(pattern, &lower_value),
+        Value::Glob { .. } | Value::CellPath { .. } | Value::Custom { .. } => {
+            string_should_be_printed(pattern, &lower_value)
+        }
         Value::String { val, .. } => string_should_be_printed(pattern, val),
-        Value::Binary { .. } => false,
-    }
-}
-
-fn record_or_value_should_be_printed(
-    pattern: &MatchPattern,
-    value: &Value,
-    columns_to_search: &[String],
-    config: &Config,
-) -> bool {
-    let match_found = match value {
+        Value::List { vals, .. } => vals
+            .iter()
+            .any(|item| value_should_be_printed(pattern, item, &[], config)),
         Value::Record { val: record, .. } => {
-            // Only perform column selection if given columns.
             let col_select = !columns_to_search.is_empty();
             record.iter().any(|(col, val)| {
                 if col_select && !columns_to_search.contains(col) {
                     return false;
                 }
-                value_should_be_printed(pattern, val, config)
+                value_should_be_printed(pattern, val, &[], config)
             })
         }
-        _ => value_should_be_printed(pattern, value, config),
-    };
-
-    match_found != pattern.invert
+        Value::Binary { .. } => false,
+        Value::Error { .. } => true,
+    }
 }

 // utility
@@ -574,6 +572,46 @@ fn split_string_if_multiline(input: PipelineData, head_span: Span) -> PipelineDa
     }
 }

+/// function for using find from other commands
+pub fn find_internal(
+    input: PipelineData,
+    engine_state: &EngineState,
+    stack: &mut Stack,
+    search_term: &str,
+    columns_to_search: &[&str],
+    highlight: bool,
+) -> Result<PipelineData, ShellError> {
+    let span = input.span().unwrap_or(Span::unknown());
+
+    let style_computer = StyleComputer::from_config(engine_state, stack);
+    let string_style = style_computer.compute("string", &Value::string("search result", span));
+    let highlight_style =
+        style_computer.compute("search_result", &Value::string("search result", span));
+
+    let regex_str = format!("(?i){}", escape(search_term));
+
+    let regex = Regex::new(regex_str.as_str()).map_err(|e| ShellError::TypeMismatch {
+        err_message: format!("invalid regex: {e}"),
+        span: Span::unknown(),
+    })?;
+
+    let pattern = MatchPattern {
+        regex,
+        lower_terms: vec![search_term.to_lowercase()],
+        highlight,
+        invert: false,
+        string_style,
+        highlight_style,
+    };
+
+    let columns_to_search = columns_to_search
+        .iter()
+        .map(|str| String::from(*str))
+        .collect();
+
+    find_in_pipelinedata(pattern, columns_to_search, engine_state, stack, input)
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
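The new `find_internal` entry point lets other built-in commands reuse `find`'s filtering and highlighting without going through the CLI signature. Below is a minimal sketch of a hypothetical caller, assuming the function is reachable as `nu_command::find_internal` (it is re-exported from the filters module in a later hunk of this diff) and that the usual `nu-engine` command prelude types are in scope; the helper name and column names are purely illustrative:

```rust
// Hypothetical helper showing how another command might call the new API.
use nu_command::find_internal; // assumed re-export path
use nu_engine::command_prelude::*;

fn search_command_output(
    input: PipelineData,
    engine_state: &EngineState,
    stack: &mut Stack,
    term: &str,
) -> Result<PipelineData, ShellError> {
    // Case-insensitive search restricted to two (illustrative) columns,
    // with match highlighting enabled.
    find_internal(
        input,
        engine_state,
        stack,
        term,
        &["name", "description"],
        true,
    )
}
```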


@@ -40,9 +40,14 @@ If multiple cell paths are given, this will produce a list of values."#
                 "The cell path to the data.",
             )
             .rest("rest", SyntaxShape::CellPath, "Additional cell paths.")
+            .switch(
+                "optional",
+                "make all cell path members optional (returns `null` for missing values)",
+                Some('o'),
+            )
             .switch(
                 "ignore-errors",
-                "ignore missing data (make all cell path members optional)",
+                "ignore missing data (make all cell path members optional) (deprecated)",
                 Some('i'),
             )
             .switch(
@@ -109,13 +114,14 @@ If multiple cell paths are given, this will produce a list of values."#
     ) -> Result<PipelineData, ShellError> {
         let cell_path: CellPath = call.req_const(working_set, 0)?;
         let rest: Vec<CellPath> = call.rest_const(working_set, 1)?;
-        let ignore_errors = call.has_flag_const(working_set, "ignore-errors")?;
+        let optional = call.has_flag_const(working_set, "optional")?
+            || call.has_flag_const(working_set, "ignore-errors")?;
         let metadata = input.metadata();

         action(
             input,
             cell_path,
             rest,
-            ignore_errors,
+            optional,
             working_set.permanent().signals().clone(),
             call.head,
         )
@@ -131,13 +137,14 @@ If multiple cell paths are given, this will produce a list of values."#
     ) -> Result<PipelineData, ShellError> {
         let cell_path: CellPath = call.req(engine_state, stack, 0)?;
         let rest: Vec<CellPath> = call.rest(engine_state, stack, 1)?;
-        let ignore_errors = call.has_flag(engine_state, stack, "ignore-errors")?;
+        let optional = call.has_flag(engine_state, stack, "optional")?
+            || call.has_flag(engine_state, stack, "ignore-errors")?;
         let metadata = input.metadata();

         action(
             input,
             cell_path,
             rest,
-            ignore_errors,
+            optional,
             engine_state.signals().clone(),
             call.head,
         )
@@ -152,6 +159,13 @@ If multiple cell paths are given, this will produce a list of values."#
                 since: Some("0.105.0".into()),
                 expected_removal: None,
                 help: Some("Cell-paths are now case-sensitive by default.\nTo access fields case-insensitively, add `!` after the relevant path member.".into())
+            },
+            DeprecationEntry {
+                ty: DeprecationType::Flag("ignore-errors".into()),
+                report_mode: ReportMode::FirstUse,
+                since: Some("0.106.0".into()),
+                expected_removal: None,
+                help: Some("This flag has been renamed to `--optional (-o)` to better reflect its behavior.".into())
             }
         ]
     }
@@ -161,29 +175,19 @@ fn action(
     input: PipelineData,
     mut cell_path: CellPath,
     mut rest: Vec<CellPath>,
-    ignore_errors: bool,
+    optional: bool,
     signals: Signals,
     span: Span,
 ) -> Result<PipelineData, ShellError> {
-    if ignore_errors {
+    if optional {
         cell_path.make_optional();
         for path in &mut rest {
             path.make_optional();
         }
     }

-    match input {
-        PipelineData::Empty => return Err(ShellError::PipelineEmpty { dst_span: span }),
-        // Allow chaining of get -i
-        PipelineData::Value(val @ Value::Nothing { .. }, ..) if !ignore_errors => {
-            return Err(ShellError::OnlySupportsThisInputType {
-                exp_input_type: "table or record".into(),
-                wrong_type: "nothing".into(),
-                dst_span: span,
-                src_span: val.span(),
-            });
-        }
-        _ => (),
+    if let PipelineData::Empty = input {
+        return Err(ShellError::PipelineEmpty { dst_span: span });
     }

     if rest.is_empty() {


@@ -231,7 +231,12 @@ pub fn group_by(
     let values: Vec<Value> = input.into_iter().collect();

     if values.is_empty() {
-        return Ok(Value::record(Record::new(), head).into_pipeline_data());
+        let val = if to_table {
+            Value::list(Vec::new(), head)
+        } else {
+            Value::record(Record::new(), head)
+        };
+        return Ok(val.into_pipeline_data());
     }

     let grouped = match &groupers[..] {


@@ -379,10 +379,7 @@ fn merge_records(left: &Record, right: &Record, shared_key: Option<&str>) -> Rec
         let k_shared = shared_key == Some(k.as_str());
         // Do not output shared join key twice
         if !(k_seen && k_shared) {
-            record.push(
-                if k_seen { format!("{}_", k) } else { k.clone() },
-                v.clone(),
-            );
+            record.push(if k_seen { format!("{k}_") } else { k.clone() }, v.clone());
         }
     }
     record


@@ -100,7 +100,7 @@ impl Command for Last {
                 let mut buf = VecDeque::new();

                 for row in iterator {
-                    engine_state.signals().check(head)?;
+                    engine_state.signals().check(&head)?;
                     if buf.len() == rows {
                         buf.pop_front();
                     }


@@ -70,7 +70,7 @@ pub use empty::empty;
 pub use enumerate::Enumerate;
 pub use every::Every;
 pub use filter::Filter;
-pub use find::Find;
+pub use find::{Find, find_internal};
 pub use first::First;
 pub use flatten::Flatten;
 pub use get::Get;


@@ -119,7 +119,7 @@ impl Command for Reduce {
         let mut closure = ClosureEval::new(engine_state, stack, closure);

         for value in iter {
-            engine_state.signals().check(head)?;
+            engine_state.signals().check(&head)?;
             acc = closure
                 .add_arg(value)
                 .add_arg(acc.clone())


@@ -1,5 +1,5 @@
 use nu_engine::command_prelude::*;
-use nu_protocol::{ast::PathMember, casing::Casing};
+use nu_protocol::{DeprecationEntry, DeprecationType, ReportMode, ast::PathMember, casing::Casing};
 use std::{cmp::Reverse, collections::HashSet};

 #[derive(Clone)]
@@ -17,9 +17,10 @@ impl Command for Reject {
                 (Type::table(), Type::table()),
                 (Type::list(Type::Any), Type::list(Type::Any)),
             ])
+            .switch("optional", "make all cell path members optional", Some('o'))
             .switch(
                 "ignore-errors",
-                "ignore missing data (make all cell path members optional)",
+                "ignore missing data (make all cell path members optional) (deprecated)",
                 Some('i'),
             )
             .rest(
@@ -90,8 +91,9 @@ impl Command for Reject {
         }
         let span = call.head;

-        let ignore_errors = call.has_flag(engine_state, stack, "ignore-errors")?;
-        if ignore_errors {
+        let optional = call.has_flag(engine_state, stack, "optional")?
+            || call.has_flag(engine_state, stack, "ignore-errors")?;
+        if optional {
             for cell_path in &mut new_columns {
                 cell_path.make_optional();
             }
@@ -100,6 +102,19 @@ impl Command for Reject {
         reject(engine_state, span, input, new_columns)
     }

+    fn deprecation_info(&self) -> Vec<DeprecationEntry> {
+        vec![DeprecationEntry {
+            ty: DeprecationType::Flag("ignore-errors".into()),
+            report_mode: ReportMode::FirstUse,
+            since: Some("0.106.0".into()),
+            expected_removal: None,
+            help: Some(
+                "This flag has been renamed to `--optional (-o)` to better reflect its behavior."
+                    .into(),
+            ),
+        }]
+    }
+
     fn examples(&self) -> Vec<Example> {
         vec![
             Example {


@@ -1,5 +1,8 @@
 use nu_engine::command_prelude::*;
-use nu_protocol::{PipelineIterator, ast::PathMember, casing::Casing};
+use nu_protocol::{
+    DeprecationEntry, DeprecationType, PipelineIterator, ReportMode, ast::PathMember,
+    casing::Casing,
+};
 use std::collections::BTreeSet;

 #[derive(Clone)]
@@ -18,9 +21,14 @@ impl Command for Select {
                 (Type::table(), Type::table()),
                 (Type::List(Box::new(Type::Any)), Type::Any),
             ])
+            .switch(
+                "optional",
+                "make all cell path members optional (returns `null` for missing values)",
+                Some('o'),
+            )
             .switch(
                 "ignore-errors",
-                "ignore missing data (make all cell path members optional)",
+                "ignore missing data (make all cell path members optional) (deprecated)",
                 Some('i'),
             )
             .rest(
@@ -100,10 +108,11 @@ produce a table, a list will produce a list, and a record will produce a record.
                 }
             }
         }
-        let ignore_errors = call.has_flag(engine_state, stack, "ignore-errors")?;
+        let optional = call.has_flag(engine_state, stack, "optional")?
+            || call.has_flag(engine_state, stack, "ignore-errors")?;
         let span = call.head;

-        if ignore_errors {
+        if optional {
             for cell_path in &mut new_columns {
                 cell_path.make_optional();
             }
@@ -112,6 +121,19 @@ produce a table, a list will produce a list, and a record will produce a record.
         select(engine_state, span, new_columns, input)
     }

+    fn deprecation_info(&self) -> Vec<DeprecationEntry> {
+        vec![DeprecationEntry {
+            ty: DeprecationType::Flag("ignore-errors".into()),
+            report_mode: ReportMode::FirstUse,
+            since: Some("0.106.0".into()),
+            expected_removal: None,
+            help: Some(
+                "This flag has been renamed to `--optional (-o)` to better reflect its behavior."
+                    .into(),
+            ),
+        }]
+    }
+
     fn examples(&self) -> Vec<Example> {
         vec![
             Example {


@@ -31,7 +31,7 @@ pub fn boolean_fold(
     let mut closure = ClosureEval::new(engine_state, stack, closure);

     for value in input {
-        engine_state.signals().check(head)?;
+        engine_state.signals().check(&head)?;
         let pred = closure.run_with_value(value)?.into_value(head)?.is_true();

         if pred == accumulator {


@@ -43,10 +43,7 @@ Row conditions cannot be stored in a variable. To pass a condition with a variab
             ])
             .required(
                 "condition",
-                SyntaxShape::OneOf(vec![
-                    SyntaxShape::RowCondition,
-                    SyntaxShape::Closure(Some(vec![SyntaxShape::Any])),
-                ]),
+                SyntaxShape::RowCondition,
                 "Filter row condition or closure.",
             )
             .allow_variants_without_examples(true)


@@ -203,6 +203,7 @@ mod test {
     use super::*;
+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     #[test]
@@ -221,6 +222,7 @@ mod test {
             working_set.add_decl(Box::new(FromCsv {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -229,7 +231,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#""a,b\n1,2" | metadata set --content-type 'text/csv' --datasource-ls | from csv | metadata | $in"#;
+        let cmd = r#""a,b\n1,2" | metadata set --content-type 'text/csv' --datasource-ls | from csv | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -248,6 +248,7 @@ fn convert_string_to_value_strict(string_input: &str, span: Span) -> Result<Valu
 mod test {
     use nu_cmd_lang::eval_pipeline_without_terminal_expression;

+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     use super::*;
@@ -268,6 +269,7 @@ mod test {
             working_set.add_decl(Box::new(FromJson {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -276,7 +278,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#"'{"a":1,"b":2}' | metadata set --content-type 'application/json' --datasource-ls | from json | metadata | $in"#;
+        let cmd = r#"'{"a":1,"b":2}' | metadata set --content-type 'application/json' --datasource-ls | from json | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -212,7 +212,7 @@ impl From<ReadError> for ShellError {
         },
         ReadError::TypeMismatch(marker, span) => ShellError::GenericError {
             error: "Invalid marker while reading MessagePack data".into(),
-            msg: format!("unexpected {:?} in data", marker),
+            msg: format!("unexpected {marker:?} in data"),
             span: Some(span),
             help: None,
             inner: vec![],
@@ -514,6 +514,7 @@ fn assert_eof(input: &mut impl io::Read, span: Span) -> Result<(), ShellError> {
 mod test {
     use nu_cmd_lang::eval_pipeline_without_terminal_expression;

+    use crate::Reject;
     use crate::{Metadata, MetadataSet, ToMsgpack};

     use super::*;
@@ -535,6 +536,7 @@ mod test {
             working_set.add_decl(Box::new(FromMsgpack {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -543,7 +545,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#"{a: 1 b: 2} | to msgpack | metadata set --datasource-ls | from msgpack | metadata | $in"#;
+        let cmd = r#"{a: 1 b: 2} | to msgpack | metadata set --datasource-ls | from msgpack | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -74,6 +74,7 @@ impl Command for FromNuon {
 mod test {
     use nu_cmd_lang::eval_pipeline_without_terminal_expression;

+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     use super::*;
@@ -94,6 +95,7 @@ mod test {
             working_set.add_decl(Box::new(FromNuon {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -102,7 +104,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#"'[[a, b]; [1, 2]]' | metadata set --content-type 'application/x-nuon' --datasource-ls | from nuon | metadata | $in"#;
+        let cmd = r#"'[[a, b]; [1, 2]]' | metadata set --content-type 'application/x-nuon' --datasource-ls | from nuon | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -167,7 +167,7 @@ fn parse_aligned_columns<'a>(
     let headers: Vec<(String, usize)> = indices
         .iter()
         .enumerate()
-        .map(|(i, position)| (format!("column{}", i), *position))
+        .map(|(i, position)| (format!("column{i}"), *position))
         .collect();

     construct(ls.iter().map(|s| s.to_owned()), headers)


@@ -145,6 +145,7 @@ pub fn convert_string_to_value(string_input: String, span: Span) -> Result<Value
 #[cfg(test)]
 mod tests {
+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     use super::*;
@@ -345,6 +346,7 @@ mod tests {
             working_set.add_decl(Box::new(FromToml {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -353,7 +355,7 @@ mod tests {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#""[a]\nb = 1\nc = 1" | metadata set --content-type 'text/x-toml' --datasource-ls | from toml | metadata | $in"#;
+        let cmd = r#""[a]\nb = 1\nc = 1" | metadata set --content-type 'text/x-toml' --datasource-ls | from toml | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -160,6 +160,7 @@ fn from_tsv(
 mod test {
     use nu_cmd_lang::eval_pipeline_without_terminal_expression;

+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     use super::*;
@@ -180,6 +181,7 @@ mod test {
             working_set.add_decl(Box::new(FromTsv {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -188,7 +190,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#""a\tb\n1\t2" | metadata set --content-type 'text/tab-separated-values' --datasource-ls | from tsv | metadata | $in"#;
+        let cmd = r#""a\tb\n1\t2" | metadata set --content-type 'text/tab-separated-values' --datasource-ls | from tsv | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -252,7 +252,7 @@ fn process_xml_parse_error(source: String, err: roxmltree::Error, span: Span) ->
             pos,
         ),
         roxmltree::Error::UnknownNamespace(prefix, pos) => {
-            make_xml_error_spanned(format!("Unknown prefix {}", prefix), source, pos)
+            make_xml_error_spanned(format!("Unknown prefix {prefix}"), source, pos)
         }
         roxmltree::Error::UnexpectedCloseTag(expected, actual, pos) => make_xml_error_spanned(
             format!("Unexpected close tag {actual}, expected {expected}"),
@@ -370,6 +370,7 @@ fn make_xml_error_spanned(msg: impl Into<String>, src: String, pos: TextPos) ->
 mod tests {
     use crate::Metadata;
     use crate::MetadataSet;
+    use crate::Reject;

     use super::*;
@@ -541,6 +542,7 @@ mod tests {
             working_set.add_decl(Box::new(FromXml {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -552,7 +554,7 @@ mod tests {
         let cmd = r#"'<?xml version="1.0" encoding="UTF-8"?>
 <note>
   <remember>Event</remember>
-</note>' | metadata set --content-type 'application/xml' --datasource-ls | from xml | metadata | $in"#;
+</note>' | metadata set --content-type 'application/xml' --datasource-ls | from xml | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -158,27 +158,26 @@ fn convert_yaml_value_to_nu_value(
         }
         serde_yaml::Value::Tagged(t) => {
             let tag = &t.tag;
-            let value = match &t.value {
+            match &t.value {
                 serde_yaml::Value::String(s) => {
-                    let val = format!("{} {}", tag, s).trim().to_string();
+                    let val = format!("{tag} {s}").trim().to_string();
                     Value::string(val, span)
                 }
                 serde_yaml::Value::Number(n) => {
-                    let val = format!("{} {}", tag, n).trim().to_string();
+                    let val = format!("{tag} {n}").trim().to_string();
                     Value::string(val, span)
                 }
                 serde_yaml::Value::Bool(b) => {
-                    let val = format!("{} {}", tag, b).trim().to_string();
+                    let val = format!("{tag} {b}").trim().to_string();
                     Value::string(val, span)
                 }
                 serde_yaml::Value::Null => {
-                    let val = format!("{}", tag).trim().to_string();
+                    let val = format!("{tag}").trim().to_string();
                     Value::string(val, span)
                 }
                 v => convert_yaml_value_to_nu_value(v, span, val_span)?,
-            };
-            value
+            }
         }
         serde_yaml::Value::Null => Value::nothing(span),
         x => unimplemented!("Unsupported YAML case: {:?}", x),
@@ -244,6 +243,7 @@ fn from_yaml(input: PipelineData, head: Span) -> Result<PipelineData, ShellError
 #[cfg(test)]
 mod test {
+    use crate::Reject;
     use crate::{Metadata, MetadataSet};

     use super::*;
@@ -410,6 +410,7 @@ mod test {
             working_set.add_decl(Box::new(FromYaml {}));
             working_set.add_decl(Box::new(Metadata {}));
             working_set.add_decl(Box::new(MetadataSet {}));
+            working_set.add_decl(Box::new(Reject {}));

             working_set.render()
         };
@@ -418,7 +419,7 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = r#""a: 1\nb: 2" | metadata set --content-type 'application/yaml' --datasource-ls | from yaml | metadata | $in"#;
+        let cmd = r#""a: 1\nb: 2" | metadata set --content-type 'application/yaml' --datasource-ls | from yaml | metadata | reject span | $in"#;
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),


@@ -166,14 +166,14 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = "{a: 1 b: 2} | to csv | metadata | get content_type";
+        let cmd = "{a: 1 b: 2} | to csv | metadata | get content_type | $in";
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),
             &mut engine_state,
         );
         assert_eq!(
-            Value::test_record(record!("content_type" => Value::test_string("text/csv"))),
+            Value::test_string("text/csv"),
             result.expect("There should be a result")
         );
     }


@@ -50,7 +50,7 @@ fn make_unsupported_input_error(
 ) -> ShellError {
     ShellError::UnsupportedInput {
         msg: "expected table or record".to_string(),
-        input: format!("input type: {}", r#type),
+        input: format!("input type: {type}"),
         msg_span: head,
         input_span: span,
     }


@@ -229,14 +229,14 @@ mod test {
             .merge_delta(delta)
             .expect("Error merging delta");

-        let cmd = "{a: 1 b: 2} | to json | metadata | get content_type";
+        let cmd = "{a: 1 b: 2} | to json | metadata | get content_type | $in";
         let result = eval_pipeline_without_terminal_expression(
             cmd,
             std::env::temp_dir().as_ref(),
             &mut engine_state,
         );
         assert_eq!(
-            Value::test_record(record!("content_type" => Value::test_string("application/json"))),
+            Value::test_string("application/json"),
             result.expect("There should be a result")
         );
     }

Some files were not shown because too many files have changed in this diff.