Compare commits

..

261 Commits

Author SHA1 Message Date
fb26109049 Bump version for 0.101.0 release (#14631)
It's palindromic!
2024-12-22 15:10:19 +01:00
d99905b604 Make the --no-newline test use --no-config-file as well (#14654)
Just a quick change: the test I made for `--no-newline` was missing
`--no-config-file`, so it could produce a false negative if you have problems
with your config.
2024-12-22 13:55:03 +01:00
a8890d5cca Fix potential panic in ls (#14655)
# Description

Fixes a potential panic in `ls`.

# User-Facing Changes

Entries in the same directory are sorted first based on whether or not
they errored. Errors will be listed first, potentially stopping the
pipeline short.
2024-12-21 15:09:46 -08:00
5139054325 Pin reedline to 0.38.0 release (#14651) 2024-12-21 20:11:22 +01:00
039d0a685a Fix the document CI error for polars profile command (#14642)
# Description

Fix the docs repo CI build error here:
https://github.com/nushell/nushell.github.io/actions/runs/12425087184/job/34691291790#step:5:18

The doc generated by `make_docs.nu` for the `polars profile` command makes
the CI build fail due to an indentation error in the markdown front
matter. I used to fix it manually; in the long run, it's better
to fix it in the source code.
2024-12-20 13:47:02 +01:00
e0685315b4 tweaks to config flatten (#14639)
# Description

@maxim-uvarov found some bugs in the new `config flatten` command. This
PR should take care of what's been identified so far.

2024-12-19 17:26:21 -06:00
02fc844e40 Fix commands::network::http::*::*_timeout tests on non-english system (#14640)

# Description
I had issues with the following tests:
- `commands::network::http::delete::http_delete_timeout`
- `commands::network::http::get::http_get_timeout`
- `commands::network::http::options::http_options_timeout`
- `commands::network::http::patch::http_patch_timeout`
- `commands::network::http::post::http_post_timeout`
- `commands::network::http::put::http_put_timeout`

I checked what the actual issue was, and my problem was that the tested
string `"did not properly respond after a period of time"` wasn't in the
actual error. This happened because my German Windows would return a
German error message, which obviously did not include that string. To fix
that, I replaced the string check with the OS error code that is also
part of the error message, which should be language-agnostic. (I hope.)

# User-Facing Changes

None.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

\o/

2024-12-19 17:15:27 -06:00
b48f50f018 Remove unused sample_login.nu file (#14632)
This file is not made accessible to the user through any of our `config`
commands.
Thus I discussed it with Douglas and we decided to delete it, to ensure it
doesn't go out of date (the version added in #14601 was not yet part of the
bumping script).

All the necessary information on how to set up a `login.nu` file is
provided in the website documentation.
2024-12-19 20:21:52 +01:00
dc0ac8e917 Remove pub on some command internals (#14636)
Stumbled over an unnecessarily `pub` `fn action` and `struct Arguments` when
reworking `into bits` in #14634.

Stuff like this should be local until proven otherwise and then named
appropriately.
2024-12-19 19:42:18 +01:00
f2e8c391a2 lookup closures/blockids and get content in config flatten (#14635)
# Description

This PR continues to tweak `config flatten` by looking up the closures
and block_ids and extracting their content into the produced record.

Example

![image](https://github.com/user-attachments/assets/99a9db54-e477-40b2-8468-bbadcf0aa5b7)


2024-12-19 08:43:49 -06:00
7029d24f42 Add version info to startup banner (#14625)
# Description

Adds version info to the Startup Banner

# User-Facing Changes

## Before


![image](https://github.com/user-attachments/assets/de2a415f-1608-4d87-ab28-f3238cf532c3)

## After


![image](https://github.com/user-attachments/assets/db3f8419-0680-4a0b-9f09-8d9a273c4726)

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting

N/A
2024-12-19 08:38:29 -06:00
4e8289d7bb Set split-by doc category to "deprecated" (#14633)
# Description

#14019 deprecated the `split-by` command. This sets its doc-category to
"deprecated" so that it will display that way in the in-shell and online
help

# User-Facing Changes

`split-by` will now show as a deprecated command in Help. Will also be
reported using:

```nushell
help commands | where category == deprecated
```

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-12-19 08:34:06 -06:00
bf8763fc11 Add shape_garbage (#14626)
# Description

Adds `$env.config.color_config.shape_garbage` to the default config so
that it is populated out of the box.

Thanks to @PerchunPak for finding that it was missing.

# User-Facing Changes

I think this is useful on two levels, but it will be a change for a lot
of users:

1. Accessing it won't generate an error out-of-the-box
2. Garbage errors are highlighted in reverse-red in real-time in the
REPL. This means that, for example, typing just a `$` will start out as
an error. Once a valid variable (e.g., `$env`) is completed, the
highlight will change to the parsed shape.
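
For users who want to adjust it, the new key can be read or overridden like
any other `color_config` entry. A minimal sketch (the record value below is
just an illustration, not the shipped default):

```nushell
# inspect the value that now ships out of the box
$env.config.color_config.shape_garbage

# override it with a custom style (illustrative value)
$env.config.color_config.shape_garbage = { fg: "red" attr: "b" }
```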

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-12-18 19:43:26 -06:00
11375c19d2 better error handling for view source (#14624)
# Description

It is possible to pass a bogus block id to `view source`. This change
makes the command more resilient so it no longer panics when an invalid
block id is passed in.


![image](https://github.com/user-attachments/assets/67ebbffc-be57-4ce3-8700-90f1ed080f9b)
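
For example, passing an id that does not exist should now produce a regular
error instead of a panic. A quick sketch (the id below is arbitrary):

```nushell
# previously this could panic; now it reports an error for the bogus id
view source 999999
```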


2024-12-18 16:19:49 -06:00
8f4feeb119 add config flatten command (#14621)
# Description

This is supposed to be a Quality-of-Life command that just makes some
things easier when dealing with a nushell config. Really all it does is
show you the current config in a flattened state. That's it. I was
thinking this could be useful when comparing config settings between old
and new config files. There is still room for improvement. For
instance, closures are listed as an int. They can be updated with a
`view source <int>` pipeline but that could all be built in too.


![image](https://github.com/user-attachments/assets/5d8981a3-8d03-4eb3-8361-2f3c3c560660)

The command works by getting the current configuration, serializing it
to json, then flattening that json. BTW, there's a new flatten_json.rs
in nu-utils. Theoretically all this mess could be done in a custom
command script, but it has proven to be exceedingly difficult based on the
work on Discord.

Here's some more complex items to flatten.

![image](https://github.com/user-attachments/assets/b44e2ec8-cf17-41c4-bf8d-7f26317db071)
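
Since the output is a single flat record, it can be transposed and filtered to
compare specific settings. A minimal sketch, assuming the flattened keys are
dotted paths as in the screenshots:

```nushell
# list every flattened history-related setting
config flatten | transpose path value | where path =~ "history"
```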

2024-12-18 15:50:16 -06:00
e26364f885 Remove -a/-all flag in du. (#14618)
Just noticed that I forgot to remove the `-a/--all` flag from `du`'s signature
in #14407.

This PR removes it.
2024-12-18 10:45:54 -06:00
fff0c6e2cb update shadow-rs to 0.37 (#14617) 2024-12-18 23:09:50 +08:00
68c2729991 add view blocks command (#14610)
# Description

This PR is meant to add another nushell introspection/debug command,
`view blocks`. This command shows what is in the EngineState's memory
that is parsed and stored as blocks. Blocks may continue to grow as you
use the REPL.

![image](https://github.com/user-attachments/assets/8a19fd56-ef15-4993-9700-a51eb8eaec7f)
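
A quick way to watch the block count grow is to check it before and after
evaluating something. A usage sketch (the exact columns are as shown in the
screenshot above):

```nushell
view blocks | length
do { 1 + 1 }
view blocks | length   # typically larger than before
```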

2024-12-18 06:41:50 -06:00
8127b5dd24 Add merge deep command (#14525)
# Description
This PR adds the `merge deep` command. This allows you to merge nested
records and tables/lists within records together, instead of overwriting
them. The code for `merge` was reworked to support more general merging
of values, so `merge` and `merge deep` use the same underlying code.

`merge deep` mostly works like `merge`, except it recurses into inner
records which exist in both the input and argument rather than just
overwriting. For lists and by extension tables, `merge deep` has a
couple different strategies for merging inner lists, which can be
selected with the `--strategy` flag. These are:

- `table`: Merges tables element-wise, similarly to the merge command.
Non-table lists are not merged.
- `overwrite`: Lists and tables are overwritten with their corresponding
value from the argument, similarly to scalars.
- `append`: Lists and tables in the input are appended with the
corresponding list from the argument.
- `prepend`: Lists and tables in the input are prepended with the
corresponding list from the argument.

This can also be used with the new config changes to write a monolithic
record of _only_ the config values you want to change:
```nushell
# in config file:
const overrides = {
  history: {
    file_format: "sqlite",
    isolation: true
  }
}
# use append strategy for lists, e.g., menus keybindings
$env.config = $env.config | merge deep --strategy=append $overrides

# later, in REPL:
$env.config.history
# => ╭───────────────┬────────╮
# => │ max_size      │ 100000 │
# => │ sync_on_enter │ true   │
# => │ file_format   │ sqlite │
# => │ isolation     │ true   │
# => ╰───────────────┴────────╯
```

<details>
<summary>Performance details</summary>
For those interested, there was less than one standard deviation of
difference in startup time when setting each config item individually
versus using <code>merge deep</code>, so you can use <code>merge
deep</code> in your config at no measurable performance cost. Here's my
results:

My normal config (in 0.101 style, with each `$env.config.[...]` value
updated individually)
```nushell
bench --pretty { ./nu -l -c '' }
# => 45ms 976µs 983ns +/- 455µs 955ns
```

Equivalent config with a single `overrides` record and `merge deep -s
append`:
```nushell
bench --pretty { ./nu -l -c '' }
# => 45ms 587µs 428ns +/- 702µs 944ns
```

</details>

Huge thanks to @Bahex for designing the strategies API and helping
finish up this PR while I was sick ❤️

Related:  #12148

# User-Facing Changes

Adds the `merge deep` command to recursively merge records. For example:

```nushell
{a: {foo: 123 bar: "overwrite me"}, b: [1, 2, 3]} | merge deep {a: {bar: 456, baz: 789}, b: [4, 5, 6]}
# => ╭───┬───────────────╮
# => │   │ ╭─────┬─────╮ │
# => │ a │ │ foo │ 123 │ │
# => │   │ │ bar │ 456 │ │
# => │   │ │ baz │ 789 │ │
# => │   │ ╰─────┴─────╯ │
# => │   │ ╭───┬───╮     │
# => │ b │ │ 0 │ 4 │     │
# => │   │ │ 1 │ 5 │     │
# => │   │ │ 2 │ 6 │     │
# => │   │ ╰───┴───╯     │
# => ╰───┴───────────────╯
```

`merge deep` also has different strategies for merging inner lists and
tables. For example, you can use the `append` strategy to _merge_ the
inner `b` list instead of overwriting it.

```nushell
{a: {foo: 123 bar: "overwrite me"}, b: [1, 2, 3]} | merge deep --strategy=append {a: {bar: 456, baz: 789}, b: [4, 5, 6]}
# => ╭───┬───────────────╮
# => │   │ ╭─────┬─────╮ │
# => │ a │ │ foo │ 123 │ │
# => │   │ │ bar │ 456 │ │
# => │   │ │ baz │ 789 │ │
# => │   │ ╰─────┴─────╯ │
# => │   │ ╭───┬───╮     │
# => │ b │ │ 0 │ 1 │     │
# => │   │ │ 1 │ 2 │     │
# => │   │ │ 2 │ 3 │     │
# => │   │ │ 3 │ 4 │     │
# => │   │ │ 4 │ 5 │     │
# => │   │ │ 5 │ 6 │     │
# => │   │ ╰───┴───╯     │
# => ╰───┴───────────────╯
```

**Note to release notes writers**: Please credit @Bahex for this PR as
well 😄

# Tests + Formatting

Added tests for deep merge

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
N/A

---------

Co-authored-by: Bahex <bahey1999@gmail.com>
2024-12-18 06:36:04 -06:00
a9caa61ef9 Bump crate-ci/typos from 1.28.2 to 1.28.4 (#14614)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.28.2 to
1.28.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.28.4</h2>
<h2>[1.28.4] - 2024-12-16</h2>
<h3>Features</h3>
<ul>
<li><code>--format sarif</code> support</li>
</ul>
<h2>v1.28.3</h2>
<h2>[1.28.3] - 2024-12-12</h2>
<h3>Fixes</h3>
<ul>
<li>Correct <code>imlementations</code>, <code>includs</code>,
<code>qurorum</code>, <code>transatctions</code>,
<code>trasnactions</code>, <code>validasted</code>,
<code>vview</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.28.4] - 2024-12-16</h2>
<h3>Features</h3>
<ul>
<li><code>--format sarif</code> support</li>
</ul>
<h2>[1.28.3] - 2024-12-12</h2>
<h3>Fixes</h3>
<ul>
<li>Correct <code>imlementations</code>, <code>includs</code>,
<code>qurorum</code>, <code>transatctions</code>,
<code>trasnactions</code>, <code>validasted</code>,
<code>vview</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9d89015957"><code>9d89015</code></a>
chore: Release</li>
<li><a
href="6b24563a99"><code>6b24563</code></a>
chore: Release</li>
<li><a
href="bd0a2769ae"><code>bd0a276</code></a>
docs: Update changelog</li>
<li><a
href="370109dd4d"><code>370109d</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1047">#1047</a>
from Zxilly/sarif</li>
<li><a
href="63908449a7"><code>6390844</code></a>
feat: Implement sarif format reporter</li>
<li><a
href="32b96444b9"><code>32b9644</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1169">#1169</a>
from klensy/deps</li>
<li><a
href="720258f60b"><code>720258f</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1176">#1176</a>
from Ghaniyyat05/master</li>
<li><a
href="a42904ad6e"><code>a42904a</code></a>
Update README.md</li>
<li><a
href="d1c850b2b5"><code>d1c850b</code></a>
chore: Release</li>
<li><a
href="a491fd56c0"><code>a491fd5</code></a>
chore: Release</li>
<li>Additional commits viewable in <a
href="https://github.com/crate-ci/typos/compare/v1.28.2...v1.28.4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.28.2&new-version=1.28.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-18 10:42:28 +08:00
99fe866d12 Remove duplicate version line (#14611)
# Description

Fixed:

* `version = "0.100.1"` line got duplicated during merge conflict
resolution. Found while updating `bump_version.nu`.

# User-Facing Changes

N/A

# Tests + Formatting

TODO

# After Submitting

N/A
2024-12-17 16:08:20 -06:00
c0ad659985 Doc file fixes (#14608)
# Description

With great thanks to @fdncred and especially @PerchunPak (see #14601)
for finding and fixing a number of issues that I pulled in here due to
the filename changes and upcoming freeze.

This PR primarily fixes a poor wording choice in the new filenames and
`config` command options. The fact that these were called
`sample_config.nu` (etc.) and accessed via `config --sample` created a
great deal of confusion. These were never intended to be used as-is as
config files, but rather as in-shell documentation.

As such, I've renamed them:

* `sample_config.nu` becomes `doc_config.nu`
* `sample_env.nu` becomes `doc_env.nu`
* `config nu --sample` becomes `config nu --doc`
* `config env --sample` becomes `config env --doc`
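
With the new names, the in-shell documentation is viewed the same way as
before, just with the `--doc` flag. A usage sketch (piping through
`nu-highlight` and a pager is optional):

```nushell
config nu --doc | nu-highlight | less -R
```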

Also the following:

* Updates `doc_config.nu` with a few additional comment-fixes on top of
@PerchunPak's changes.
* Adds version numbers to all files - Will need to update the version
script to add some files after this PR.
* Additional documentation on plugin and plugin_gc configuration, which I had
previously failed to completely update from the older wording
* Updated the comments in the `scaffold_*.nu` files to point people to
`help config`/`help nu` so that, if things change in the future, it will
become more difficult for the comments to be outdated.

# User-Facing Changes

Mostly doc.

`config nu` and `config env` changes update new behavior previously
added in 0.100.1

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

* Update configuration chapter of doc
* Update the blog entry on migrating config
* Update `bump-version.nu`
2024-12-17 14:18:23 -06:00
f41c53fef1 allow view source to take int as a parameter (#14609)
# Description

This PR allows the `view source` command to view source based on an int
value. I wrote this specifically to be able to see closures where the
text is hidden. For example:

![image](https://github.com/user-attachments/assets/d8fe2692-0951-4366-9cb9-55f20044b68a)

And then you can use those `<Closure #>` with the `view source` command
like this.

![image](https://github.com/user-attachments/assets/f428c8ad-56a9-4e72-880e-e32fb9155531)
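
In text form, the flow looks roughly like this. A sketch (the closure id is
whatever number the table shows, not a fixed value):

```nushell
# a closure-valued config entry renders as an id rather than its source
$env.config.hooks.display_output
# pass that id to view source to see the hidden text
view source 123
```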


2024-12-17 13:15:16 -06:00
981a000ee8 Added flag --coalesce-columns to allow columns to be coalsced on full joins (#14578)
- fixes #14572

# Description
This allows columns to be coalesced on full joins with `polars join`,
providing functionality similar to the old `--outer` join behavior.

# User-Facing Changes
- Provides a new flag `--coalesce-columns` on the `polars join` command
2024-12-17 09:55:42 -08:00
cc4da104e0 Fix issues in the example configs (#14601)
For some reason, it had multiple syntax errors and other issues, like
undefined options. It would be great to add a test for sourcing all example
configs, but I don't know Rust.

See also
https://github.com/nushell/nushell/pull/14249#discussion_r1887192408


CC @NotTheDr01ds
2024-12-17 10:43:25 -06:00
c266e6adaf test(path self): Add tests (#14607)
# Description
Add tests for `path self`.

I wasn't very familiar with the code base, especially the testing
utilities, when I first implemented `path self`. It's been on my mind to
add tests for it since then.
2024-12-17 17:01:23 +01:00
d94b344342 Revert "For # to start a comment, then it either need to be the first chara…" (#14606)
Reverts nushell/nushell#14562 due to https://github.com/nushell/nushell/issues/14605
2024-12-17 06:26:56 -06:00
6367fb6e9e Add missing color_config settings (#14603)
# Description

Fixes #14600 by adding a default value for missing keys in
`default_config.nu`:

* `$env.config.color_config.glob`
* `$env.config.color_config.closure`

# User-Facing Changes

Will no longer error when accessing these keys.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-12-16 18:20:54 -06:00
5615d21ce9 remove content_type metadata from pipeline after from ... commands (#14602)
# Description

`from ...` conversions pass along all metadata except `content_type`,
which they set to `None`.

## Rationale

`open`ing a file results in no `content_type` metadata if it can be
parsed into a nu data structure, and using `open --raw` results in
`content_type` metadata.

`from ...` commands should preserve metadata ***except*** for
`content_type`, since after parsing the data no longer has that
`content_type` and is just structured nu data.

These commands should return identical data *and* identical metadata

```nushell
open foo.csv
```

```nushell
open foo.csv --raw | from csv
```
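
One way to verify this is to compare the pipeline metadata directly. A sketch
(assumes a local `foo.csv` and that `metadata` reports `content_type` only
when it is set):

```nushell
open foo.csv | metadata
open foo.csv --raw | from csv | metadata
# both should now report the same metadata, with no content_type
```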

# User-Facing Changes

N/A

# Tests + Formatting
- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
N/A
2024-12-16 15:59:18 -06:00
e2c4ff8180 Revert "Feature: PWD-per-drive to facilitate working on multiple drives at Windows" (#14598)
Reverts nushell/nushell#14411
2024-12-16 13:52:07 -06:00
39770d4197 Moves additional env vars out of default_env and updates some transient prompt vars (#14579)
# Description

With `NU_LIB_DIRS`, `NU_PLUGIN_DIRS`, and `ENV_CONVERSIONS` now moved
out of `default_env.nu`, we're down to just a few left. This moves all
non-closure `PROMPT` variables out as well (and into Rust `main()`). It
also:

* Implements #14565 and sets the default
`TRANSIENT_PROMPT_COMMAND_RIGHT` and `TRANSIENT_MULTILINE_INDICATOR` to
an empty string so that they are removed for easier copying from the
terminal.
* Reverses portions of #14249 where I was overzealous in some of the
variables that were imported
* Fixes #12096 
* Will be the final fix in place, I believe, to close #13670

# User-Facing Changes

Transient prompt will now remove the right-prompt and
multiline-indicator once a commandline has been entered.
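
Users who prefer the old behavior can set the variables back to something
non-empty themselves. A sketch (the closure below is just an illustration):

```nushell
# keep a right prompt on previously entered lines instead of clearing it
$env.TRANSIENT_PROMPT_COMMAND_RIGHT = {|| date now | format date '%H:%M:%S' }
```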

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting

Release notes addition
2024-12-16 08:18:47 -06:00
cfdb4bbf25 std/iter scan: change closure signature to be consistent with reduce (#14596)
# Description

I noticed that `std/iter scan`'s closure has the order of parameters
reversed compared to `reduce`, so I changed it to be consistent.

Also, it didn't have `$acc` as `$in` like `reduce`, so I fixed that as
well.

# User-Facing Changes

> [!WARNING]
> This is a breaking change for all operations where the order of `$it` and
`$acc` matters.

-   This is still fine.
    ```nushell
    [1 2 3] | iter scan 0 {|x, y| $x + $y}
    ```

-   This is broken
    ```nushell
    [a b c d] | iter scan "" {|x, y| [$x, $y] | str join} -n
    ```
    and should be changed to either one of these
    -   ```nushell
        [a b c d] | iter scan "" {|it, acc| [$acc, $it] | str join} -n
        ```
    -   ```nushell
        [a b c d] | iter scan "" {|it| append $it | str join} -n
        ```

# Tests + Formatting
Only change is in the std and its tests
- 🟢 toolkit test stdlib

# After Submitting
Mention in release notes
2024-12-16 06:13:51 -06:00
3760910f0b remove the deprecated index argument from filter commands' closure signature (#14594)
# Description

A lot of filter commands that have a closure argument (`each`, `filter`,
etc.) have a wrong signature for the closure, indicating an extra int
argument for the closure.

I think they are a leftover from before `enumerate` was added, which was
used to access the iteration index. None of the commands changed in this PR
actually supply this int argument.
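
The iteration index remains available through `enumerate`, which is the
supported way to get it. A short sketch:

```nushell
# index access now goes through `enumerate` rather than a second closure argument
[a b c] | enumerate | each {|e| $"($e.index): ($e.item)" }
```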

# User-Facing Changes
N/A

# Tests + Formatting
- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
N/A
2024-12-15 15:27:13 -06:00
3c632e96f9 docs(reduce): add example demonstrating accumulator as pipeline input (#14593)
# Description
Add an example to `reduce` that shows the accumulator can also be accessed
as pipeline input.
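
A sketch of the kind of example being added, where the accumulator is read
via `$in` instead of a named parameter:

```nushell
# $in holds the accumulator on each iteration, so this sums the list
[1 2 3 4] | reduce {|it| $in + $it }
# => 10
```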

# User-Facing Changes
N/A

# Tests + Formatting
- 🟢 toolkit fmt
- 🟢 toolkit clippy
- 🟢 toolkit test
- 🟢 toolkit test stdlib

# After Submitting
N/A
2024-12-15 15:26:14 -06:00
baf86dfb0e tweak polars join for better cross joins (#14586)
# Description

closes #14585

This PR tries to make `polars join --cross` work better. Example taken
from
https://docs.pola.rs/user-guide/transformations/joins/#cartesian-product

### Before
```nushell
❯ let tokens = [[monopoly_token]; [hat] [shoe] [boat]] | polars into-df
❯ let players = [[name, cash]; [Alice, 78] [Bob, 135]] | polars into-df
❯ $players | polars into-lazy | polars select (polars col name) | polars join --cross $tokens | polars collect
Error: nu::parser::missing_positional

  × Missing required positional argument.
   ╭─[entry #3:1:92]
 1 │ $players | polars into-lazy | polars select (polars col name) | polars join --cross $tokens
   ╰────
  help: Usage: polars join {flags} <other> <left_on> <right_on> . Use `--help` for more information.
```
### After
```nushell
❯ let players = [[name, cash]; [Alice, 78] [Bob, 135]] | polars into-df
❯ let tokens = [[monopoly_token]; [hat] [shoe] [boat]] | polars into-df
❯ $players | polars into-lazy | polars select (polars col name) | polars join --cross $tokens | polars collect
╭─#─┬─name──┬─monopoly_token─╮
│ 0 │ Alice │ hat            │
│ 1 │ Alice │ shoe           │
│ 2 │ Alice │ boat           │
│ 3 │ Bob   │ hat            │
│ 4 │ Bob   │ shoe           │
│ 5 │ Bob   │ boat           │
╰─#─┴─name──┴─monopoly_token─╯
```
Other examples
```nushell
❯ 1..3 | polars into-df | polars join --cross (4..6 | polars into-df)
╭─#─┬─0─┬─0_x─╮
│ 0 │ 1 │   4 │
│ 1 │ 1 │   5 │
│ 2 │ 1 │   6 │
│ 3 │ 2 │   4 │
│ 4 │ 2 │   5 │
│ 5 │ 2 │   6 │
│ 6 │ 3 │   4 │
│ 7 │ 3 │   5 │
│ 8 │ 3 │   6 │
╰─#─┴─0─┴─0_x─╯
❯ 1..3 | each {|x| {x: $x}} | polars into-df | polars join --cross (4..6 | each {|y| {y: $y}} | polars into-df) x y
╭─#─┬─x─┬─y─╮
│ 0 │ 1 │ 4 │
│ 1 │ 1 │ 5 │
│ 2 │ 1 │ 6 │
│ 3 │ 2 │ 4 │
│ 4 │ 2 │ 5 │
│ 5 │ 2 │ 6 │
│ 6 │ 3 │ 4 │
│ 7 │ 3 │ 5 │
│ 8 │ 3 │ 6 │
╰─#─┴─x─┴─y─╯
```
/cc @ayax79 
2024-12-14 21:58:47 -06:00
219b44a04f Improve handling of columns with null values (#14588)
Addresses some null handling issues in #6882

# Description

This changes the implementation of guessing a column type when a schema
is not specified.

New behavior:
1. Use the first non-Value::Nothing value type for the column's data type
2. If the value type changes (ignoring Value::Nothing) in subsequent
values, the datatype will be changed to DataType::Object("Value", None)
3. If a column does not have a value type, DataType::Object("Value", None)
will be assumed.
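
A sketch of how these rules play out, assuming `polars schema` is used to
inspect the result:

```nushell
# leading nulls are skipped, so the column is typed from the first non-null value
[[a]; [null] [1] [2]] | polars into-df | polars schema

# mixed value types (ignoring nulls) fall back to an object dtype
[[a]; [null] [1] ["x"]] | polars into-df | polars schema
```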
2024-12-14 18:36:01 -06:00
05ee7ea9c7 Revert "fix: make exec command decrement SHLVL correctly" (#14580)
Reverts nushell/nushell#14570
2024-12-13 18:34:33 -06:00
cc0616b753 return const values from scope variables (#14577)
Fixes #14542

# User-Facing Changes

Constant values are no longer missing from `scope variables` output
when the IR evaluator is enabled:

```diff
const foo = 1
scope variables | where name == "$foo" | get value.0 | to nuon
-null
+int
```
2024-12-13 16:23:17 -06:00
cbf5fa6684 For # to start a comment, then it either need to be the first chara… (#14562)
This PR should close
1. #10327 
1. #13667 
1. #13810 
1. #14129 

# Description
For `#` to start a comment, it either needs to be the first character of
the token or be prefixed with ` ` (space).
So now you can do this:
``` 
~/Projects/nushell> 1..10 | each {echo test#testing }                                                                                                                     12/05/2024 05:37:19 PM
╭───┬──────────────╮
│ 0 │ test#testing │
│ 1 │ test#testing │
│ 2 │ test#testing │
│ 3 │ test#testing │
│ 4 │ test#testing │
│ 5 │ test#testing │
│ 6 │ test#testing │
│ 7 │ test#testing │
│ 8 │ test#testing │
│ 9 │ test#testing │
╰───┴──────────────╯
```  

# User-Facing Changes
It is a breaking change if anyone expected comments to start in the
middle of a string without any prefixing ` ` (space).

# Tests + Formatting
Did all: 
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

# After Submitting
I can't see that I need to update anything in [the
documentation](https://github.com/nushell/nushell.github.io), but please
point me in the right direction if there is anything.
2024-12-13 07:02:07 -06:00
a7fa6d00c1 fix 64-bit hex number parsing (#14571)
# Description

Closes #14521 

This PR tweaks the way 64-bit hex numbers are parsed.

### Before
```nushell
❯ 0xffffffffffffffef
Error: nu:🐚:external_command

  × External command failed
   ╭─[entry #1:1:1]
 1 │ 0xffffffffffffffef
   · ─────────┬────────
   ·          ╰── Command `0xffffffffffffffef` not found
   ╰────
  help: `0xffffffffffffffef` is neither a Nushell built-in or a known external command
```

### After
```nushell
❯ 0xffffffffffffffef
-17
```

2024-12-13 07:00:53 -06:00
49f377688a support raw strings in match patterns (#14573)
Fixes #14554

# User-Facing Changes

Raw strings are now supported in match patterns:

```diff
match "foo" { r#'foo'# => true, _ => false }
-false
+true
```
2024-12-13 06:55:57 -06:00
0b96962157 run cargo update manually to update dependencies (#14569)
#14556 seems strange to me, because it downgrades the `windows-target`
version.

So in this PR I updated it by hand and also ran `cargo update`
manually to see how it goes.
2024-12-13 13:40:03 +08:00
ebce62629e fix: make exec command decrement SHLVL correctly (#14570)

# Description

fixes #14567

Now Nushell's `exec` command will decrement the `SHLVL` env value before
passing it to the target executable.

It only works in interactive sessions, the same as `SHLVL`
initialization.

In addition, this PR also makes a simple change to `SHLVL` initialization
(it only removes an unnecessary type conversion).
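
A rough way to see the effect from an interactive session. A sketch (the
numbers are illustrative and depend on how the session was started):

```nushell
$env.SHLVL                    # e.g. 2 in the current interactive session
exec bash -c 'echo $SHLVL'    # previously printed 3 here; now prints 2 again
```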

# User-Facing Changes

None.

# Tests + Formatting

Formatted.

Interactively tested with several shells (bash, zsh, fish), cross-exec-ing
them; it works well this time.

2024-12-12 11:19:03 -06:00
7aacad3270 Set empty ENV_CONVERSIONS record by default (#14566)
# Description

With Windows Path case-insensitivity in place, we no longer need an
`ENV_CONVERSIONS` for `PATH`, as the
`nu_engine::env::convert_env_values()` handles it automatically.

This PR:

* Removes the default `ENV_CONVERSIONS` for path from `default_env.nu`
* Sets `ENV_CONVERSIONS` to an empty record (so it can be `merge`'d) in
`main()` instead
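
Because the default is now an empty record, user-defined conversions can be
merged in without clobbering anything. A sketch (the `FOO` variable and its
closures are purely illustrative):

```nushell
$env.ENV_CONVERSIONS = $env.ENV_CONVERSIONS | merge {
    "FOO": {
        from_string: {|s| $s | split row (char esep) }
        to_string: {|v| $v | str join (char esep) }
    }
}
```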

# User-Facing Changes

No behavioral changes - Users will now have an empty `ENV_CONVERSIONS`
at startup by default, but the behavior should not change.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
2024-12-12 10:50:28 -06:00
035b882db1 Update sample and scaffold files (#14568)
# Description

Tidying up some of the wording of the sample and scaffold files to align
with our current recommendations:

* Continue to generate a commented-only `env.nu` and `config.nu` on
first launch.
* The generated `env.nu` mentions that most configuration can be done in
`config.nu`
* The `sample_env.nu` mentions the same. I might try getting rid of
`config env --sample` entirely (it's new since 0.100 anyway).
* All configuration is now documented "in-shell" in `sample_config.nu`,
which can be viewed using `config nu --sample` - This means that
environment variables that used to be in `sample_env.nu` have been moved
to `sample_config.nu`.

# User-Facing Changes

Doc-only

# Tests + Formatting

Doc-only changes, but:

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Need to work on updates to Config chapter
2024-12-12 10:43:20 -06:00
0872e9c3ae Allow both NU_PLUGIN_DIRS const and env at the same time (#14553)
# Description

Fix #14544 and is also the reciprocal of #14549.

Before: If both a const and env `NU_PLUGIN_DIRS` were defined at the
same time, the env paths would not be used.
After: The directories from `const NU_PLUGIN_DIRS` are searched for a
matching filename, and if not found, `$env.NU_PLUGIN_DIRS` directories
will be searched.

Before: `$env.NU_PLUGIN_DIRS` was unnecessarily set both in main() and in
default_env.nu
After: `$env.NU_PLUGIN_DIRS` is only set in main()

Before: `$env.NU_PLUGIN_DIRS` was set to `plugins` in the config
directory
After: `$env.NU_PLUGIN_DIRS` is set to an empty list and `const
NU_PLUGIN_DIRS` is set to the directory above.

Also updates `sample_env.nu` to use the `const`
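
The resulting setup looks roughly like this in a config file. A sketch
mirroring the description above (the path is the usual plugins directory
under the config dir):

```nushell
# searched first for plugin filenames
const NU_PLUGIN_DIRS = [
    ($nu.default-config-dir | path join 'plugins')
]

# still available and searched second, e.g. for runtime additions
$env.NU_PLUGIN_DIRS = []
```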

# User-Facing Changes

Most scenarios should work just fine as there continues to be an
`$env.NU_PLUGIN_DIRS` to append to or override.

However, there is a small chance of a breaking change if someone was
*querying* the old default `$env.NU_PLUGIN_DIRS`.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

Also updated the `env` tests and added one for the `const`.

# After Submitting

Config doc updates
2024-12-11 11:41:06 -06:00
1a573d17c0 Revert "For # to start a comment, then it either need to be the first chara…" (#14560)
Reverts nushell/nushell#14548

I'm finding many oddities
2024-12-11 07:08:15 -06:00
4f20c370f9 Bump scraper from 0.21.0 to 0.22.0 (#14557)
Bumps [scraper](https://github.com/causal-agent/scraper) from 0.21.0 to
0.22.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/causal-agent/scraper/releases">scraper's
releases</a>.</em></p>
<blockquote>
<h2>v0.22.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Make current nightly version of Clippy happy. by <a
href="https://github.com/adamreichold"><code>@​adamreichold</code></a>
in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/220">rust-scraper/scraper#220</a></li>
<li>RFC: Drop hash table for per-element attributes for more compact
sorted vector by <a
href="https://github.com/adamreichold"><code>@​adamreichold</code></a>
in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/221">rust-scraper/scraper#221</a></li>
<li>Bump ego-tree to version 0.10.0 by <a
href="https://github.com/cfvescovo"><code>@​cfvescovo</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/222">rust-scraper/scraper#222</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-scraper/scraper/compare/v0.21.0...v0.22.0">https://github.com/rust-scraper/scraper/compare/v0.21.0...v0.22.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="dcf5e0c781"><code>dcf5e0c</code></a>
Version 0.22.0</li>
<li><a
href="932ed03849"><code>932ed03</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/222">#222</a>
from rust-scraper/bump-ego-tree</li>
<li><a
href="483ecab721"><code>483ecab</code></a>
Bump ego-tree to version 0.10.0</li>
<li><a
href="26f04ed47c"><code>26f04ed</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/221">#221</a>
from rust-scraper/sorted-vec-instead-of-hash-table</li>
<li><a
href="ee66ee8d23"><code>ee66ee8</code></a>
Drop hash table for per-element attributes for more compact sorted
vector.</li>
<li><a
href="8d3e74bf36"><code>8d3e74b</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/220">#220</a>
from rust-scraper/make-clippy-happy</li>
<li><a
href="47cc9de953"><code>47cc9de</code></a>
Make current nightly version of Clippy happy.</li>
<li>See full diff in <a
href="https://github.com/causal-agent/scraper/compare/v0.21.0...v0.22.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=scraper&package-manager=cargo&previous-version=0.21.0&new-version=0.22.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-11 09:40:09 +08:00
e4bb248142 For # to start a comment, then it either need to be the first chara… (#14548)
This PR should close
1. #10327 
1. #13667 
1. #13810 
1. #14129 

# Description
For `#` to start a comment, it either needs to be the first character of
the token or be prefixed with ` ` (space).
So now you can do this:
``` 
~/Projects/nushell> 1..10 | each {echo test#testing }                                                                                                                     12/05/2024 05:37:19 PM
╭───┬──────────────╮
│ 0 │ test#testing │
│ 1 │ test#testing │
│ 2 │ test#testing │
│ 3 │ test#testing │
│ 4 │ test#testing │
│ 5 │ test#testing │
│ 6 │ test#testing │
│ 7 │ test#testing │
│ 8 │ test#testing │
│ 9 │ test#testing │
╰───┴──────────────╯
```  

# User-Facing Changes
It is a breaking change if anyone expected comments to start in the
middle of a string without any prefixing ` ` (space).

# Tests + Formatting
Did all: 
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

# After Submitting
I can't see that I need to update anything in [the
documentation](https://github.com/nushell/nushell.github.io), but please
point me in the right direction if there is anything.

---------

Co-authored-by: Wind <WindSoilder@outlook.com>
2024-12-11 09:39:36 +08:00
dff6268d66 du: add -l/--long flag, remove -a/--all flag (#14407)
# Description
Closes:  #14387 
~To make it happen, just need to added `-l` flag to `du`, and pass it to
`DirBuilder`, `DirInfo`, `FileInfo`
Then tweak `impl From<DirInfo> for Value` and `impl From<FileInfo> for
Value` impl.~

---

Edit: this PR is going to:
1. Exclude directories and files columns by default
2. Added `-l/--long` flag to output directories and files columns
3. When running `du`, it will output the files as well. Previously it
didn't output the size of files.

To make it happen, I just needed to add the `-r` flag to `du`, pass it to
`DirBuilder`, `DirInfo`, and `FileInfo`, then tweak the
`impl From<DirInfo> for Value` and `impl From<FileInfo> for Value` impls.

And rename some variables.

# User-Facing Changes
`du` no longer outputs the `directories` and `files` columns by default;
the `-r` flag will show the `directories` column, and the `-f` flag will
show the `files` column.

```nushell
> du nushell
╭───┬────────────────────────────────────┬──────────┬──────────╮
│ # │                path                │ apparent │ physical │
├───┼────────────────────────────────────┼──────────┼──────────┤
│ 0 │ /home/windsoilder/projects/nushell │ 34.6 GiB │ 34.7 GiB │
├───┼────────────────────────────────────┼──────────┼──────────┤
│ # │                path                │ apparent │ physical │
╰───┴────────────────────────────────────┴──────────┴──────────╯
> du nushell --recursive --files # It outputs two more columns, `directories` and `files`, but the output is too long to paste here.
```
# Tests + Formatting
Added 1 test

# After Submitting
NaN
2024-12-10 11:22:56 -06:00
8f9aa1a250 Change help commands to use name from scope instead of the name from the declaration (#14490)
# Description

Before this PR, `help commands` uses the name from a command's
declaration rather than the name in the scope. This is problematic when
trying to view the help page for the `main` command of a module. For
example, `std bench`:

```nushell
use std/bench
help bench
# => Error: nu::parser::not_found
# => 
# =>   × Not found.
# =>    ╭─[entry #10:1:6]
# =>  1 │ help bench
# =>    ·      ──┬──
# =>    ·        ╰── did not find anything under this name
# =>    ╰────
```

This can also cause confusion when importing specific commands from
modules. Furthermore, if there are multiple commands with the same name
from different modules, the help text for _both_ will appear when
querying their help text (this is especially problematic for `main`
commands, see #14033):

```nushell
use std/iter
help iter find
# => Error: nu::parser::not_found
# => 
# =>   × Not found.
# =>    ╭─[entry #3:1:6]
# =>  1 │ help iter find
# =>    ·      ────┬────
# =>    ·          ╰── did not find anything under this name
# =>    ╰────
help find
# => Searches terms in the input.
# => 
# => Search terms: filter, regex, search, condition
# => 
# => Usage:
# =>   > find {flags} ...(rest) 
# [...]
# => Returns the first element of the list that matches the
# => closure predicate, `null` otherwise
# [...]
# (full text omitted for brevity)
```

This PR changes `help commands` to use the name as it is in scope, so
prefixing any command in scope with `help` will show the correct help
text.


```nushell
use std/bench
help bench
# [help text for std bench]
use std/iter
help iter find
# [help text for std iter find]

use std
help std bench
# [help text for std bench]
help std iter find
# [help text for std iter find]
```

Additionally, the IR code generation for commands called with the
`--help` flag has been updated to reflect this change.

This does have one side effect: when a module has a `main` command
defined, running `help <name>` (which checks `help aliases`, then `help
commands`, then `help modules`) will show the help text for the `main`
command rather than the module. The help text for the module is still
accessible with `help modules <name>`.

Fixes #10499, #10311, #11609, #13470, #14033, and #14402.
Partially fixes #10707.
Does **not** fix #11447.

# User-Facing Changes

* Help text for commands can be obtained by running `help <command
name>`, where the command name is the same thing you would type in order
to execute the command. Previously, it was the name of the function as
written in the source file.
  * For example, for the following module `spam` with command `meow`:
    ```nushell
    module spam { 
        # help text
        export def meow [] {}
    }
    ```
    * Before this PR:
* Regardless of how `meow` is `use`d, the help text is viewable by
running `help meow`.
    * After this PR:
* When imported with `use spam`: The `meow` command is executed by
running `spam meow` and the `help` text is viewable by running `help
spam meow`.
* When imported with `use spam meow`: The `meow` command is executed by
running `meow` and the `help` text is viewable by running `help meow`.
* When a module has a `main` command defined, `help <module name>` will
return help for the main command, rather than the module. To access the
help for the module, use `help modules <module name>`.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
N/A
2024-12-10 09:27:30 -06:00
7d2e8875e0 Make timeit take only closures as an argument (#14483)
# Description

Fixes #14401, where expressions passed to `timeit` would execute twice.
This PR removes expression support for `timeit`, as this behavior is
almost exclusive to `timeit` and can hinder migration to the IR
evaluator in the future. Additionally, `timeit` used to be able to take
a `block` as an argument. Blocks should probably only be allowed for
parser keywords, so this PR changes `timeit` to instead only take
closures as an argument. This also fixes an issue where environment
updates inside the `timeit` block would affect the parent scope and all
commands later in the pipeline.

```nu
> timeit { $env.FOO = 'bar' }; print $env.FOO
bar
```

# User-Facing Changes

`timeit` now only takes a closure as the first argument.
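
A brief sketch of the resulting usage (illustrative, not taken from the PR):

```nushell
# accepted: a closure argument; `timeit` returns the elapsed duration
timeit { sleep 100ms }
# => roughly 100ms

# no longer accepted: a bare expression argument
# timeit (2 + 2)
```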

# After Submitting

Update examples in the book/docs if necessary.
2024-12-10 23:08:53 +08:00
3515e3ee28 Remove grid icons deprecation warning (#14526)

# Description

Noticed this TODO, so I did as it said.

# User-Facing Changes
N/A (the functionality was already removed)

# Tests + Formatting
N/A

# After Submitting
N/A

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-12-10 06:36:27 -06:00
cf82814606 Use const NU_LIB_DIRS in startup (#14549)
# Description

A slower, gentler alternative to #14531, in that we're just moving one
setting *out* of `default_env.nu` in this PR ;-).

All this does is transition the startup config from using `$env.NU_LIB_DIRS`
to the `const NU_LIB_DIRS`. It also updates `sample_env.nu` to
reflect the changes.

Details:

Before: `$env.NU_LIB_DIRS` was unnecessarily set both in `main()` and in
`default_env.nu`
After: `$env.NU_LIB_DIRS` is only set in `main()`

Before: `$env.NU_LIB_DIRS` was set to `config-dir/scripts` and
`data-dir/completions`
After: `$env.NU_LIB_DIRS` is set to an empty list, and `const
NU_LIB_DIRS` is set to the directories above

Before: Using `--include-path (-I)` would set the `$env.NU_LIB_DIRS`
After: Using `--include-path (-I)` sets the constant `$NU_LIB_DIRS`
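
For reference, a minimal sketch of the two styles in a user config (illustrative paths; assumes `path join` is usable in a `const` context):

```nushell
# old style: environment variable
$env.NU_LIB_DIRS = [ ($nu.default-config-dir | path join 'scripts') ]

# new style: parse-time constant
const NU_LIB_DIRS = [ ($nu.default-config-dir | path join 'scripts') ]
```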

# User-Facing Changes

There shouldn't be any breaking changes here. The `$env.NU_LIB_DIRS`
version still works for most cases. There are a few areas we need to clean up
to make sure that the const is usable (`nu-check`, et al.), but they will
still work in the meantime with the older `$env` version.

# Tests + Formatting

* Changed the Type-check on the `$env` version.
* Added a type check for the const version.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Doc updates
2024-12-10 06:36:05 -06:00
fc29d82614 Only run from_string conversion on strings (#14509)
# Description

#14249 loaded `convert_env_values()` several times to force more updates
to `ENV_CONVERSION`. This allows the user to treat variables as
structured data inside `config.nu` (and others).

Unfortunately, `convert_env_values()` did not originally anticipate
being called more than once, so it would attempt to re-convert values
that had already been converted. This usually leads to an error in the
conversion closure.

With this PR, values are only converted with `from_string` if they are
still strings; otherwise they are skipped and their existing value is
used.
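
For context, a typical entry in `$env.ENV_CONVERSIONS` looks roughly like this (simplified from the standard config example); with this change, the `from_string` closure is only applied while the value is still a string:

```nushell
$env.ENV_CONVERSIONS = {
    "PATH": {
        from_string: {|s| $s | split row (char esep) }
        to_string: {|v| $v | str join (char esep) }
    }
}
```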

# User-Facing Changes

No user-facing change when compared to 0.100, since closures written for
0.100's `ENV_CONVERSION` now work again without errors.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting

Will remove the "workaround" from the Config doc preview.
2024-12-10 06:14:43 -06:00
75ced3e945 Fix table command when targeting WASM (#14530)

# Description
In this PR I made the `cwd` parameter of the functions behind the `table`
command unused when targeting `not(feature = "os")`, since without an OS,
and therefore without a filesystem, we don't have any real concept of a
current working directory. This allows using the `table` command in the
WASM context.

# User-Facing Changes
None.

# Tests + Formatting
My tests timed out on the http stuff, but I can't see why this change would
trigger a test failure. Let's see what the CI finds out.

# After Submitting
2024-12-10 06:10:28 -06:00
685dc78739 update to reedline 9eb3c2d (#14541)
# Description

This PR updates nushell to the latest commit of reedline that fixes some
rendering issues on window resize.

# User-Facing Changes

# Tests + Formatting

# After Submitting
2024-12-08 08:43:36 -06:00
9daa5f9177 Fix silent failure of parsing input output types (#14510)
- This PR should fix/close:
  - #11266
  - #12893 
  - #13736 
  - #13748
  - #14170
- It doesn't fix #13736 though unfortunately. The issue there is at a
different level to this fix (I think probably in the lexing somewhere,
which I haven't touched).

# The Problem

The linked issues have many examples of the problem and the related
confusion it causes, but I'll give some more examples here for
illustration. It boils down to the following:

This doesn't type check (good):
```nu
def foo []: string -> int { false }
```

This does (bad):
```nu
def foo [] : string -> int { false }
```

This is because the parser completely ignores all of those characters. The
following also compiles in 0.100.0:
```nu
def blue [] Da ba Dee da Ba da { false }
```

And this also means commands which have a completely fine type, but an
extra space before `:`, lose that type information and end up as `any ->
any`, e.g.
```nu
def foo [] : int -> int {$in + 3}
```
```bash
$ foo --help
Input/output types:
  ╭───┬───────┬────────╮
  │ # │ input │ output │
  ├───┼───────┼────────┤
  │ 0 │ any   │ any    │
  ╰───┴───────┴────────╯
```

# The Fix

Special thank you to @texastoland whose draft PR (#12358) I referenced
heavily while making this fix.

That PR seeks to fix the invalid parsing by disallowing whitespace
between `[]` and `:` in declarations, e.g. `def foo [] : int -> any {}`

This PR instead allows the whitespace while properly parsing the type
signature. I think this is the better choice for a few reasons:
- The parsing is still straightforward and the information is all there
anyway,
- It's more consistent with type annotations in other places, e.g. `do
{|nums : list<int>| $nums | describe} [ 1 2 3 ]` from the [Type
Signatures doc
page](https://www.nushell.sh/lang-guide/chapters/types/type_signatures.html)
- It's more consistent with the new nu parser, which allows `let x :
bool = false` (current nu doesn't, but this PR doesn't change that)
- It will be less disruptive and should only break code where the types
are actually wrong (if your types were correct, but you had a space
before the `:`, those declarations will still compile and now have more
type information vs. throwing an error in all cases and requiring spaces
to be deleted)
- It's the more intuitive syntax for most functional programmers like
myself (haskell/lean/coq/agda and many more either allow or require
whitespace for type annotations)

I don't use Rust a lot, so I tried to keep most things the same and the
rest I wrote as if it was Haskell (if you squint a bit). Code
review/suggestions very welcome. I added all the tests I could think of
and `toolkit check pr` gives it the all-clear.

# User-Facing Changes

This PR meets part of the goal of #13849, but doesn't do anything about
parsing signatures twice and doesn't do much to improve error messages;
it just enforces the existing errors and error messages.

This will no doubt be a breaking change, mostly because the code is
already broken and users don't realise yet (one of my personal scripts
stopped compiling after this fix because I thought `def foo [] -> string
{}` was valid syntax). It shouldn't break any type-correct code though.
2024-12-07 09:55:15 -06:00
69fbfb939f lsp and --ide-check fix for path self related diagnostics (#14538)
# Description

Fixes
[this comment](https://github.com/nushell/nushell/pull/14303#issuecomment-2525100480),
where LSP and IDE integration would produce the following error:

---

```sh
nu --ide-check 100 "/path/to/env.nu"
```
with
```nu
const const_env = path self
```
would lead to
```
Error: nu:🐚:file_not_found

  × File not found
   ╭─[/path/to/env.nu:1:19]
 1 │ const const_env = path self
   ·                   ────┬────
   ·                       ╰── Couldn't find current file
   ╰────
```

# Tests + Formatting
- 🟢 `cargo fmt --all`
- 🟢 `cargo clippy --workspace`
2024-12-07 09:46:52 -06:00
f0ecaabd7d Expose "to html" command (#14536)

# Description
In this PR I exposed the `ToHtml` struct that comes from `nu-cmd-extra`.
I know this command isn't in the best state and should be reworked in some
way in the future, but having the struct exposed makes transforming data
to HTML much simpler for external tools, as the `PipelineData` can
easily be passed to the `ToHtml::run` method.

# User-Facing Changes

None.

# Tests + Formatting

I ran `fmt` and `check` but not `test`; this shouldn't break any tests
regardless.

# After Submitting
This will make my life easier for the demo page and for my Jupyter kernel.
2024-12-07 07:28:14 -06:00
c16f49cf19 add coreutils to search terms 2024-12-07 07:20:46 -06:00
9411458689 rewrite error message to not use the word function (#14533)
# Description

After [the discussion on
discord](https://discord.com/channels/601130461678272522/601130461678272524/1314600410882904125)
I propose to rephrase the error message to avoid using the word
`function`.

From `Return used outside of function` to `Return used outside of custom
command or closure`


![image](https://github.com/user-attachments/assets/d14331c8-e381-4b04-8709-bd6064e0791e)



# User-Facing Changes

None

# Tests + Formatting

toolkit.nu fmt is good
2024-12-06 18:09:11 -06:00
8771872d86 Add path self command for getting absolute paths to files at parse time (#14303)
Alternative solution to:
- #12195 

The other approach:
- #14305

# Description
Adds ~`path const`~ `path self`, a parse-time only command for getting
the absolute path of the source file containing it, or any file relative
to the source file.

- Useful for any script or module that makes use of non-nuscript files.
- Removes the need for `$env.CURRENT_FILE` and `$env.FILE_PWD`.
- Can be used in modules, sourced files or scripts.

# Examples

```nushell
# ~/.config/nushell/scripts/foo.nu
const paths = {
    self: (path self),
    dir: (path self .),
    sibling: (path self sibling),
    parent_dir: (path self ..),
    cousin: (path self ../cousin),
}

export def main [] {
    $paths
}
```

```nushell
> use foo.nu
> foo
╭────────────┬────────────────────────────────────────────╮
│ self       │ /home/user/.config/nushell/scripts/foo.nu  │
│ dir        │ /home/user/.config/nushell/scripts         │
│ sibling    │ /home/user/.config/nushell/scripts/sibling │
│ parent_dir │ /home/user/.config/nushell                 │
│ cousin     │ /home/user/.config/nushell/cousin          │
╰────────────┴────────────────────────────────────────────╯
```


Trying to run in a non-const context
```nushell
> path self
Error:   × this command can only run during parse-time
   ╭─[entry #1:1:1]
 1 │ path self 
   · ─────┬────
   ·      ╰── can't run after parse-time
   ╰────
  help: try assigning this command's output to a const variable
```

Trying to run in the REPL i.e. not in a file
```nushell
> const foo = path self
Error:   × Error: nu:🐚:file_not_found
  │ 
  │   × File not found
  │    ╭─[entry #3:1:13]
  │  1 │ const foo = path self
  │    ·             ─────┬────
  │    ·                  ╰── Couldn't find current file
  │    ╰────
  │ 
   ╭─[entry #3:1:13]
 1 │ const foo = path self
   ·             ─────┬────
   ·                  ╰── Encountered error during parse-time evaluation
   ╰────
```

# Comparison with #14305
## Pros
- Self contained implementation, does not require changes in the parser.
- More concise usage, especially with parent directories.

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-12-06 08:19:08 -06:00
cda9ae1e42 Shorten --max-time in tests and use a more stable error check (#14494)
- fixes flakey tests from solving #14241

# Description
This is a preliminary fix for the flaky tests; it also
shortens the `--max-time` in the tests.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->



---------

Signed-off-by: Alex Kattathra Johnson <alex.kattathra.johnson@gmail.com>
2024-12-06 13:03:13 +01:00
81d68cd478 Documentation and error handling around polars with-column --name (#14527)
The `--name` flag of `polars with-column` only works when used with an
eager dataframe. It will not work with lazy dataframes, and it will not
work when used with expressions (which force a conversion to a
lazyframe). This pull request adds better documentation to the flag and
better error messages for the cases where it will not work.
2024-12-06 05:17:18 -06:00
4c9078cccc add file column to scope modules output (#14524)
# Description

This PR adds a `file` column to the `scope modules` output table.


![image](https://github.com/user-attachments/assets/d69f3dec-3f9a-4ff9-b971-1fd533520ec7)
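
For example, a quick way to see the new column (illustrative):

```nushell
scope modules | select name file
```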


# User-Facing Changes

# Tests + Formatting

# After Submitting
2024-12-05 21:36:35 -06:00
f51828d049 Improve sleep example using multiple durations (#14520)
It is a way to cheat our parser and to avoid repeating yourself.
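
Presumably something along these lines (illustrative; `sleep` sums all the durations it is given):

```nushell
# one call with several durations instead of converting units by hand
sleep 1min 30sec
```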
2024-12-05 07:54:14 -06:00
d97562f6e8 fix multiline strings in NDNUON (#14519)
- should close https://github.com/nushell/nushell/issues/14517

# Description
This changes `to ndnuon` so that newlines are encoded as a literal
`\n`, which `from ndnuon` is already able to handle.
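
Roughly the intent (illustrative; the exact quoting of the output may differ):

```nushell
["foo\nbar"] | to ndnuon
# => "foo\nbar"  (the newline is escaped so each item stays on a single line)
```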

# User-Facing Changes
users should be able to encode multiline strings in NDNUON

# Tests + Formatting
new tests have been added:
- they don't pass on the first commit
- they do pass with the fix

# After Submitting
2024-12-05 07:53:33 -06:00
234484b6f8 normalize special characters in module names to allow variable access (#14353)
Fixes #14252

# User-Facing Changes

- Special characters in module names are replaced with underscores when
  importing constants, preventing "expected valid variable name":

```nushell
> module foo-bar { export const baz = 1 }
> use foo-bar
> $foo_bar.baz
```

- "expected valid variable name" errors now include a suggestion list:

```nushell
> module foo-bar { export const baz = 1 }
> use foo-bar
> $foo-bar
Error: nu::parser::parse_mismatch_with_did_you_mean

  × Parse mismatch during operation.
   ╭─[entry #1:1:1]
 1 │ $foo-bar;
   · ────┬───
   ·     ╰── expected valid variable name. Did you mean '$foo_bar'?
   ╰────
```
2024-12-05 21:35:15 +08:00
3bd45c005b Change tests which may invoke externals to use non-conflicting names (#14516)

# Description

Fixes #14515
Also tweaks the fix from #11261 _just in case_ someone has a `foo`
executable


# User-Facing Changes
N/A

# Tests + Formatting

# After Submitting
2024-12-04 19:26:48 -06:00
05b7c1fffa Update roxmltree from 0.19 to 0.20, the latest version (#14513)
# Description


This simply updates `roxmltree` from 0.19.0 to 0.20.0, the latest
release, with no code changes required.

# User-Facing Changes


N/A
2024-12-04 21:39:45 +01:00
a332712275 add function to make env vars case-insensitive (#14390)
# Description

This PR adds a new function that allows one to get an env var
case-insensitively. I did this so we can hopefully stop having problems
when Windows has HKLM as path and HKCU as Path.

Instead of just changing every function that used the original one, I
chose the ones that I thought were specific to getting the path. I
didn't want to go all in and make every env get case insensitive, but
maybe we should? 🤷🏻‍♂️

closes #12676

# User-Facing Changes

# Tests + Formatting

# After Submitting
2024-12-03 20:47:58 -06:00
b2d8bd08f8 allow select to stream more (#14492)
# Description

closes https://github.com/nushell/nushell/issues/14487

This PR tries to allow `select` to stream better by changing the for
loops that collected the output into a `Vec<Value>` before returning it
into maps that yield the data as it is processed.
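
A rough illustration of the effect (hypothetical pipeline; assumes downstream commands can now stop the stream early):

```nushell
# with a streaming `select`, `first 3` can stop the pipeline early
# instead of collecting all one million records first
1..1000000 | each {|i| {n: $i} } | select n | first 3
```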

One curiosity: `select` transforms the input into a `PipelineIterator`.
If I remove this code, it still passes all tests. I'm not sure all this
`PipelineIterator` code is even needed. I left it for someone to tell me
if it's necessary.

# User-Facing Changes

# Tests + Formatting

# After Submitting
2024-12-03 20:45:31 -06:00
217be24963 #14238 Now the file completion is triggered on a custom command after the first parameter. (#14481)
- this PR should close #14238

# Description
Solved as described here (First suggestion):
https://github.com/nushell/nushell/issues/14238#issuecomment-2506387012

Below I make the example from the issue, it shows that the completion
now works past the first parameter.
```
~/Projects/nushell> def list [...args] {                                                                                                  11/30/2024 03:21:24 PM
:::     $args
:::     | each {
:::         open $args
:::     }
::: }
~/Projects/nushell> cd tests/fixtures/completions/                                                                                        11/30/2024 03:25:24 PM
~/Projects/nushell/tests/fixtures/completions| list custom_completion.nu                                                                  11/30/2024 03:25:35 PM
another/               custom_completion.nu   directory_completion/  nushell
test_a/                test_b/                .hidden_file           .hidden_folder/
``` 

# User-Facing Changes
With the changes introduced to completions in
`baadaee0163a5066ae73509ff6052962b3422673`, completion no longer returns
early if it did not find "Operator completions".

This could have an impact on more than just custom commands, but it could
be seen as making everything a bit more robust.

# Tests + Formatting
I ran all of:  
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

# After Submitting
I do not think there is any need to update [the
documentation](https://github.com/nushell/nushell.github.io), right?

---------

Co-authored-by: Daniel Winther Petersen <daniel.winther.petersen@subaio.com>
2024-12-03 21:39:11 -05:00
bf457cd4fc Bump indexmap from 2.6.0 to 2.7.0 (#14505)
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.6.0 to
2.7.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md">indexmap's
changelog</a>.</em></p>
<blockquote>
<h2>2.7.0 (2024-11-30)</h2>
<ul>
<li>Added methods <code>Entry::insert_entry</code> and
<code>VacantEntry::insert_entry</code>, returning
an <code>OccupiedEntry</code> after insertion.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="539b401151"><code>539b401</code></a>
Merge pull request <a
href="https://redirect.github.com/indexmap-rs/indexmap/issues/361">#361</a>
from cuviper/insert_entry</li>
<li><a
href="998edb12fe"><code>998edb1</code></a>
Release 2.7.0</li>
<li><a
href="2a0ca97417"><code>2a0ca97</code></a>
Add <code>{Entry,VacantEntry}::insert_entry</code></li>
<li><a
href="dceb0f0598"><code>dceb0f0</code></a>
Merge pull request <a
href="https://redirect.github.com/indexmap-rs/indexmap/issues/360">#360</a>
from cuviper/collect_vec_list</li>
<li><a
href="c095322249"><code>c095322</code></a>
ci: downgrade hashbrown for 1.63</li>
<li><a
href="7d8cef8b4b"><code>7d8cef8</code></a>
Use rayon-1.9.0's <code>collect_vec_list</code></li>
<li>See full diff in <a
href="https://github.com/indexmap-rs/indexmap/compare/2.6.0...2.7.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=indexmap&package-manager=cargo&previous-version=2.6.0&new-version=2.7.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 09:57:05 +08:00
88a8e986eb Bump titlecase dependency (#14502)
# Description

v3 drops the dependency on joinery, as well as on lazy_static. The MSRV
is bumped to 1.70.0 but that is still way below what nushell requires.

# User-Facing Changes

N/A

# Tests + Formatting

All tests pass (including nu-command which is the direct user)

# After Submitting

N/A

Signed-off-by: Michel Lind <salimma@fedoraproject.org>
2024-12-04 09:40:23 +08:00
5f0567f8df Bump multipart-rs from 0.1.11 to 0.1.13 (#14506)
Bumps [multipart-rs](https://github.com/feliwir/multipart-rs) from
0.1.11 to 0.1.13.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/feliwir/multipart-rs/commits">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=multipart-rs&package-manager=cargo&previous-version=0.1.11&new-version=0.1.13)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 09:38:27 +08:00
a980b9d0a6 Bump ureq from 2.10.1 to 2.12.0 (#14507)
Bumps [ureq](https://github.com/algesten/ureq) from 2.10.1 to 2.12.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/algesten/ureq/blob/main/CHANGELOG.md">ureq's
changelog</a>.</em></p>
<blockquote>
<h1>2.12.0</h1>
<ul>
<li>Bump MSRV 1.67 -&gt; 1.71 because rustls will soon adopt it (<a
href="https://redirect.github.com/algesten/ureq/issues/905">#905</a>)</li>
<li>Unpin rustls dep (&gt;=0.23.19) (<a
href="https://redirect.github.com/algesten/ureq/issues/905">#905</a>)</li>
</ul>
<h1>2.11.0</h1>
<ul>
<li>Fixes for changes to cargo-deny (<a
href="https://redirect.github.com/algesten/ureq/issues/882">#882</a>)</li>
<li>Pin rustls dep on 0.23.19 to keep MSRV 1.67 (<a
href="https://redirect.github.com/algesten/ureq/issues/878">#878</a>)</li>
<li>Bump MSRV 1.63 -&gt; 1.67 due to time crate (<a
href="https://redirect.github.com/algesten/ureq/issues/878">#878</a>)</li>
<li>Re-export rustls (<a
href="https://redirect.github.com/algesten/ureq/issues/813">#813</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3d6048e5b0"><code>3d6048e</code></a>
2.12.0</li>
<li><a
href="3e09aea948"><code>3e09aea</code></a>
Fix http crate confusion</li>
<li><a
href="cd6a0849bb"><code>cd6a084</code></a>
Github CI only run 1.71</li>
<li><a
href="dfab2607d6"><code>dfab260</code></a>
Update readme</li>
<li><a
href="af2143be89"><code>af2143b</code></a>
Update changelog</li>
<li><a
href="feea74b343"><code>feea74b</code></a>
Unpin rustls and bump MSRV to 1.71</li>
<li><a
href="937b1da311"><code>937b1da</code></a>
Update changelog</li>
<li><a
href="a44804833c"><code>a448048</code></a>
Note in readme about MSRV</li>
<li><a
href="24686d0829"><code>24686d0</code></a>
2.11.0</li>
<li><a
href="10869e22c0"><code>10869e2</code></a>
Fix incorrect feature flags</li>
<li>Additional commits viewable in <a
href="https://github.com/algesten/ureq/compare/2.10.1...2.12.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ureq&package-manager=cargo&previous-version=2.10.1&new-version=2.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 09:37:52 +08:00
08504f6e06 Bump bytes from 1.8.0 to 1.9.0 (#14508)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.8.0 to 1.9.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/releases">bytes's
releases</a>.</em></p>
<blockquote>
<h2>Bytes v1.9.0</h2>
<h1>1.9.0 (November 27, 2024)</h1>
<h3>Added</h3>
<ul>
<li>Add <code>Bytes::from_owner</code> to enable externally-allocated
memory (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/742">#742</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix typo in Buf::chunk() comment (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/744">#744</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Replace BufMut::put with BufMut::put_slice in Writer impl (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/745">#745</a>)</li>
<li>Rename hex_impl! to fmt_impl! and reuse it for fmt::Debug (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/743">#743</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md">bytes's
changelog</a>.</em></p>
<blockquote>
<h1>1.9.0 (November 27, 2024)</h1>
<h3>Added</h3>
<ul>
<li>Add <code>Bytes::from_owner</code> to enable externally-allocated
memory (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/742">#742</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix typo in Buf::chunk() comment (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/744">#744</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Replace BufMut::put with BufMut::put_slice in Writer impl (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/745">#745</a>)</li>
<li>Rename hex_impl! to fmt_impl! and reuse it for fmt::Debug (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/743">#743</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d0a14deeb5"><code>d0a14de</code></a>
chore: prepare bytes v1.9.0 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/748">#748</a>)</li>
<li><a
href="54f1c26f69"><code>54f1c26</code></a>
Rename hex_impl! to fmt_impl! and reuse it for fmt::Debug (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/743">#743</a>)</li>
<li><a
href="4cd8969e85"><code>4cd8969</code></a>
Replace <code>BufMut::put</code> with <code>BufMut::put_slice</code> in
Writer impl (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/745">#745</a>)</li>
<li><a
href="2d996a2b41"><code>2d996a2</code></a>
Fix typo in <code>Buf::chunk()</code> comment (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/744">#744</a>)</li>
<li><a
href="30ee8e9cba"><code>30ee8e9</code></a>
Add <code>Bytes::from_owner</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/742">#742</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/bytes/compare/v1.8.0...v1.9.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bytes&package-manager=cargo&previous-version=1.8.0&new-version=1.9.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 09:37:47 +08:00
da66484578 Bump crate-ci/typos from 1.28.1 to 1.28.2 (#14503)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.28.1 to
1.28.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.28.2</h2>
<h2>[1.28.2] - 2024-12-02</h2>
<h3>Fixes</h3>
<ul>
<li>Don't correct <code>parametrize</code> variants</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.28.2] - 2024-12-02</h2>
<h3>Fixes</h3>
<ul>
<li>Don't correct <code>parametrize</code> variants</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2872c382bb"><code>2872c38</code></a>
chore: Release</li>
<li><a
href="6af7c259ed"><code>6af7c25</code></a>
docs: Update changelog</li>
<li><a
href="2d9a242c9f"><code>2d9a242</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1165">#1165</a>
from epage/fix</li>
<li><a
href="97bbab80c8"><code>97bbab8</code></a>
fix(dict): Don't correct parametrized</li>
<li><a
href="679c99cf66"><code>679c99c</code></a>
test(dict): Consistenty filter out unverified entries</li>
<li>See full diff in <a
href="https://github.com/crate-ci/typos/compare/v1.28.1...v1.28.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.28.1&new-version=1.28.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 09:37:19 +08:00
424efdaafe Make glob stream (#14495)

# Description

Makes the `glob` command stream
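
For example (illustrative), downstream commands can now consume matches as they are produced:

```nushell
# can return as soon as five matches are found
glob **/*.rs | first 5
```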

# User-Facing Changes

The glob command now streams

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
N/A
2024-12-03 15:21:09 -06:00
a65a7df209 Add remove as a search term on drop commands (#14493)
# Description
Better discoverability of `drop` subcommands
"I want to remove items by index" -> `drop nth`
h/t @amtoine
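
For instance (illustrative):

```nushell
[a b c d] | drop nth 1
# => removes the item at index 1, leaving [a, c, d]
```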

# User-Facing Changes
More search terms
2024-12-03 16:59:37 +01:00
c63bb81c3e Convert Filesize to Int (#14491)
# Description
Fixes the conversion of Value::Filesize to Value::Int allowing things
like `ps | polars into-df` to work correctly.
2024-12-03 06:08:41 -06:00
a70e77ba48 Update procfs and which dependencies to their latest releases (#14489)
# Description


This simply updates `procfs` from 0.16.0 to 0.17.0 and `which` from 6.0
to 7.0 – in each case, to the latest release – with no code changes
required.



# Notes

The release notes for `procfs` 0.17.0 are at
https://github.com/eminence/procfs/releases/tag/v0.17.0.

The release notes for `which` 7.0.0 are at
https://github.com/harryfei/which-rs/releases/tag/7.0.0.
2024-12-02 21:20:19 +01:00
c8b5909ee8 Feature: PWD-per-drive to facilitate working on multiple drives at Windows (#14411)
This PR implements PWD-per-drive as described in discussion #14355

# Description
On Windows, CMD or PowerShell assigns each drive its own current
directory. For example, if you are in 'C:\Windows', switch to 'D:', and
navigate to 'D:\Game', you can return to 'C:\Windows' by simply typing
'C:'.

This PR enables Nushell on Windows to have the same capability, allowing
each drive to maintain its own PWD (Present Working Directory).

# User-Facing Changes
Currently, 'cd' or 'ls' only accept absolute paths if the path starts
with 'C:' or another drive letter. With PWD-per-drive, users can use
'cd' (or auto cd) and 'ls' in the same way as 'cd' and 'dir' in
PowerShell, or similarly to 'cd' and 'dir' in CMD (noting that 'cd' in CMD
has slightly different behavior: 'cd' to another drive only changes that
drive's current directory but does not switch to it).

Interaction example on switching between drives:
```Nushell
~>D:
D:\>cd Test
D:\Test\>C:
~>D:
D:\Test\>C:
~>cd D:..
D:\>C:x/../y/../z/..
~>cd D:Test\Test
D:\Test\Test>C:
~>D:...
D:\>
```
Interaction example on auto-completion at cmd line:
```Nushell
~>cd D:\test[Enter]
D:\test>~[Enter]
~>D:[TAB]
~>D:\test[Enter]
D:\test>c:.c[TAB]
c:\users\nushell\.cargo\ c:\users\nushell\.config\
```
Interaction example of passing PWD-per-drive to a child process: (Note: CMD
will use it, but PowerShell will ignore it, though the info is still prepared
for the child process)
```Nushell
~>cd D:\Test
D:\Test>cd E:\Test
E:\Test\>~
~>CMD
Microsoft Windows [Version 10.0.22631.4460]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Nushell>d:
D:\Test>e:
E:\Test>
```

# Brief Change Description
 
1. Added 'crates/nu-path/src/pwd_per_drive.rs' to implement a 26-slot
array mapping drive letters to PWDs. Test cases are included in the same
file, along with a doctest for the usage of PWD-per-drive.
2. Modified 'crates/nu-path/src/lib.rs' to declare module of
pwd_per_drive and export struct for PWD-per-drive.
3. Modified 'crates/nu-protocol/src/engine/stack.rs' to sync PWD when
set_cwd() is called. Add PWD-per-drive map as member. Clone between
parent and child. Stub/proxy for nu_path::expand_path_with() to
facilitate filesystem commands using PWD-per-drive.
4. Modified 'crates/nu-cli/src/repl.rs' auto_cd uses PWD-per-drive to
expand path.
5. Modified 'crates/nu-cli/src/completions/completion_common.rs' to
expand relative path when press [TAB] at command line.
6. Modified 'crates/nu-engine/src/env.rs' to collect PWD-per-drive info
as env vars for child processes, as CMD or PowerShell do; this lets
child processes inherit the PWD-per-drive info.
7. Modified 'crates/nu-engine/src/eval.rs': the caller clones the callee's
PWD-per-drive info, supporting 'def --env'.
8. Modified 'crates/nu-engine/src/eval_ir.rs', 'def --env' support.
Remove duplicated fn redirect_env()
9. Modified 'src/run.rs' to initialize PWD-per-drive at startup.

filesystem commands that modified:
1. Modified 'crates/nu-command/src/filesystem/cd.rs': a one-line change to
use stack-scoped PWD-per-drive.
Other commands, commit pending....

Local test def --env OK:
```nushell
E:\study\nushell> def --env env_cd_demo [] {                 
:::     cd ~
:::     cd D:\Project
:::     cd E:Crates
::: }
E:\study\nushell>                                                   
E:\study\nushell> def cd_no_demo [] {                   
:::     cd ~
:::     cd D:\Project
:::     cd E:Crates
::: }
E:\study\nushell> cd_no_demo                                 
E:\study\nushell> C:
C:\>D:
D:\>E:                                     
E:\study\nushell>env_cd_demo
E:\study\nushell\crates> C:
~>D:
D:\Project>E:                                     
E:\study\nushell\crates>     
```

# Tests + Formatting

- `cargo fmt --all -- --check` passed.
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used`
passed.
- `cargo test --workspace` passed on Windows developer mode and Ubuntu.
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` passed.
- nushell:
```
> use toolkit.nu  # or use an `env_change` hook to activate it automatically
> toolkit check pr
> ```
passed

---------

Co-authored-by: pegasus.cadence@gmail.com <pegasus.cadence@gmail.com>
2024-12-02 12:17:46 -06:00
3b0ba923e4 Fix missing installed_plugins field in version command (#14488)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
In #14418 I added the `plugin` feature to the crate `nu-cmd-lang`. I
forgot to include that feature in the `nushell/plugin` feature. This
caused the `version` command to not have the `installed_plugins` field.
With this PR I fixed that.
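
A quick way to confirm the field is back:

```nushell
# should list the installed plugins again instead of omitting the field
version | get installed_plugins
```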

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

None 😇

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

Running `version` shows `installed_plugins` again.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

That should be it.
2024-12-02 09:02:06 -06:00
1940b36e07 Add environment variables for sourced files (#14486)
# Description

I always wondered why the module env vars `CURRENT_FILE`, `FILE_PWD`,
`PROCESS_PATH` weren't available in the source command. I tried to add
them here. I think it could be helpful but I'm not sure. I'm also not
sure this hack is what we should do but I thought I'd put it out there
for fun.

Thoughts?

### Run Module (works as it did before)

```nushell
❯ open test_module.nu
def main [] {
  print $"$env.CURRENT_FILE = ($env.CURRENT_FILE?)"
  print $"$env.FILE_PWD = ($env.FILE_PWD?)"
  print $"$env.PROCESS_PATH = ($env.PROCESS_PATH?)"
}
❯ nu test_module.nu
$env.CURRENT_FILE = /Users/fdncred/src/nushell/test_module.nu
$env.FILE_PWD = /Users/fdncred/src/nushell
$env.PROCESS_PATH = test_module.nu
```
### Use Module (works as it did before)
```nushell
❯ open test_module2.nu
export-env {
  print $"$env.CURRENT_FILE = ($env.CURRENT_FILE?)"
  print $"$env.FILE_PWD = ($env.FILE_PWD?)"
  print $"$env.PROCESS_PATH = ($env.PROCESS_PATH?)"
}
❯ use test_module2.nu
$env.CURRENT_FILE = /Users/fdncred/src/nushell/test_module.nu
$env.FILE_PWD = /Users/fdncred/src/nushell
$env.PROCESS_PATH =
```
### Sourced non-module script (this is the new part)

> [!NOTE] 
> Note: We intentionally left out PROCESS_PATH since it's supposed to
> work like argv[0] in C, which is the name of the program being
> executed.
> Since we're not executing a program, we don't need to set it.


```nushell
❯ open test_source.nu
print $"$env.CURRENT_FILE = ($env.CURRENT_FILE?)"
print $"$env.FILE_PWD = ($env.FILE_PWD?)"
print $"$env.PROCESS_PATH = ($env.PROCESS_PATH?)"
❯ source test_source.nu
$env.CURRENT_FILE = /Users/fdncred/src/nushell/test_source.nu
$env.FILE_PWD = /Users/fdncred/src/nushell
$env.PROCESS_PATH = 
```

Also, what is PROCESS_PATH even supposed to be?

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-12-02 06:19:20 -06:00
dfec687a46 term query: refactor, add --beginning flag (#14446)
# Description

- Refactor code to be simpler.
- Make the mentioned changes.
- `scopeguard` is added as a direct dependency. It helps simplify the code.
Rather than roll an ad-hoc version of it myself, I thought it would be
better to use `scopeguard` as it was already an indirect dependency.

# User-Facing Changes

- Add `--beginning` flag, which is used to validate the response and
provide early errors in case of unexpected inputs.
- Both `terminator` and `beginning` sequences (when provided) are not
included in the command's output. Turns out they are almost always
removed from the output, and because they are known beforehand they can
be added back by the user.
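
A sketch of how the new flag can be combined with the terminal-size query from the original `term query` PR (assuming `--beginning` accepts a string just like `--terminator`; the reply has the form `ESC[<rows>;<cols>R`):

```nushell
# --beginning validates the reply prefix and, like --terminator,
# is stripped from the output, leaving only "<rows>;<cols>"
term query (ansi size) --beginning "\e[" --terminator 'R'
| decode
| split row ';'
| each {into int}
```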
2024-12-01 20:02:48 -06:00
bcd85b6f3e Remove duplicate implementations of CallExt::rest (#14484)
# Description

Removes unnecessary usages of `Call::rest_iter_flattened` and
`get_rest_for_glob_pattern` and replaces them with `CallExt::rest`.

# User-Facing Changes

None
2024-12-01 15:03:45 +01:00
c4b919b24c enable test_cp_recurse on macos (#14358)
# Description

This PR enables some tests that were disabled on macos.

We shall see if the CI passes. (Update: CI has passed.)

# User-Facing Changes

Should be no user-facing changes as only a test-file is modified.

# Tests + Formatting

Test coverage should increase

Co-authored-by: Jasha <jsimpson@hiddenroad.com>
2024-12-01 05:59:40 -06:00
c560bac13f Add --long flag for sys cpu (#14485)
# Description

Fixes #14470 where the `sys cpu` command is slow. This was done by
removing the `cpu_usage` column from the default output, since it takes
400ms to calculate. Instead a `--long` flag was added that, when
provided, adds back the `cpu_usage` column.

```nu
# Before
> bench { sys cpu | length } | get mean
401ms 591µs 896ns

# After
> bench { sys cpu | length } | get mean
500µs 13ns # around 1-2ms in practice
```

# User-Facing Changes

- `sys cpu` no longer has a `cpu_usage` column by default.
- Added a `--long` flag for `sys cpu` to add back the removed column.
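
For instance (assuming the remaining columns are unchanged):

```nushell
# fast default output, without cpu_usage
sys cpu | columns

# opt back in to the slower usage figures
sys cpu --long | select name cpu_usage
```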
2024-12-01 05:56:42 -06:00
88d27fd607 explore: add more less key bindings and add Transition::None (#14468)
# Description
The `explore` command is `less`-like, but it's missing the `Emacs`
keybindings for up/down and PageUp/PageDown as well as the "q" to quit
out. When I looked into adding those additional keybindings, I noticed
there was a lot of duplicated code in the various views, so I refactored
the code into a new `trait CursorMoveHandler`. I also noticed that there
was an existing `TODO: should we add a noop transition instead of doing
Option<Transition> everywhere?` comment in the code. I went ahead and
implemented a new `Transition::None`, and that made the new `trait
CursorMoveHandler` code MUCH cleaner, in addition to making some of the
old code a little cleaner as well.

# User-Facing Changes
Users that are used to the keybindings for `less` should feel much more
comfortable using `explore`.

# Tests + Formatting
Unfortunately, there aren't any existing tests for the `explore`
command, so I didn't know where I should add new tests to cover my code
changes.

---------

Co-authored-by: paulie4 <203125+paulie4@users.noreply.github.com>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-11-30 08:22:52 -06:00
3d5f853b03 Start to Add WASM Support Again (#14418)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
The [nushell/demo](https://github.com/nushell/demo) project successfully
demonstrated running Nushell in the browser using WASM. However, the
current version of Nushell cannot be easily built for the
`wasm32-unknown-unknown` target, the default for `wasm-bindgen`.

This PR introduces initial support for the `wasm32-unknown-unknown`
target by disabling OS-dependent features such as filesystem access, IO,
and platform/system-specific functionality. This separation is achieved
using a new `os` feature in the following crates:

 - `nu-cmd-lang`
 - `nu-command`
 - `nu-engine`
 - `nu-protocol`

The `os` feature includes all functionality that interacts with an
operating system. It is enabled by default, but can be disabled using
`--no-default-features`. All crates that depend on these core crates now
use `--no-default-features` to allow compilation for WASM.

To demonstrate compatibility, the following script builds all crates
expected to work with WASM. Direct user interaction, running external
commands, working with plugins, and features requiring `openssl` are out
of scope for now due to their complexity or reliance on C libraries,
which are difficult to compile and link in a WASM environment.

```nushell
[ # compatible crates
	"nu-cmd-base",
	"nu-cmd-extra",
	"nu-cmd-lang",
	"nu-color-config",
	"nu-command",
	"nu-derive-value",
	"nu-engine",
	"nu-glob",
	"nu-json",
	"nu-parser",
	"nu-path",
	"nu-pretty-hex",
	"nu-protocol",
	"nu-std",
	"nu-system",
	"nu-table",
	"nu-term-grid",
	"nu-utils",
	"nuon"
] | each {cargo build -p $in --target wasm32-unknown-unknown --no-default-features}
```

## Caveats
This PR has a few caveats:
1. **`miette` and `terminal-size` Dependency Issue**
`miette` depends on `terminal-size`, which uses `rustix` when the target
is not Windows. However, `rustix` requires `std::os::unix`, which is
unavailable in WASM. To address this, I opened a
[PR](https://github.com/eminence/terminal-size/pull/68) for
`terminal-size` to conditionally compile `rustix` only when the target
is Unix. For now, the `Cargo.toml` includes patches to:
    - Use my forked version of `terminal-size`.
- ~~Use an unreleased version of `miette` that depends on
`terminal-size@0.4`.~~

These patches are temporary and can be removed once the upstream changes
are merged and released.

2. **Test Output Adjustments**
Due to the slight bump in the `miette` version, one test required
adjustments to accommodate minor formatting changes in the error output,
such as shifted newlines.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->
This shouldn't break anything but allows using some crates for targeting
`wasm32-unknown-unknown` to revive the demo page eventually.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

I did not add any extra tests, I just checked that compiling works, also
when using the host target but unselecting the `os` feature.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
~~Breaking the wasm support can be easily done by adding some `use`s or
by adding a new dependency, we should definitely add some CI that also
at least builds against wasm to make sure that building for it keep
working.~~
I added a job to build wasm.

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-11-30 07:57:11 -06:00
07a37f9b47 fix: Respect sort in custom completions (#14424)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description

This PR makes it so that when a custom completer sets `options.sort` to
false, completions aren't sorted. Previously, in #13311, I'd made it so
that setting `sort` to true would sort in alphabetical order, while
omitting it or setting it to false would sort it in the default order
for the chosen match algorithm (alphabetical for prefix matching, fuzzy
match score for fuzzy matching). I'd assumed that you'd always want to
sort completions and the important thing was choosing alphabetical
sorting vs the default sort order for your match algorithm. However,
this assumption was incorrect (see #13696 and [this
thread](https://discord.com/channels/601130461678272522/1302332259227144294)
in Discord).

An alternative would be to make `sort` accept `"alphabetical"`,
`"smart"`, and `"none"`/`null` rather than keeping it a boolean. But
that would be a breaking change and require more discussion, and I
wanted to keep this PR simple/small so that we can go back to the
sensible behavior as soon as possible.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Here are the different scenarios:
- If your custom completer returns a record with an `options` field
that's a record:
- If `options` contains `sort: true`, completions **will be sorted
according to the order set in the user's config**. Previously, they
would have been sorted in alphabetical order. This does mean that
**custom completers cannot explicitly choose to sort in alphabetical
order** anymore. I think that's an acceptable trade-off, though.
- If `options` contains `sort: false`, completions will not be sorted.
#13311 broke things so they would be sorted in the default order for the
match algorithm used. Before that PR, completions would not have been
sorted.
- If there's no `sort` option, that **will be treated as `sort: true`**.
Previously, this would have been treated as `sort: false`.
- Otherwise, nothing changes. Completions will still be sorted.
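
For reference, a minimal custom completer that opts out of sorting could look like this (the command name and completion values are purely illustrative):

```nushell
def fruits [] {
    {
        options: { sort: false }   # keep the order given below, don't re-sort
        completions: [ banana apple cherry ]
    }
}

def my-command [fruit: string@fruits] { print $fruit }
```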

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Added 1 test to make sure that completions aren't sorted with `sort:
false` explicitly set.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-29 20:47:57 -05:00
0172ad8461 Upgrading to polars 0.44 (#14478)
Upgrading to polars 0.44
2024-11-29 19:39:07 -06:00
e1f74a6d57 Add label rendering to try/catch rendered errors (#14477)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->



# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Before this PR, you can access rendered error values that are raised in
a `try/catch` block by accessing the `rendered` element of the catch
error value:
```
$ try { ls nonexist.txt } catch {|e| print "my cool error:" $e.rendered }
my cool error:
nu:🐚:directory_not_found

  × Directory not found
  help: /home/rose/nonexist.txt does not exist
```

However, the rendered errors don't include the labels present in the
real rendered error, which would look like this:
```
$ ls nonexist.txt
Error: nu:🐚:directory_not_found

  × Directory not found
   ╭─[entry #46:1:4]
 1 │ ls nonexist.txt
   ·    ──────┬─────
   ·          ╰── directory not found
   ╰────
  help: /home/rose/nonexist.txt does not exist
```

After this PR, the rendered error includes the labels:

```
$ try { ls nonexist.txt } catch {|e| print "my cool error:" $e.rendered }
my cool error:
Error: nu:🐚:directory_not_found

  × Directory not found
   ╭─[entry #4:1:10]
 1 │ try { ls nonexist.txt } catch {|e| print "my cool error:" $e.rendered }
   ·          ──────┬─────
   ·                ╰── directory not found
   ╰────
  help: /home/rose/nonexist.txt does not exist
```

This change is accomplished by using the standard error formatting code
to render an error. This respects the error theme as before without any
extra scaffolding, but it means that e.g., the terminal size is also
respected. I think this is fine because the way the error is rendered
already changed based on config, and I think that a "rendered" error
should give back _exactly_ what would be shown to the user anyway.

@fdncred, let me know if you have any concerns with the way this is
handled since you were the one who implemented this feature in the first
place.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

The `rendered` element of the `try`/`catch` error record now includes
labels in the error output.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`


# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
N/A
2024-11-29 19:02:26 -06:00
e17f6d654c Deprecate date to-record and date to-table (#14319)
# Description

Implements #11234 based on the comments there:

* (Previously implemented): `into record` handles nanoseconds (as well
as milliseconds and microseconds, which the deprecated commands didn't
support).
* Added deprecation warning to `date to-record` and `date to-table`
* Added new example for `into record` showing the conversion to a table
* Changed `std/dt` to use `into record`
* Added "Deprecated" category back to nu-protocol::Signature
* Assigned the deprecated commands to the Deprecated category so they are
categorized properly in the online Doc.

# User-Facing Changes

Deprecated command warning
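
The migration is a drop-in replacement, mirroring the new `into record` example mentioned above:

```nushell
# deprecated
date now | date to-record

# preferred replacement (also handles milliseconds, microseconds, and nanoseconds)
date now | into record
```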

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Searched doc for existing uses of `date to-record` and `date to-table`:

* For primary English-language docs, there are no uses other than in the
auto-generated command help, which will be updated based on this PR
* Other language translations appear to have an old use in several
places and will need to be updated to match the English-language doc.
2024-11-29 23:06:26 +01:00
817830940b raise ParseError if assign to a non-variable or non-mutable-variable (#14405)
# Description
While reviewing #14388, I think we can make some improvement on parser.

For the following code:
```nushell
let a = 3
a = 10   # should be error
$a = 10 # another error
```
I think they can raise `ParseError`, so nushell doesn't need to move
forward compiling IR block.

# User-Facing Changes
```nushell
let a = 3
a = 10
```
Will raise parse error instead of compile error.

# Tests + Formatting
Added 1 test.
2024-11-29 23:02:21 +01:00
dc9e8161d9 Implement chunk_by operation (#14410)
# Description

This pull request implements a new ~~partition-by~~ `chunk-by` command.
The operation takes a closure and partitions the input list into
sublists based on the return value of the closure.
- fixes #14149

Examples, tests, and documentation were added accordingly.


![image](https://github.com/user-attachments/assets/c272e2ec-9af3-4a88-832b-ddca4eb14c8f)


![image](https://github.com/user-attachments/assets/178968e7-c165-4d8c-858c-98584d653b0a)
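
A small example of the behavior shown in the screenshots above:

```nushell
# split a list into runs where the closure keeps returning the same value
[1 3 -2 -2 0 1 2] | chunk-by {|it| $it >= 0 }
# => [[1, 3], [-2, -2], [0, 1, 2]]
```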
2024-11-29 13:37:27 -08:00
7f61cbbfd6 Add Filesize type (#14369)
# Description
Adds a new `Filesize` type so that `FromValue` can be used to convert a
`Value::Filesize` to a `Filesize`. Currently, to extract a filesize from
a `Value` using `FromValue`, you have to extract an `i64` which coerces
`Value::Int`, `Value::Duration`, and `Value::Filesize` to an `i64`.

Having a separate type also allows us to enforce checked math to catch
overflows. Similarly, it allows us to specify other trait
implementations like `Display` in a common place.

# User-Facing Changes
Multiplication with filesizes now errors on overflow. Should not be a
breaking change for plugins (i.e., serialization) since `Filesize` is
marked with `serde(transparent)`.

# Tests + Formatting
Updated some tests.
2024-11-29 21:24:17 +00:00
acca56f77c Remove unused FlatShapes And/Or (#14476)
# Description
This removes the need for the `shape_and` and `shape_or` entries in the
themes. We did not color those underlying FlatShapes or operators
differently.

Closes #14372
# User-Facing Changes
Our theme handling currently doesn't reject invalid entries, so this should
not cause an error. The non-functional nature was already documented.
2024-11-29 22:23:40 +01:00
6bc695f251 Make Hooks fields non-optional to match the new config defaults (#14345)
# Description
Follow up to #14341. Changes the fields of `Hooks` to `Vec` or `Hashmap`
to match the new config defaults.

# User-Facing Changes
Mostly the same as #14341. `pre_prompt` and `pre_execution` must now be
a list, and `env_change` must be a record.
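
A minimal sketch of hook definitions that satisfy the new shapes (the closure bodies are placeholders):

```nushell
$env.config.hooks.pre_prompt = [{|| null }]        # must be a list
$env.config.hooks.pre_execution = []               # must be a list
$env.config.hooks.env_change = {                   # must be a record
    PWD: [{|before, after| null }]
}
```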
2024-11-29 21:11:09 +00:00
91bb566ee6 udpate rust toolchain to rust 1.81.0 (#14473)
# Description

With the release of rust 1.83.0 it's time to update to rust 1.81.0.
2024-11-29 21:46:58 +01:00
5f04bbbb8b Make length only operate on supported input types (#14475)
# Description


Before this PR, `length` did not check its input type at run-time, so it
would attempt to calculate a length for any input with indeterminate
type (e.g., `echo` which has an `any` output type). This PR makes
`length` only work on the types specifically supported in its
input/output types (list/table, binary, and nothing), making the
behavior the same at parse-time and at run-time.

Fixes #14462

# User-Facing Changes


Length will error if passed an unsupported type:

Before (only caught at parse-time):
```nushell
"hello" | length
Error: nu::parser::input_type_mismatch

  × Command does not support string input.
   ╭─[entry #2:1:11]
 1 │ "hello" | length
   ·           ───┬──
   ·              ╰── command doesn't support string input
   ╰────

echo "hello" | length
# => 1
```

After (caught at parse-time and run-time):
```nushell
"hello" | length
Error: nu::parser::input_type_mismatch

  × Command does not support string input.
   ╭─[entry #22:1:11]
 1 │ "hello" | length
   ·           ───┬──
   ·              ╰── command doesn't support string input
   ╰────

echo "hello" | length
Error: nu:🐚:only_supports_this_input_type

  × Input type not supported.
   ╭─[entry #23:1:6]
 1 │ echo "hello" | length
   ·      ───┬───   ───┬──
   ·         │         ╰── only list, table, binary, and nothing input data is supported
   ·         ╰── input type: string
   ╰────
```
2024-11-29 21:45:27 +01:00
49fb5cb1a8 fix: sample_config (#14465)
path to sample_config

crates/nu-utils/src/sample_config ->
crates/nu-utils/src/default_files/sample_config.nu

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-11-29 08:06:14 -06:00
6e036ca09a update unicode-width to 0.2 (#14456)
# Description
When looking into #14395, I found that `unicode-width` from 0.1 to 0.2
contains a breaking change; the main change is that it treats newlines as
width 1. So the related tests (`str stats`) are broken.
But I think it's ok to adjust the tests.

# User-Facing Changes
The output of `str stats` might change if there are `\n` in the input.
### Before
```nushell
> "a\nb" | str stats | get unicode-width
2
```
### After
```nushell
> "a\nb" | str stats | get unicode-width
3
```
# Tests + Formatting
Adjusted 2 tests.

# After Submitting
NaN
2024-11-29 09:09:45 +08:00
8d1e36fa3c Allow inherited environment variables (#14467)
# Description

Due to #14249 loading `default_env.nu` before the user's `env.nu`,
variables that were defined there were overriding:

* Inherited values
* Some values that were set in the Rust code, such as the `NU_LIB_PATH`
when set using `--include-path`.

This change checks to see if a variable already exists, uses its value
if so, and sets the default value otherwise.

Note: `ENV_CONVERSIONS` is still "forced" to a default value regardless,
as it needs to run reliably. There's probably not much reason to inherit
it, but I'm open to the idea if there's a use-case.
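
A hedged sketch of the pattern described, as it could appear in `default_env.nu` (the variable and fallback value are illustrative only):

```nushell
# keep an inherited or previously-set value when it exists,
# and only fall back to the built-in default otherwise
$env.NU_LIB_DIRS = ($env.NU_LIB_DIRS? | default [
    ($nu.default-config-dir | path join 'scripts')
])
```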

# User-Facing Changes

* Before: Variables that were set in `default_env.nu` always overrode
those that were inherited from the parent process or set internally
* After: Inherited and internal environment variables will take
priority.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Will try to find a good place to mention this behavior in the Config
chapter updates
2024-11-28 12:37:32 -06:00
bccff3b237 Update default-files README (#14461)
# Description

Someone noticed today that I had left a TODO in the Readme. It has since
been completed and needed to be removed. Also made some other minor
fixes and wordsmithing while I was in it.

# User-Facing Changes

None

# Tests + Formatting

Clippy and fmt passed, and that should be all that matters on the
Readme.

# After Submitting

N/A
2024-11-28 15:04:42 +08:00
a13a024ac8 update miette to 7.3 (#14454)
# Description
A test failed when updating miette from 7.2 to 7.3. After looking
into the test, I think it's ok to adjust it.

# User-Facing Changes
For the given custom command:
```nushell
def force_error [ x: any ] {
    error make {
        msg: "oh no!"
        label: {
            text: "here's the error"
            span: (metadata $x).span
        }
    }
}
```
### Before
```
> force_error "My error"
Error:   × oh no!
   ╭─[entry #8:1:13]
 1 │ force_error "My error"
   ·             ─────┬────
   ·                  ╰── here's the error
   ╰────

```

### After
```
> force_error "My error"
Error:
  × oh no!
   ╭─[entry #9:1:13]
 1 │ force_error "My error"
   ·             ─────┬────
   ·                  ╰── here's the error
   ╰────
```
As we can see, the message `oh no!` is output on a new line, and there
is one less trailing line. I have done some testing, and it seems that
it only happens with the `error make` command.

# Tests + Formatting
Changed 1 test

# After Submitting
NaN
2024-11-27 22:43:36 +01:00
5e7263cd1a Bump reedline to current main (#14455)
# Description

@fdncred mentioned that we should be dogfooding the latest Reedline
changes in Nushell. Hoping I got the steps correct.

# User-Facing Changes

New keybindings for:

* Insert Newline: <kbd>Alt</kbd>+<kbd>Enter</kbd> and
<kbd>Shift</kbd>+<kbd>Enter</kbd>
* Enter:  <kbd>Ctrl</kbd>+<kbd>J</kbd>

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
2024-11-27 23:42:30 +08:00
0aafc29fb5 Propagate existing errors in insert and merge (#14453)
# Description
Propagate existing errors in the pipeline, rather than a type error.

# User-Facing Changes
Nothing that previously worked should be affected, this should just
change the errors.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-11-27 06:37:21 -06:00
bd37473515 Fix unstable test case: One time my windows report drive letter as lowercase (#14451)
While I was working on the PWD-per-drive feature, the env plugin test failed
once. I checked the log and found that Windows can sometimes report the drive
letter as lowercase, so the test case should be rewritten to compare the first
letter case-insensitively and the rest of the path as usual.

```
assert_eq! failed at tests/plugins/env.rs:43:5
left: r"e:\Study\Nushell"
right: r"E:\Study\Nushell"
```

---------

Co-authored-by: Zhenping Zhao <pegasus.cadence@gmail.com>
2024-11-27 06:27:06 -06:00
1c18e37a7c Always populate config record during startup (#14435)
# Description

As a bit of a follow-on to #13802 and #14249, this (pretty much a
"one-line" change) really does *always* populate the `$env.config`
record with the `nu-protocol::config` defaults during startup. This
means that an `$env.config` record is valid (with defaults) even during:

* `nu -n` to suppress loading of config files
* `nu -c <commandstring>`
* `nu <script>`

# User-Facing Changes

There should be no case in which there isn't a valid `$env.config`.

* Before:

  ```nushell
  nu -c "$env.config"
  # -> Error
  ```

* After:

  ```nushell
  nu -c "$env.config"
  # -> Default $env.config record
  ```

Startup time impact is negligible (17.072µs from `perf!` on my system) -
Seems well worth it.

# Tests + Formatting

Added tests for several `-n -c` cases.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Config chapter update still in progress.
2024-11-27 13:52:47 +08:00
547c436281 add from ndnuon and to ndnuon to stdlib (#14334)
# Description
i was playing with the NDNUON format and using local definitions of
`from ndnuon` and `to ndnuon` but then i thought they could live in the
standard library next to `from ndjson` and `to ndjson` 😋

# User-Facing Changes
users can now add the following to their configs and get NDNUON ready to
go
```nushell
use std formats ["from ndnuon" "to ndnuon"]
```
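
For example, a table can then be round-tripped through NDNUON, one NUON document per line (the outputs shown are illustrative):

```nushell
[{a: 1} {a: 2}] | to ndnuon
# => {a: 1}
# => {a: 2}

"{a: 1}\n{a: 2}" | from ndnuon | get a
# => [1, 2]
```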

# Tests + Formatting
i did simply mimic the tests for `from ndjson` and `to ndjson`, i hope
it's fine since the recent big change to the standard library

# After Submitting

---------

Co-authored-by: Douglas <32344964+NotTheDr01ds@users.noreply.github.com>
2024-11-27 09:43:49 +08:00
e0c0d39ede deprecate --ignore-shell-errors and --ignore-program-errors in do (#14385)
# Description
As titled, this PR deprecates `--ignore-shell-errors` and
`--ignore-program-errors`.

Because I think these two flags make the `do` command complicated, and it
should be easy to use `-i` instead.

# User-Facing Changes
After the pr, using these two flags will raise deprecated warning.
```nushell
> do --ignore-program-errors { ^pwd }
Error:   × Deprecated option
   ╭─[entry #2:1:1]
 1 │ do --ignore-program-errors { ^pwd }
   · ─┬
   ·  ╰── `--ignore-program-errors` is deprecated and will be removed in 0.102.0.
   ╰────
  help: Please use the `--ignore-errors(-i)`
/home/windsoilder/projects/nushell
> do --ignore-shell-errors { ^pwd }
Error:   × Deprecated option
   ╭─[entry #3:1:1]
 1 │ do --ignore-shell-errors { ^pwd }
   · ─┬
   ·  ╰── `--ignore-shell-errors` is deprecated and will be removed in 0.102.0.
   ╰────
  help: Please use the `--ignore-errors(-i)`
/home/windsoilder/projects/nushell
```

# Tests + Formatting
NaN
2024-11-27 09:36:30 +08:00
4edce44689 Remove ListStream type (#14425)
# Description
List values and list streams have the same type (`list<>`). Rather,
streaming is a separate property of the pipeline/command output. This PR
removes the unnecessary `ListStream` type.

# User-Facing Changes
Should be none, except `random dice` now has a more specific output
type.
2024-11-27 09:35:55 +08:00
186c08467f make std help more user friendly (#14347)
# Description
Fixes:  #13159

After the change, `std help` will no longer print out "double error"
messages.

Actually, I think it's tricky to get this right. To make `help <cmd>`
keep the paging feature from the fallback `man` command, I had to split
`commands` into `scope-commands` and `external-commands`.

If we don't split it, simply calling `let commands = (try { commands
$target_item --find $find })` in `help main` will cause us to lose the
paging feature, which is not what we want.

A comment from original issue:

> If there are no objections, I'd like to remove the man page fallback
code from std help for the moment. While it's probably fixable, it's
also platform specific and requires testing on all platforms. It also
seems like a low-value add here.

Actually I think it's a beautiful feature of `std help`, so I want to
keep it here.

# User-Facing Changes
### Before
```nushell
> help commands asdfadsf
Help pages from external command asdfadsf:
No manual entry for asdfadsf
Error:   × std::help::command_not_found
   ╭─[entry #11:1:15]
 1 │ help commands asdfadsf
   ·               ────┬───
   ·                   ╰── command not found
   ╰────
```

### After
```nushell
> help commands asdfasdf
Help pages from external command asdfasdf:
No manual entry for asdfasdf
```

# Tests + Formatting
Actually it's a little hard to add tests because they require user input
(especially for the fallback `man` command).
2024-11-27 09:29:25 +08:00
367fb9b504 Bump crate-ci/typos from 1.27.3 to 1.28.1 (#14447)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.27.3 to
1.28.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.28.1</h2>
<h2>[1.28.1] - 2024-11-26</h2>
<h3>Fixes</h3>
<ul>
<li>Add back in <code>lock</code> file types accidentally removed in
1.28 (<code>go.sum</code>, <code>requirements.txt</code>)</li>
</ul>
<h2>v1.28.0</h2>
<h2>[1.28.0] - 2024-11-25</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1139">November
2024</a> changes</li>
<li>Add many new types and file extensions to the
<code>--type-list</code>, including ada, alire, bat, candid, carp, cml,
devicetree, dita, dockercompose, grpbuild, graphql, hare, lean, meson,
prolog, raku, reasonml, rescript, solidity, svelte, usd, v, wgsl</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.28.1] - 2024-11-26</h2>
<h3>Fixes</h3>
<ul>
<li>Add back in <code>lock</code> file types accidentally removed in
1.28 (<code>go.sum</code>, <code>requirements.txt</code>)</li>
</ul>
<h2>[1.28.0] - 2024-11-25</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1139">November
2024</a> changes</li>
<li>Add many new types and file extensions to the
<code>--type-list</code>, including ada, alire, bat, candid, carp, cml,
devicetree, dita, dockercompose, grpbuild, graphql, hare, lean, meson,
prolog, raku, reasonml, rescript, solidity, svelte, usd, v, wgsl</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd36f89fcd"><code>bd36f89</code></a>
chore: Release</li>
<li><a
href="9b3917ceee"><code>9b3917c</code></a>
docs: Update changelog</li>
<li><a
href="8a0dae6793"><code>8a0dae6</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1159">#1159</a>
from epage/go</li>
<li><a
href="70f236086f"><code>70f2360</code></a>
fix: Re-add go.sum, requirements.txt to 'lock' file type</li>
<li><a
href="78d6d22744"><code>78d6d22</code></a>
chore: Release</li>
<li><a
href="e75389e6fb"><code>e75389e</code></a>
chore: Release</li>
<li><a
href="cd4f2950fc"><code>cd4f295</code></a>
docs: Update changelog</li>
<li><a
href="bdcd9c54e3"><code>bdcd9c5</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1157">#1157</a>
from epage/dict</li>
<li><a
href="31e5b4f5b5"><code>31e5b4f</code></a>
feat(dict): November updates</li>
<li><a
href="ea6fdd1371"><code>ea6fdd1</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1154">#1154</a>
from dseight/master</li>
<li>Additional commits viewable in <a
href="https://github.com/crate-ci/typos/compare/v1.27.3...v1.28.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.27.3&new-version=1.28.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-27 09:27:47 +08:00
ac75562296 Remove long-unused autoenv tests (#14436)
# Description

The `.nu-env` file feature was removed some time ago (probably in the
engine-q upgrade?). The tests, however, still remained as dead-code, so
this is just some basic clean-up.

If this feature was ever implemented again, the tests would need to be
rewritten anyway due to the changes in the way config is handled.

# User-Facing Changes

None

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting

N/A
2024-11-26 10:22:18 +08:00
7a9b14b49d Add example for PROMPT_COMMAND_RIGHT (#14439)
# Description

I just completely left out `$env.PROMPT_COMMAND_RIGHT` in the
`sample_env.nu`. This adds it in.

# User-Facing Changes

`config env --sample` will now include doc for `PROMPT_COMMAND_RIGHT`.
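
For reference, a typical right-prompt definition looks something like this (the closure body is illustrative, not the exact sample text):

```nushell
$env.PROMPT_COMMAND_RIGHT = {|| date now | format date '%x %X' }
```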

# Tests + Formatting

Doc-only

# After Submitting

n/a
2024-11-25 19:25:43 -06:00
32196cfe78 Add term query, for querying information from terminals. (#14427)
## Related
- #10150
- https://github.com/nushell/nushell/pull/10150#issuecomment-1721238336
- #10387
- https://github.com/nushell/nushell/pull/10387#issuecomment-1722228185

# Description
`term query`: a command for querying information from the terminal.

Prints the `$query`, and immediately starts reading raw bytes from
stdin.

The standard input will be read until the `terminator` sequence is
encountered.
The `terminator` is not removed from the output.

It also stops on <kbd>Ctrl-C</kbd> with an error.

```
Usage:
  > term query {flags} <query> 

Flags:
  -h, --help: Display the help message for this command
  -t, --terminator (required parameter) <one_of(binary, string)>: stdin will be read until this sequence is encountered

Parameters:
  query <one_of(binary, string)>: The query that will be printed to stdout
```

This was previously possible with `input` until #10150.
`input` command's features such as cursor control, deleting input etc.
are useful, but interfere with this use case.

`term query` makes the following uses possible:

```nushell
# get the terminal size with ansi escape codes
def terminal-size [] {
    let response = term query (ansi size) --terminator 'R'
    # $response should look like this
    # Length: 9 (0x9) bytes | printable whitespace ascii_other non_ascii
    # 00000000:   1b 5b 33 38  3b 31 35 30  52             •[38;150R

    let sz = $response | bytes at 2..<-1 | decode
    # 38;150

    # $sz should look like 38;150
    let size = ($sz | split row ';' | each {into int})

    # output in record syntax
    {
        rows: $size.0
        columns: $size.1
    }
}
```

```nushell
# read clipboard content using OSC 52
term query $"(ansi --osc '52;c;?')(ansi st)" --terminator (ansi st)
| bytes at 7..<-2
| decode
| decode base64
| decode
```

# User-Facing Changes
- added `term query`

# Tests + Formatting
- Integration tests should be added if possible.
2024-11-25 15:13:11 -06:00
4d3283e235 Change append operator to concatenation operator (#14344)
# Description

The "append" operator currently serves as both the append operator and
the concatenation operator. This dual role creates ambiguity when
operating on nested lists.

```nu
[1 2] ++ 3     # appends a value to a list [1 2 3]
[1 2] ++ [3 4] # concatenates two lists    [1 2 3 4]

[[1 2] [3 4]] ++ [5 6]
# does this give [[1 2] [3 4] [5 6]]
# or             [[1 2] [3 4] 5 6]  
```

Another problem is that `++=` can change the type of a variable:
```nu
mut str = 'hello '
$str ++= ['world']
($str | describe) == list<string>
```

Note that appending is only relevant for lists, but concatenation is
relevant for lists, strings, and binary values. Additionally, appending
can be expressed in terms of concatenation (see example below). So, this
PR changes the `++` operator to only perform concatenation.

# User-Facing Changes

Using the `++` operator with a list and a non-list value will now be a
compile time or runtime error.
```nu
mut list = []
$list ++= 1 # error
```
Instead, concatenate a list with one element:
```nu
$list ++= [1]
```
Or use `append`:
```nu
$list = $list | append 1
```

# After Submitting

Update book and docs.

---------

Co-authored-by: Douglas <32344964+NotTheDr01ds@users.noreply.github.com>
2024-11-24 10:59:54 -08:00
dd3a3a2717 remove terminal_size crate everywhere it makes sense (#14423)
# Description

This PR removes the `terminal_size` crate everywhere that it made sense.
I replaced it with crossterm's version called `size`. The places I
didn't remove it were the places that did not have a dependency on
crossterm. So, I thought it was "cheaper" to have a dep on term_size vs
crossterm in those locations.
2024-11-23 19:37:12 -08:00
83d8e936ad Fix small typos in std/dirs (#14422)
# Description

Typos in the command doc-help.
2024-11-23 16:04:27 -06:00
58576630db command/http/client use CRLF for headers join instead of LF (#14417)
# Description
Apparently the headers should be joined with CRLF as the EOL marker

https://www.rfc-editor.org/rfc/rfc2616#section-2.2

Plain LF isn't particularly standardized and many backends don't
recognize it. Tested on `starlette`

# User-Facing Changes
None

# Tests + Formatting
It's two characters; everything passes

# After Submitting
Not needed
2024-11-23 13:49:25 -08:00
7c84634e3f return accurate type errors from blocks/expressions in type unions (#14420)
# User-Facing Changes

- `expected <type>` errors are now propagated from
  `Closure | Block | Expression` instead of falling back to
  "expected one of..." for the block:

Before:

```nushell
def foo [bar: bool] {}
if true {} else { foo 1 }
                ────┬────
                    ╰── expected one of a list of accepted shapes: [Block, Expression]
```

After:

```nushell
if true {} else { foo 1 }
                      ┬
                      ╰── expected bool
```
2024-11-23 13:42:00 -08:00
671640b0a9 Avoid recomputing fuzzy match scores (#13700)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

This PR makes it so that when using fuzzy matching, the score isn't
recomputed when sorting. Instead, filtering and sorting suggestions is
handled by a new `NuMatcher` struct. This struct accepts suggestions
and, if they match the user's typed text, stores those suggestions
(along with their scores and values). At the end, it returns a sorted
list of suggestions.

This probably won't have a noticeable impact on performance, but it
might be helpful if we start using Nucleo in the future.

Minor change: Makes `find_commands_by_predicate` in `StateWorkingSet`
and `EngineState` take `FnMut` rather than `Fn` for the predicate.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

When using case-insensitive matching, if you have two matches `FOO` and
`abc`, `abc` will be shown before `FOO` rather than the other way
around. I think this way makes more sense than the current behavior.
When I brought this up on Discord, WindSoilder did say it would make
sense to show uppercase matches first if the user typed, say, `F`.
However, that would be a lot more complicated to implement.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

Added a test for the changes in
https://github.com/nushell/nushell/pull/13302.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-22 06:29:00 -06:00
5f7082f053 truly flexible csv/tsv parsing (#14399)
- fixes #14398

I will properly fill out this PR and fix any tests that might break when
I have the time, this was a quick fix.

# Description

This PR makes `from csv` and `from tsv`, with the `--flexible` flag,
stop dropping extra/unexpected columns.

# User-Facing Changes

`$text`'s contents
```csv
value
1,aaa
2,bbb
3
4,ddd
5,eee,extra
```

Old behavior
```nushell
> $text | from csv --flexible --noheaders 
╭─#─┬─column0─╮
│ 0 │ value   │
│ 1 │       1 │
│ 2 │       2 │
│ 3 │       3 │
│ 4 │       4 │
│ 5 │       5 │
╰─#─┴─column0─╯
```

New behavior
```nushell
> $text | from csv --flexible --noheaders 
╭─#─┬─column0─┬─column1─┬─column2─╮
│ 0 │ value   │         │         │
│ 1 │       1 │ aaa     │         │
│ 2 │       2 │ bbb     │         │
│ 3 │       3 │         │         │
│ 4 │       4 │ ddd     │         │
│ 5 │       5 │ eee     │ extra   │
╰─#─┴─column0─┴─column1─┴─column2─╯
```

- The first line in a csv (or tsv) document no longer limits the number
of columns
- Missing values in columns are no longer automatically filled with `null`
with this change, as a later row can introduce new columns. **BREAKING
CHANGE**

Because missing columns are different from empty columns, operations on
possibly missing columns will have to use optional access syntax e.g.
`get foo` => `get foo?`
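
For example, with the `$text` value above, a cell that only some rows have must be accessed optionally:

```nushell
$text | from csv --flexible --noheaders | get 3.column1?
# => nothing, because row 3 only has column0
```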
  
# Tests + Formatting
Added examples that run as tests and adjusted existing tests to confirm
the new behavior.

# After Submitting

Update the workaround with fish completer mentioned
[here](https://www.nushell.sh/cookbook/external_completers.html#fish-completer)
2024-11-21 15:58:31 -06:00
2a90cb7355 Update SHLVL (only when interactive) on startup (#14404)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Make NuShell correctly inherit and update `SHLVL` from other shells
(obviously including itself) in Unix environment.

See issue #14384

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

None

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

New code formatted.

New feature works well in interactive usage.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-21 15:57:33 -06:00
e63976df7e Bump Calamine (#14403)
This commit upgrades calamine in order to benefit from recent
developments, e.g. ignore annotations in column headers (see
https://github.com/tafia/calamine/pull/467 for reference).
2024-11-21 20:31:14 +08:00
d8c2493658 Deprecate split-by command (#14019)
# Description
I'm not quite sure what the point of the `split-by` command is. The only
example for the command seems to suggest it's an additional grouping
command. I.e., a record that seems to be the output of the `group-by`
command is passed to `split-by` which then adds an additional layer of
grouping based on a different column.

# User-Facing Changes
Breaking change, deprecated the command.
2024-11-21 10:47:03 +01:00
4ed25b63a6 Always load default env/config values (#14249)
# Release-Notes Short Description

* Nushell now always loads its internal `default_env.nu` before the user
`env.nu` is loaded, then loads the internal `default_config.nu` before
the user's `config.nu` is loaded. This allows for a simpler
user-configuration experience. The Configuration Chapter of the Book
will be updated soon with the new behavior.

# Description

Implements the main ideas in #13671 and a few more:

* Users can now specify only the environment and config options they
want to override in *their* `env.nu` and `config.nu` and yet still have
access to all of the defaults:
* `default_env.nu` (internally defined) will be loaded whenever (and
before) the user's `env.nu` is loaded.
* `default_config.nu` (internally defined) will be loaded whenever (and
before) the user's `config.nu` is loaded.
* No more 900+ line config out-of-the-box.
* Faster startup (again): ~40-45% improvement in launch time with a
default configuration.
* New keys that are added to the defaults in the future will
automatically be available to all users after updating Nushell. No need
to regenerate config to get the new defaults.
* It is now possible to have different internal defaults (which will be
used with `-c` and scripts) vs. REPL defaults. This would have solved
many of the user complaints about the [`display_errors`
implementation](https://www.nushell.sh/blog/2024-09-17-nushell_0_98_0.html#non-zero-exit-codes-are-now-errors-toc).
* A basic "scaffold" `config.nu` and `env.nu` are created on first
launch (if the config directory isn't present).
* Improved "out-of-the-box" experience (OOBE) - No longer asks to create
the files; the minimal scaffolding will be automatically created. If
deleted, they will not be regenerated. This provides a better
"out-of-the-box" experience for the user as they no longer have to make
this decision (without much info on the pros or cons) when first
launching.
* <s>(New: 2024-11-07) Runs the env_conversions process after the
`default_env.nu` is loaded so that users can treat `Path`/`PATH` as
lists in their own config.</s>
* (New: 2024-11-08) Given the changes in #13802, `default_config.nu`
will be a minimal file to minimize load-times. This shaves another (on
my system) ~3ms off the base launch time.
* Related: Keybindings, menus, and hooks that are already internal
defaults are no longer duplicated in `$env.config`. The documentation
will be updated to cover these scenarios.
* (New: 2024-11-08) Move existing "full" `default_config.nu` to
`sample_config.nu` for short-term "documentation" purposes.
* (New: 2024-11-18) Move the `dark-theme` and `light-theme` to Standard
Library and demonstrate their use - Also improves startup times, but
we're reaching the limit of optimization.
* (New: 2024-11-18) Extensively documented/commented `sample_env.nu` and
`sample_config.nu`. These can be displayed in-shell using (for example)
`config nu --sample | nu-highlight | less -R`. Note: Much of this will
eventually be moved to or (some) duplicated in the Doc. But for now,
this is some nice in-shell doc that replaces the older
"commented/documented default".
* (New: 2024-11-20) Runs the `ENV_CONVERSIONS` process (1) after the
`default_env.nu` (allows `PATH` to be used as a list in user's `env.nu`)
and (2) before `default_config.nu` is loaded (allows user's
`ENV_CONVERSIONS` from their `env.nu` to be used in their `config.nu`).
* <s>(New: 2024-11-20) The default `ENV_CONVERSIONS` is now an empty
record. The internal Rust code handles `PATH` (and variants) conversions
regardless of the `ENV_CONVERSIONS` variable. This shaves a *very* small
amount of time off the startup.</s> Reset - Looks like there might be a
bug in `nu-engine::env::ensure_path()` on Windows that would need to be
fixed in order for this to work.

# User-Facing Changes

By default, you shouldn't see much, if any, change when running this
with your existing configuration.

To see the greatest benefit from these changes, you'll probably want to
start with a "fresh" config. This can be easily tested using something
like:

```nushell
let temp_home = (mktemp -d)
$env.XDG_CONFIG_HOME = $temp_home
$env.XDG_DATA_HOME = $temp_home
./target/release/nu
```

You should see a message where the (mostly empty) `env.nu` and
`config.nu` are created on first start. Defaults should be the same (or
similar to) those before the PR. Please let me know if you notice any
differences.

---

Users should now specify configuration in terms of overrides of each
setting. For instance, rather than modifying `history` settings in the
monolithic `config.nu`, the following is recommended in an updated
`config.nu`:

```nu
$env.config.history = {
  file_format: sqlite,
  sync_on_enter: true
  isolation: true
  max_size: 1_000_000
}
```

or even just:

```nu
$env.config.history.file_format = "sqlite"
$env.config.history.isolation = true
$env.config.history.max_size = 1_000_000
```

Note: It seems many users are already appending a `source my_config.nu`
(or similar pattern) to the end of the existing `config.nu` to make
updates easier. In this case, they will likely want to remove all of the
previous defaults and just move their `my_config.nu` to `config.nu`.

Note: It should be unlikely that there are any breaking changes here,
but there's a slim chance that some code, somewhere, *expects* an
absence of certain config values. Otherwise, all config values are
available before and after this change.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Configuration Chapter (and related) of the doc is currently WIP and will
be finished in time for 0.101 release.
2024-11-20 16:15:15 -06:00
b318d588fe add new --flatten parameter to the ast command (#14400)
# Description

By request, this PR introduces a new `--flatten` parameter to the ast
command for generating a more readable version of the AST output. This
enhancement improves usability by allowing users to easily visualize the
structure of the AST.


![image](https://github.com/user-attachments/assets/a66644ef-5fff-4d3d-a334-4e9f80edb39d)

```nushell
❯ ast 'ls | sort-by type name -i' --flatten --json
[
  {
    "content": "ls",
    "shape": "shape_internalcall",
    "span": {
      "start": 0,
      "end": 2
    }
  },
  {
    "content": "|",
    "shape": "shape_pipe",
    "span": {
      "start": 3,
      "end": 4
    }
  },
  {
    "content": "sort-by",
    "shape": "shape_internalcall",
    "span": {
      "start": 5,
      "end": 12
    }
  },
  {
    "content": "type",
    "shape": "shape_string",
    "span": {
      "start": 13,
      "end": 17
    }
  },
  {
    "content": "name",
    "shape": "shape_string",
    "span": {
      "start": 18,
      "end": 22
    }
  },
  {
    "content": "-i",
    "shape": "shape_flag",
    "span": {
      "start": 23,
      "end": 25
    }
  }
]
❯ ast 'ls | sort-by type name -i' --flatten --json --minify
[{"content":"ls","shape":"shape_internalcall","span":{"start":0,"end":2}},{"content":"|","shape":"shape_pipe","span":{"start":3,"end":4}},{"content":"sort-by","shape":"shape_internalcall","span":{"start":5,"end":12}},{"content":"type","shape":"shape_string","span":{"start":13,"end":17}},{"content":"name","shape":"shape_string","span":{"start":18,"end":22}},{"content":"-i","shape":"shape_flag","span":{"start":23,"end":25}}]
```
# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-20 11:39:15 -06:00
42d2adc3e0 allow ps1 files to be executed without pwsh/powershell -c file.ps1 (#14379)
# Description

This PR allows nushell to run powershell scripts easier. You can already
do `powershell -c script.ps1` but this PR takes it a step further by
doing the `powershell -c` part for you. So, if you have script.ps1 you
can execute it by running it in the command position of the repl.

![image](https://github.com/user-attachments/assets/0661a746-27d9-4d21-b576-c244ff7fab2b)

or once it's in json, just consume it with nushell.

![image](https://github.com/user-attachments/assets/38f5c5d8-3659-41f0-872b-91a14909760b)
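
As a rough sketch of the workflow (the script name is just a placeholder, and
this assumes Windows with powershell.exe available):

```nushell
# before this change: powershell -c ./my_report.ps1 some-arg
# now the .ps1 file itself can sit in the command position
./my_report.ps1 some-arg
```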

# User-Facing Changes
Easier to run powershell scripts. It should work on Windows with
powershell.exe.

# Tests + Formatting
Added 1 test

# After Submitting


---------

Co-authored-by: Wind <WindSoilder@outlook.com>
2024-11-20 21:55:26 +08:00
5d1eb031eb Turn compile errors into fatal errors (#14388)
# Description

Because the IR compiler was previously optional, compile errors were not
treated as fatal errors, and were just logged like parse warnings are.
This unfortunately meant that if a user encountered a compile error,
they would see "Can't evaluate block in IR mode" as the actual error in
addition to (hopefully) logging the compile error.

This changes compile errors to be treated like parse errors so that they
show up as the last error, helping users understand what's wrong a
little bit more easily.

Fixes #14333.

# User-Facing Changes
- Shouldn't see "Can't evaluate block in IR mode"
- Should only see compile error
- No evaluation should happen

# Tests + Formatting
Didn't add any tests specifically for this, but it might be good to have
at least one that checks to ensure the compile error shows up and the
"can't evaluate" error does not.
2024-11-20 19:24:03 +08:00
1e7840c376 Bump terminal_size from 0.3.0 to 0.4.0 (#14393)
Bumps [terminal_size](https://github.com/eminence/terminal-size) from
0.3.0 to 0.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/eminence/terminal-size/releases">terminal_size's
releases</a>.</em></p>
<blockquote>
<h2>v0.4.0</h2>
<h2>Breaking changes</h2>
<p>The big change in this release is the API change in <a
href="https://redirect.github.com/eminence/terminal-size/issues/66">#66</a>:</p>
<ul>
<li>If you were using the <code>terminal_size_using_fd</code> or
<code>terminal_size_using_handle</code> functions, these are now
deprecated and unsafe. Instead you should use the
<code>terminal_size_of</code> function, which does the same thing but is
safer.</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>Add <code>rust-version</code> in Cargo.toml by <a
href="https://github.com/cgwalters"><code>@​cgwalters</code></a> in <a
href="https://redirect.github.com/eminence/terminal-size/pull/60">eminence/terminal-size#60</a></li>
<li>Update <code>windows-sys</code> to 0.52 by <a
href="https://github.com/barrbrain"><code>@​barrbrain</code></a> in <a
href="https://redirect.github.com/eminence/terminal-size/pull/62">eminence/terminal-size#62</a></li>
<li>Update windows-sys to 0.59 by <a
href="https://github.com/eminence"><code>@​eminence</code></a> in <a
href="https://redirect.github.com/eminence/terminal-size/pull/67">eminence/terminal-size#67</a></li>
<li>Update the API for I/O safety by <a
href="https://github.com/sunfishcode"><code>@​sunfishcode</code></a> in
<a
href="https://redirect.github.com/eminence/terminal-size/pull/66">eminence/terminal-size#66</a></li>
<li>Fix typo, link to docs, update docs by <a
href="https://github.com/waywardmonkeys"><code>@​waywardmonkeys</code></a>
in <a
href="https://redirect.github.com/eminence/terminal-size/pull/63">eminence/terminal-size#63</a></li>
<li>Update CI: Use current actions, remove unused build step by <a
href="https://github.com/waywardmonkeys"><code>@​waywardmonkeys</code></a>
in <a
href="https://redirect.github.com/eminence/terminal-size/pull/64">eminence/terminal-size#64</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/cgwalters"><code>@​cgwalters</code></a>
made their first contribution in <a
href="https://redirect.github.com/eminence/terminal-size/pull/60">eminence/terminal-size#60</a></li>
<li><a href="https://github.com/barrbrain"><code>@​barrbrain</code></a>
made their first contribution in <a
href="https://redirect.github.com/eminence/terminal-size/pull/62">eminence/terminal-size#62</a></li>
<li><a
href="https://github.com/waywardmonkeys"><code>@​waywardmonkeys</code></a>
made their first contribution in <a
href="https://redirect.github.com/eminence/terminal-size/pull/63">eminence/terminal-size#63</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/eminence/terminal-size/compare/v0.3.0...v0.4.0">https://github.com/eminence/terminal-size/compare/v0.3.0...v0.4.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f6b81b5714"><code>f6b81b5</code></a>
Bump to version 0.4.0</li>
<li><a
href="5cbc616cf3"><code>5cbc616</code></a>
Merge pull request <a
href="https://redirect.github.com/eminence/terminal-size/issues/64">#64</a>
from waywardmonkeys/update-ci</li>
<li><a
href="68ceb8dc9d"><code>68ceb8d</code></a>
Merge pull request <a
href="https://redirect.github.com/eminence/terminal-size/issues/63">#63</a>
from waywardmonkeys/fix-typo</li>
<li><a
href="53077475d5"><code>5307747</code></a>
Merge pull request <a
href="https://redirect.github.com/eminence/terminal-size/issues/66">#66</a>
from sunfishcode/main</li>
<li><a
href="a29b904580"><code>a29b904</code></a>
Mark <code>terminal_size_using_handle</code> as unsafe too.</li>
<li><a
href="ea92388054"><code>ea92388</code></a>
Mark <code>terminal_size_using_fd</code> as unsafe.</li>
<li><a
href="78e81fa487"><code>78e81fa</code></a>
Merge pull request <a
href="https://redirect.github.com/eminence/terminal-size/issues/67">#67</a>
from eminence/windows-sys</li>
<li><a
href="c69ff4e55f"><code>c69ff4e</code></a>
Update windows-sys to 0.59</li>
<li><a
href="76b0caeb6f"><code>76b0cae</code></a>
Merge pull request <a
href="https://redirect.github.com/eminence/terminal-size/issues/62">#62</a>
from barrbrain/windows-sys</li>
<li><a
href="56334c3cea"><code>56334c3</code></a>
Update the API for I/O safety</li>
<li>Additional commits viewable in <a
href="https://github.com/eminence/terminal-size/compare/v0.3.0...v0.4.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=terminal_size&package-manager=cargo&previous-version=0.3.0&new-version=0.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-20 09:20:47 +08:00
a6e3470c6f Bump thiserror from 1.0.69 to 2.0.3 (#14394)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.69 to
2.0.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/thiserror/releases">thiserror's
releases</a>.</em></p>
<blockquote>
<h2>2.0.3</h2>
<ul>
<li>Support the same Path field being repeated in both Debug and Display
representation in error message (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>)</li>
<li>Improve error message when a format trait used in error message is
not implemented by some field (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>)</li>
</ul>
<h2>2.0.2</h2>
<ul>
<li>Fix hang on invalid input inside #[error(...)] attribute (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/382">#382</a>)</li>
</ul>
<h2>2.0.1</h2>
<ul>
<li>Support errors that contain a dynamically sized final field (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/375">#375</a>)</li>
<li>Improve inference of trait bounds for fields that are interpolated
multiple times in an error message (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/377">#377</a>)</li>
</ul>
<h2>2.0.0</h2>
<h2>Breaking changes</h2>
<ul>
<li>
<p>Referencing keyword-named fields by a raw identifier like
<code>{r#type}</code> inside a format string is no longer accepted;
simply use the unraw name like <code>{type}</code> (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/347">#347</a>)</p>
<p>This aligns thiserror with the standard library's formatting macros,
which gained support for implicit argument capture later than the
release of this feature in thiserror 1.x.</p>
<pre lang="rust"><code>#[derive(Error, Debug)]
#[error(&quot;... {type} ...&quot;)]  // Before: {r#type}
pub struct Error {
    pub r#type: Type,
}
</code></pre>
</li>
<li>
<p>Trait bounds are no longer inferred on fields whose value is shadowed
by an explicit named argument in a format message (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/345">#345</a>)</p>
<pre lang="rust"><code>// Before: impl&lt;T: Octal&gt; Display for
Error&lt;T&gt;
// After: impl&lt;T&gt; Display for Error&lt;T&gt;
#[derive(Error, Debug)]
#[error(&quot;{thing:o}&quot;, thing = &quot;...&quot;)]
pub struct Error&lt;T&gt; {
    thing: T,
}
</code></pre>
</li>
<li>
<p>Tuple structs and tuple variants can no longer use numerical
<code>{0}</code> <code>{1}</code> access at the same time as supplying
extra positional arguments for a format message, as this makes it
ambiguous whether the number refers to a tuple field vs a different
positional arg (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/354">#354</a>)</p>
<pre lang="rust"><code>#[derive(Error, Debug)]
#[error(&quot;ambiguous: {0} {}&quot;, $N)]
// ^^^ Not allowed, use #[error(&quot;... {0} {n}&quot;, n = $N)]
pub struct TupleError(i32);
</code></pre>
</li>
<li>
<p>Code containing invocations of thiserror's <code>derive(Error)</code>
must now have a direct dependency on the <code>thiserror</code> crate
regardless of the error data structure's contents (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/368">#368</a>,
<a
href="https://redirect.github.com/dtolnay/thiserror/issues/369">#369</a>,
<a
href="https://redirect.github.com/dtolnay/thiserror/issues/370">#370</a>,
<a
href="https://redirect.github.com/dtolnay/thiserror/issues/372">#372</a>)</p>
</li>
</ul>
<h2>Features</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="15fd26e476"><code>15fd26e</code></a>
Release 2.0.3</li>
<li><a
href="7046023130"><code>7046023</code></a>
Simplify how has_bonus_display is accumulated</li>
<li><a
href="9cc1d0b251"><code>9cc1d0b</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>
from dtolnay/nowrap</li>
<li><a
href="1d040f358a"><code>1d040f3</code></a>
Use Var wrapper only for Pointer formatting</li>
<li><a
href="6a6132d79b"><code>6a6132d</code></a>
Extend no-display ui test to cover another fmt trait</li>
<li><a
href="a061beb9dc"><code>a061beb</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>
from dtolnay/both</li>
<li><a
href="63882935be"><code>6388293</code></a>
Support Display and Debug of same path in error message</li>
<li><a
href="dc0359eeec"><code>dc0359e</code></a>
Defer binding_value construction</li>
<li><a
href="520343e37d"><code>520343e</code></a>
Add test of Debug and Display of paths</li>
<li><a
href="49be39dee1"><code>49be39d</code></a>
Release 2.0.2</li>
<li>Additional commits viewable in <a
href="https://github.com/dtolnay/thiserror/compare/1.0.69...2.0.3">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=thiserror&package-manager=cargo&previous-version=1.0.69&new-version=2.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-20 09:19:37 +08:00
582b5f45e8 Bump shadow-rs from 0.35.2 to 0.36.0 (#14396)
Bumps [shadow-rs](https://github.com/baoyachi/shadow-rs) from 0.35.2 to
0.36.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/baoyachi/shadow-rs/releases">shadow-rs's
releases</a>.</em></p>
<blockquote>
<h2>v0.36.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat(HookExt): Add extended hook functionality with custom deny
lists by <a
href="https://github.com/baoyachi"><code>@​baoyachi</code></a> in <a
href="https://redirect.github.com/baoyachi/shadow-rs/pull/190">baoyachi/shadow-rs#190</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/baoyachi/shadow-rs/compare/v0.35.2...v0.36.0">https://github.com/baoyachi/shadow-rs/compare/v0.35.2...v0.36.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="909510eb5d"><code>909510e</code></a>
Merge pull request <a
href="https://redirect.github.com/baoyachi/shadow-rs/issues/190">#190</a>
from baoyachi/hook_ext</li>
<li><a
href="bad046d7a0"><code>bad046d</code></a>
Update Cargo.toml</li>
<li><a
href="84096a02c0"><code>84096a0</code></a>
feat(HookExt): Add extended hook functionality with custom deny
lists</li>
<li>See full diff in <a
href="https://github.com/baoyachi/shadow-rs/compare/v0.35.2...v0.36.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=shadow-rs&package-manager=cargo&previous-version=0.35.2&new-version=0.36.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-20 09:19:24 +08:00
eb0b6c87d6 Add mac and IP address entries to sys net (#14389)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

What it says on the tin, this change adds the `mac` and `ip` columns to
the `sys net` command, where `mac` is the interface mac address and `ip`
is a record containing ipv4 and ipv6 addresses as well as whether or not
the address is loopback and multicast. I thought it might be useful to
have this information available in Nushell. This change basically just
pulls extra information out of the underlying structs in the
`sysinfo::Networks` struct. Here's a screenshot from my system:

![Screenshot from 2024-11-19
11-59-54](https://github.com/user-attachments/assets/92c2d72c-b0d0-49c0-8167-9e1ce853acf1)


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

- Adds `mac` and `ip` columns to the `sys net` command, where `mac`
contains the interface's mac address and `ip` contains information
extracted from the `std::net::IpAddr` struct, including address,
protocol, whether or not the address is loopback, and whether or not
it's multicast
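
A quick sketch of inspecting the new columns (the exact nested `ip` fields will
vary per interface):

```nushell
# list each interface's name alongside the new mac and ip columns
sys net | select name mac ip
```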

# Tests + Formatting
Didn't add any tests specifically, didn't seem like there were any
relevant tests. Ran existing tests and formatting.

<!--
Don't forget to add tests that cover your changes. 

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->
2024-11-19 16:20:52 -06:00
b6ce907928 nu-table/ Do footer_inheritance by accouting for rows rather then a f… (#14380)
So it's my take on the comments in #14060 

The change could be seen in this test.
Looks like it works :) but I haven't done a lot of testing.


0b1af77415/crates/nu-command/tests/commands/table.rs (L3032-L3062)

```nushell
$env.config.table.footer_inheritance = true;
$env.config.footer_mode = 7;
[[a b]; ['kv' {0: [[field]; [0] [1] [2] [3] [4] [5]]} ], ['data' 0], ['data' 0] ] | table --expand --width=80
```

```text
╭───┬──────┬───────────────────────╮
│ # │  a   │           b           │
├───┼──────┼───────────────────────┤
│ 0 │ kv   │ ╭───┬───────────────╮ │
│   │      │ │   │ ╭───┬───────╮ │ │
│   │      │ │ 0 │ │ # │ field │ │ │
│   │      │ │   │ ├───┼───────┤ │ │
│   │      │ │   │ │ 0 │     0 │ │ │
│   │      │ │   │ │ 1 │     1 │ │ │
│   │      │ │   │ │ 2 │     2 │ │ │
│   │      │ │   │ │ 3 │     3 │ │ │
│   │      │ │   │ │ 4 │     4 │ │ │
│   │      │ │   │ │ 5 │     5 │ │ │
│   │      │ │   │ ╰───┴───────╯ │ │
│   │      │ ╰───┴───────────────╯ │
│ 1 │ data │                     0 │
│ 2 │ data │                     0 │
├───┼──────┼───────────────────────┤
│ # │  a   │           b           │
╰───┴──────┴───────────────────────╯
```

Maybe it will also solve the issue you @fdncred encountered.

close #14060
cc: @NotTheDr01ds
2024-11-19 15:31:28 -06:00
9cffbdb42a remove deprecated warnings (#14386)
# Description
While looking into Nushell's deprecation-related code, I found that `str
contains` still carried some deprecation warnings that should have been removed.
2024-11-19 07:52:58 -06:00
d69e131450 Rely on display_output hook for formatting values from evaluations (#14361)
# Description

I was reading through the documentation yesterday, when I stumbled upon
[this
section](https://www.nushell.sh/book/pipelines.html#behind-the-scenes)
explaining how command output is formatted using the `table` command. I
was surprised that this section didn't mention the `display_output`
hook, so I took a look in the code and was shocked to discovered that
the documentation was correct, and the `table` command _is_
automatically applied to printed pipelines.

This auto-tabling has two ramifications for the `display_output` hook:

1. The `table` command is called on the output of a pipeline after the
`display_output` has run, even if `display_output` contains the table
command. This means each pipeline output is roughly equivalent to the
following (using `ls` as an example):
    ```nushell
    ls | do $config.hooks.display_output | table
    ```
2. If `display_output` returns structured data, it will _still_ be
formatted through the table command.

This PR removes the auto-table when the `display_output` hook is set.
The auto-table made sense before `display_output` was introduced, but to
me, it now seems like unnecessary "automagic" which can be accomplished
using existing Nushell features.

This means that you can now pull back the curtain a bit, and replace
your `display_output` hook with an empty closure
(`$env.config.hooks.display_output = {||}`, setting it to null retains
the previous behavior) to see the values printed normally without the
table formatting. I think this is a good thing, and makes it easier to
understand Nushell fundamentals.
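
A small sketch of that experiment, taken from the description above:

```nushell
# with an empty display_output hook, pipeline output is no longer piped
# through `table` automatically
$env.config.hooks.display_output = {||}
ls   # prints the underlying value without the usual table rendering
```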

It is important to note that this PR does not change how `print` and
other commands (well, specifically only `watch`) print out values. They
continue to use `table` with no arguments, so changing your
config/`display_output` hook won't affect what `print`ing a value does.

Rel: [Discord
discussion](https://discord.com/channels/601130461678272522/615329862395101194/1307102690848931904)
(cc @dcarosone)

# User-Facing Changes

Pipelines are no longer automatically formatted using the `table`
command. Instead, the `display_output` hook is used to format pipeline
output. Most users should see no impact, as the default `display_output`
hook already uses the `table` command.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting


Will update mentioned docs page to call out `display_output` hook.
2024-11-19 21:04:29 +08:00
6e84ba182e Bump quick-xml to 0.37.0 (#14354)
# Description
Bump `quick-xml` to `0.37.0`.

This came about rebasing `nushell` in Fedora, which now has `quick-xml`
0.36.

There is one breaking change in 0.33 as far as `nu-command` is
concerned, in that `Event::PI` is now a dedicated `BytesPI` type:


https://github.com/tafia/quick-xml/blob/master/Changelog.md#misc-changes-5

I've tested compiling and testing locally with `0.33.0`, `0.36.0` and
`0.37.0` - but let's future-proof by requiring `0.37.0`.


# User-Facing Changes
N/A

# Tests + Formatting
No additional tests required, existing tests pass

# After Submitting
N/A

Signed-off-by: Michel Lind <salimma@fedoraproject.org>
2024-11-18 18:26:31 -06:00
6773dfce8d add --default flag to input command (#14374)
# Description
Closes: #14248

# User-Facing Changes
Added a `--default` flag to input command, and it also added an extra
output to prompt:
```
>  let x = input -d 18 "input your age"
input your age (default: 18)
> $x
18
> let x = input -d 18

> $x
18
```

# Tests + Formatting
I don't think it's easy to add a test for it :-(
2024-11-18 17:14:12 -06:00
13ce9e4f64 update uutils crates (#14371)
# Description

This PR updates the uutils/coreutils crates to the latest version. I
hard-coded `debug` to false, which is a new `uu_mv` parameter. It may be
interesting to expose that later, but I just wanted to get all the uu crates
on the same version.

I had to update the tests because `--no-clobber` still works but no longer
reports anything when it skips clobbering, whereas previously we were checking
for an error message.


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-17 19:31:36 -06:00
f63f8cb154 Add utouch command from uutils/coreutils (#11817)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

Part of https://github.com/nushell/nushell/issues/11549

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

This PR adds a `utouch` command that uses the `touch` command from
https://github.com/uutils/coreutils. Eventually, `utouch` may be able to
replace `touch`.

The conflicts in Cargo.lock and Cargo.toml are because I'm using the
uutils/coreutils main rather than the latest release, since the changes
that expose `uu_touch`'s internal functionality aren't available in the
latest release.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Users will have access to a new `utouch` command with the following
flags:
todo

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-17 18:03:21 -06:00
6e1118681d make command signature parsing more strict (#14309)
# User-Facing Changes

The parser now errors on more invalid command signatures:

```nushell
# expected parameter or flag
def foo [ bar: int: ] {}

# expected type
def foo [ bar: =  ] {}
def foo [ bar: ] {}

# expected default value
def foo [ bar = ] {}
```
2024-11-18 08:01:52 +08:00
e5cec8f4eb fix(group-by): re #14337 name collision prevention (#14360)
A more involved solution to the issue pointed out
[here](https://github.com/nushell/nushell/pull/14337#issuecomment-2480392373)

# Description

With `--to-table`
- cell-path groupers are used to create column names, similar to
`select`
- closure groupers result in columns named `closure_{i}` where `i` is
the index of the argument relative to the other closures, i.e. the first
closure grouper results in a column named `closure_0`

  Previously
  - `group-by foo {...} {...}` => `table<foo, group1, group2, items>`
  - `group-by {...} foo {...}` => `table<group0, foo, group2, items>`
  
  With this PR
- `group-by foo {...} {...}` => `table<foo, closure_0, closure_1,
items>`
- `group-by {...} foo {...}` => `table<closure_0, foo, closure_1,
items>`
- no grouper argument results in a `table<group, items>` as previously

On naming conflicts caused by cell-path groupers named `items` or
`closure_{i}`, an error is thrown, suggesting to use a closure in place
of a cell-path.

```nushell
❯ ls | rename items | group-by items --to-table 
Error:   × grouper arguments can't be named `items`
   ╭─[entry #3:1:29]
 1 │ ls | rename items | group-by items --to-table 
   ·                             ────────┬────────
   ·                                     ╰── contains `items`
   ╰────
  help: instead of a cell-path, try using a closure
```
And following the suggestion:
```nushell
❯ ls | rename items | group-by { get items } --to-table 
╭─#──┬──────closure_0──────┬───────────────────────────items────────────────────────────╮
│ 0  │ CITATION.cff        │ ╭─#─┬────items─────┬─type─┬─size──┬───modified───╮         │
│    │                     │ │ 0 │ CITATION.cff │ file │ 812 B │ 3 months ago │         │
│    │                     │ ╰─#─┴────items─────┴─type─┴─size──┴───modified───╯         │
│ 1  │ CODE_OF_CONDUCT.md  │ ╭─#─┬───────items────────┬─type─┬──size───┬───modified───╮ │
...
```
2024-11-17 17:25:53 -06:00
6c36bd822c Fix doc and code comment typos (#14366)
# User-Facing Changes

* Fixes `polars value-counts --column` help text typo
* Fixes `polars agg-groups` help text typo
2024-11-17 19:17:35 +01:00
029c586717 fix ansi bleed over on right prompt (#14357)
# Description

In certain situations, we had ansi bleed on the right prompt. This PR
fixes that by prefixing the right prompt with an ansi reset `\x1b[0m`.
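
For example, a right prompt along these lines should no longer leak its color
into the rest of the line (the closure below is only an illustrative prompt,
not something added by this PR):

```nushell
# the reset is now injected before this string is rendered
$env.PROMPT_COMMAND_RIGHT = {|| $"(ansi green)(date now | format date '%H:%M')(ansi reset)" }
```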

This PR also adds some --log-level warn logging so we can see the ansi
escapes that form the prompts.

Closes https://github.com/nushell/nushell/issues/14268
2024-11-17 19:47:09 +08:00
ea6493c041 Seq char update will work on all char (#14261)
# Description - fixes #14174

This PR addresses a bug in the `seq char` command where the command's
behavior did not align with its help description, which stated that it
prints a sequence of ASCII characters. The initial implementation only
allowed alphabetic characters, leading to user confusion when
non-alphabetic characters (e.g., digits, punctuation) were rejected or
when unexpected behavior occurred for certain input ranges.

### Changes Made:
- **Updated the input validation**: Modified the `is_single_character`
function to accept any ASCII character instead of restricting to
alphabetic characters.
- **Enhanced error messages**: Clarified error messages to specify that
any single ASCII character is acceptable.
- **Expanded functionality**: Ensured that the command can now generate
sequences that include non-alphabetic ASCII characters.
- **Updated tests**: Added tests to cover new use cases involving
non-alphabetic characters and improved validation.

### Examples After Fix:
- `seq char '0' '9'` now outputs `['0', '1', '2', '3', '4', '5', '6',
'7', '8', '9']`
- `seq char ' ' '/'` outputs a list of characters from space to `/`
- `seq char 'A' 'z'` correctly includes alphabetic and non-alphabetic
characters between `A` and `z`
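
For reference, the first two of those as runnable snippets (expected outputs
taken from the bullets above):

```nushell
seq char '0' '9' | str join   # => "0123456789"
seq char ' ' '/'              # list from space through '/'
```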

# User-Facing Changes
- Users can now input any single ASCII character for the `start` and
`end` parameters of `seq char`.
- The output will accurately include all characters within the specified
ASCII range, including digits and punctuation.

# Tests + Formatting
- Added new tests to ensure the `seq char` command supports sequences
including non-alphabetic ASCII characters.
2024-11-15 21:05:29 +01:00
455d32d9e5 Cut down unnecessary lint allows (#14335)
Trying to reduce lint allows either by checking if they are former false
positives or by fixing the underlying warning.

- **Remove dead `allow(dead_code)`**
- **Remove recursive dead code**
- **Remove dead code**
- **Move test only functions to test module**
  The unit tests that use them are themselves somewhat suspect, in that they
mock the usage rather than testing the specific methods of the implementation
that are actually used, so there is a risk of divergence
- **Remove `clippy::uninit_vec` allow.**
  May have been a false positive, or the impl has changed somewhat.
We certainly want to look at the unsafe code here to vet for
correctness.
2024-11-15 19:24:39 +01:00
7bd801a167 Update rstest from 0.18 to 0.23 (the current version) (#14350) 2024-11-15 19:18:01 +01:00
b6e84879b6 add multiple grouper support to group-by (#14337)
- closes #14330 

Related:
- #2607 
- #14019
- #14316 

# Description
This PR changes `group-by` to support grouping by multiple `grouper`
arguments.

# Changes

- No grouper: no change in behavior 
- Single grouper
  - `--to-table=false`: no change in behavior
  - `--to-table=true`:
    - closure grouper: named group0
    - cell-path grouper: named after the cell-path
- Multiple groupers:
  - `--to-table=false`: nested groups
- `--to-table=true`: one column for each grouper argument, followed by
the `items` column
    - columns corresponding to cell-paths are named after them
- columns corresponding to closure groupers are named `group{i}` where
`i` is the index of the grouper argument

# Examples
```nushell
> [1 3 1 3 2 1 1] | group-by
╭───┬───────────╮
│   │ ╭───┬───╮ │
│ 1 │ │ 0 │ 1 │ │
│   │ │ 1 │ 1 │ │
│   │ │ 2 │ 1 │ │
│   │ │ 3 │ 1 │ │
│   │ ╰───┴───╯ │
│   │ ╭───┬───╮ │
│ 3 │ │ 0 │ 3 │ │
│   │ │ 1 │ 3 │ │
│   │ ╰───┴───╯ │
│   │ ╭───┬───╮ │
│ 2 │ │ 0 │ 2 │ │
│   │ ╰───┴───╯ │
╰───┴───────────╯

> [1 3 1 3 2 1 1] | group-by --to-table
╭─#─┬─group─┬───items───╮
│ 0 │ 1     │ ╭───┬───╮ │
│   │       │ │ 0 │ 1 │ │
│   │       │ │ 1 │ 1 │ │
│   │       │ │ 2 │ 1 │ │
│   │       │ │ 3 │ 1 │ │
│   │       │ ╰───┴───╯ │
│ 1 │ 3     │ ╭───┬───╮ │
│   │       │ │ 0 │ 3 │ │
│   │       │ │ 1 │ 3 │ │
│   │       │ ╰───┴───╯ │
│ 2 │ 2     │ ╭───┬───╮ │
│   │       │ │ 0 │ 2 │ │
│   │       │ ╰───┴───╯ │
╰─#─┴─group─┴───items───╯

> [1 3 1 3 2 1 1] | group-by { $in >= 2 }
╭───────┬───────────╮
│       │ ╭───┬───╮ │
│ false │ │ 0 │ 1 │ │
│       │ │ 1 │ 1 │ │
│       │ │ 2 │ 1 │ │
│       │ │ 3 │ 1 │ │
│       │ ╰───┴───╯ │
│       │ ╭───┬───╮ │
│ true  │ │ 0 │ 3 │ │
│       │ │ 1 │ 3 │ │
│       │ │ 2 │ 2 │ │
│       │ ╰───┴───╯ │
╰───────┴───────────╯

> [1 3 1 3 2 1 1] | group-by { $in >= 2 } --to-table
╭─#─┬─group0─┬───items───╮
│ 0 │ false  │ ╭───┬───╮ │
│   │        │ │ 0 │ 1 │ │
│   │        │ │ 1 │ 1 │ │
│   │        │ │ 2 │ 1 │ │
│   │        │ │ 3 │ 1 │ │
│   │        │ ╰───┴───╯ │
│ 1 │ true   │ ╭───┬───╮ │
│   │        │ │ 0 │ 3 │ │
│   │        │ │ 1 │ 3 │ │
│   │        │ │ 2 │ 2 │ │
│   │        │ ╰───┴───╯ │
╰─#─┴─group0─┴───items───╯
```

```nushell
let data = [
    [name, lang, year];
    [andres, rb, "2019"],
    [jt, rs, "2019"],
    [storm, rs, "2021"]
]

> $data
╭─#─┬──name──┬─lang─┬─year─╮
│ 0 │ andres │ rb   │ 2019 │
│ 1 │ jt     │ rs   │ 2019 │
│ 2 │ storm  │ rs   │ 2021 │
╰─#─┴──name──┴─lang─┴─year─╯
```

```nushell
> $data | group-by lang
╭────┬──────────────────────────────╮
│    │ ╭─#─┬──name──┬─lang─┬─year─╮ │
│ rb │ │ 0 │ andres │ rb   │ 2019 │ │
│    │ ╰─#─┴──name──┴─lang─┴─year─╯ │
│    │ ╭─#─┬─name──┬─lang─┬─year─╮  │
│ rs │ │ 0 │ jt    │ rs   │ 2019 │  │
│    │ │ 1 │ storm │ rs   │ 2021 │  │
│    │ ╰─#─┴─name──┴─lang─┴─year─╯  │
╰────┴──────────────────────────────╯
```

Group column is now named after the grouper, to allow multiple groupers.
```nushell
> $data | group-by lang --to-table  # column names changed!
╭─#─┬─lang─┬────────────items─────────────╮
│ 0 │ rb   │ ╭─#─┬──name──┬─lang─┬─year─╮ │
│   │      │ │ 0 │ andres │ rb   │ 2019 │ │
│   │      │ ╰─#─┴──name──┴─lang─┴─year─╯ │
│ 1 │ rs   │ ╭─#─┬─name──┬─lang─┬─year─╮  │
│   │      │ │ 0 │ jt    │ rs   │ 2019 │  │
│   │      │ │ 1 │ storm │ rs   │ 2021 │  │
│   │      │ ╰─#─┴─name──┴─lang─┴─year─╯  │
╰─#─┴─lang─┴────────────items─────────────╯
```

Grouping by multiple columns makes finer grained aggregations possible.
```nushell
> $data | group-by lang year --to-table
╭─#─┬─lang─┬─year─┬────────────items─────────────╮
│ 0 │ rb   │ 2019 │ ╭─#─┬──name──┬─lang─┬─year─╮ │
│   │      │      │ │ 0 │ andres │ rb   │ 2019 │ │
│   │      │      │ ╰─#─┴──name──┴─lang─┴─year─╯ │
│ 1 │ rs   │ 2019 │ ╭─#─┬─name─┬─lang─┬─year─╮   │
│   │      │      │ │ 0 │ jt   │ rs   │ 2019 │   │
│   │      │      │ ╰─#─┴─name─┴─lang─┴─year─╯   │
│ 2 │ rs   │ 2021 │ ╭─#─┬─name──┬─lang─┬─year─╮  │
│   │      │      │ │ 0 │ storm │ rs   │ 2021 │  │
│   │      │      │ ╰─#─┴─name──┴─lang─┴─year─╯  │
╰─#─┴─lang─┴─year─┴────────────items─────────────╯
```

Grouping by multiple columns, without `--to-table` returns a nested
structure.
This is equivalent to `$data | group-by year | split-by lang`, making
`split-by` obsolete.
```nushell
> $data | group-by lang year
╭────┬─────────────────────────────────────────╮
│    │ ╭──────┬──────────────────────────────╮ │
│ rb │ │      │ ╭─#─┬──name──┬─lang─┬─year─╮ │ │
│    │ │ 2019 │ │ 0 │ andres │ rb   │ 2019 │ │ │
│    │ │      │ ╰─#─┴──name──┴─lang─┴─year─╯ │ │
│    │ ╰──────┴──────────────────────────────╯ │
│    │ ╭──────┬─────────────────────────────╮  │
│ rs │ │      │ ╭─#─┬─name─┬─lang─┬─year─╮  │  │
│    │ │ 2019 │ │ 0 │ jt   │ rs   │ 2019 │  │  │
│    │ │      │ ╰─#─┴─name─┴─lang─┴─year─╯  │  │
│    │ │      │ ╭─#─┬─name──┬─lang─┬─year─╮ │  │
│    │ │ 2021 │ │ 0 │ storm │ rs   │ 2021 │ │  │
│    │ │      │ ╰─#─┴─name──┴─lang─┴─year─╯ │  │
│    │ ╰──────┴─────────────────────────────╯  │
╰────┴─────────────────────────────────────────╯
```

From #2607:
> Here's a couple more examples without much explanation. This one shows
adding two grouping keys. I'm always wanting to add more columns when
using group-by and it just-work™️ `gb.exe -f movies-2.csv -k 3,2 -s 7
--skip_header`
> 
> ```
>  k:3                   | k:2       | count | sum:7
> -----------------------+-----------+-------+--------------------
>  20th Century Fox      | Drama     | 1     | 117.09
>  20th Century Fox      | Romance   | 1     | 39.66
>  CBS                   | Comedy    | 1     | 77.09
>  Disney                | Animation | 4     | 1264.23
>  Disney                | Comedy    | 4     | 950.27
>  Fox                   | Comedy    | 5     | 661.85
>  Independent           | Comedy    | 7     | 399.07
>  Independent           | Drama     | 4     | 69.75
>  Independent           | Romance   | 7     | 1048.75
>  Independent           | romance   | 1     | 29.37
> ...
> ```

This example can be achieved like this:
```nushell
> open movies-2.csv
  | group-by "Lead Studio" Genre --to-table
  | insert count {get items | length}
  | insert sum { get items."Worldwide Gross" | math sum}
  | reject items
  | sort-by "Lead Studio" Genre
╭─#──┬──────Lead Studio──────┬───Genre───┬─count─┬───sum───╮
│ 0  │ 20th Century Fox      │ Drama     │     1 │  117.09 │
│ 1  │ 20th Century Fox      │ Romance   │     1 │   39.66 │
│ 2  │ CBS                   │ Comedy    │     1 │   77.09 │
│ 3  │ Disney                │ Animation │     4 │ 1264.23 │
│ 4  │ Disney                │ Comedy    │     4 │  950.27 │
│ 5  │ Fox                   │ Comedy    │     5 │  661.85 │
│ 6  │ Fox                   │ comedy    │     1 │   60.72 │
│ 7  │ Independent           │ Comedy    │     7 │  399.07 │
│ 8  │ Independent           │ Drama     │     4 │   69.75 │
│ 9  │ Independent           │ Romance   │     7 │ 1048.75 │
│ 10 │ Independent           │ romance   │     1 │   29.37 │
...
```
2024-11-15 06:40:49 -06:00
f7832c0e82 allow nuscripts to be run again on windows with assoc/ftype (#14318)
# Description

This PR tries to correct the problem of nushell scripts being made
executable on Windows systems. In order to do this, these steps need to
take place.
1. `assoc .nu=nuscript`
2. `ftype nuscript=C:\path\to\nu.exe '%1' %*`
3. modify the env var PATHEXT by appending `;.NU` at the end
 
Once those steps are done and this PR is landed, one should be able to
create a script such as this.
```nushell
❯ open im_exe.nu
def main [arg] {
  print $"Hello ($arg)!"
}
```
Then they should be able to do this to run the nushell script.
```nushell
❯ im_exe Nushell
Hello Nushell!
```

Under the hood, nushell is shelling out to cmd.exe in order to run the
nushell script.

# User-Facing Changes
closes #13020

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-11-15 06:39:42 -06:00
8c1ab7e0a3 Add proper config defaults for hooks (#14341)
# Release Notes Excerpt

* Hooks now default to an empty value of the proper type (e.g., `[]` or
`{}`) when not otherwise specified

# Description

```nushell
# Start with no config
nu -n
# Populate with defaults
$env.config = {}
$env.config.hooks
```

* Before: All hooks other than `display_output` were set to `null`.
Attempting to append a hook using `++=` would fail unless it had already
been assigned.
* After:
  * `pre_prompt`, `pre_execution`, and `command_not_found` are set to empty lists. This allows the user to simply append new hooks using `++=`.
  * `env_change` is set to an empty record. This allows the user to add new hooks using `merge`, although a "helper" command would still be useful (TODO: stdlib).
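
Not part of the PR text, but roughly what the new defaults allow (the hook bodies here are made up for illustration):

```nushell
# append a pre_prompt hook directly; no need to check for null first
$env.config.hooks.pre_prompt ++= [{ print "about to draw the prompt" }]

# env_change now starts as an empty record, so a hook can be merged in
$env.config.hooks.env_change = (
  $env.config.hooks.env_change
  | merge { PWD: [{|before, after| print $"now in ($after)" }] }
)
```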

Also fixed a typo in an error message.

# User-Facing Changes

There shouldn't be any breaking changes since (before) there were no
guarantees of the hook's value/type. Previously, users would have to
check for `null` and `default` to an empty list before appending. Any
user-strategies for dealing with the problem should continue to work
after this change.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

Note that, for reasons I cannot ascertain, this PR appears to have
*fixed* the `command_not_found_error_recognizes_non_executable_file`
test that was previously broken by #12953. That PR essentially rewrote
the test to match the new behavior, but it no longer tested what it was
intended to test.

Now, the test is working again as designed (and as it works in the
REPL).

# After Submitting

This will be covered in the Configuration update for #14249. This PR
will simplify several examples in the doc.
2024-11-14 20:27:26 -08:00
9d0f69ac50 Add support for converting polars decimal values to nushell values (#14343)
Adds support for converting from polars decimal type to nushell values.

This fix works by first converting a polars decimal series to an f64
series, then converting to Value::Float
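
A hedged illustration of the user-visible effect (file and column names are placeholders): a dataframe column with a decimal dtype should now convert back to nushell floats instead of erroring.

```nushell
# previously this conversion failed for decimal columns; now it yields floats
polars open prices.parquet | polars into-nu | get amount | describe
# => list<float>
```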

Co-authored-by: Jack Wright <jack.wright@nike.com>
2024-11-15 12:10:38 +08:00
215ca6c5ca Remove the NU_DISABLE_IR option (#14293)
# Description

Removes the `NU_DISABLE_IR` option and some code related to evaluating
blocks with the AST
evaluator.

Does not entirely remove the AST evaluator yet. We still have some
dependencies on expression
evaluation in a few minor places which will take a little bit of effort
to fix.

Also changes `debug profile` to always include instructions, because the
output is a little
confusing otherwise, and removes the different options for
instructions/exprs.

# User-Facing Changes

- `NU_DISABLE_IR` no longer has any effect, and is removed. There is no
way to use the AST
  evaluator.
- `debug profile` no longer has `--exprs`, `--instructions` options.
- `debug profile` lists `pc` and `instruction` columns by default now.

# Tests + Formatting

Eval tests fixed to only use IR.

# After Submitting

- [ ] release notes
- [ ] finish removing AST evaluator, come up with solutions for the
expression evaluation.
2024-11-15 12:09:25 +08:00
a04c90e22d make ls return "Permission denied" for CWD instead of empty results (#14310)
Fixes #14265

# User-Facing Changes

`ls` without a path argument now errors when the current working
directory is unreadable due to missing permissions:

```diff
mkdir foo
chmod 100 foo
cd foo
ls | to nuon
-[]
+Error:   × Permission denied
```
2024-11-15 12:09:02 +08:00
a84d410f11 Fix inconsistency in ls sort-order (#13875)
Fixes #13267 

As we can see from the bisect done in the comments, this was bisected to
https://github.com/nushell/nushell/pull/12625 /
460a1c8f87

That update introduced the use of `read_dir`, and the [rust
docs](https://doc.rust-lang.org/std/fs/fn.read_dir.html#platform-specific-behavior)
mention that it does **not** guarantee any specific order of entries.
Following the advice there, I applied a manual `sort` to the entries and
tested it manually on my local machine.

If required I could try to add tests for the order consistency, but I
would need some time to find my way around them, so I'm sending the PR
first.
2024-11-15 07:39:41 +08:00
636bae2466 Bump tempfile from 3.13.0 to 3.14.0 (#14326) 2024-11-14 09:32:55 +00:00
739a7ea730 Bump mockito from 1.5.0 to 1.6.1 (#14336) 2024-11-14 09:20:17 +00:00
3893fbb0b1 skip test_iteration_errors if /root is missing (#14299)
# Description

`test_iteration_errors` no longer requires `/root` to exist:

```
failures:

---- test::test_iteration_errors stdout ----
thread 'test::test_iteration_errors' panicked at crates/nu-glob/src/li
b.rs:1151:13:
assertion failed: next.is_some()
```

`/root` is an optional home directory in the [File Hierarchy
Standard][1].

I encountered this while running the tests in a `guix shell` container,
which doesn't include a root user.

[1]: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s14.html

# User-Facing Changes

None
2024-11-14 10:13:04 +01:00
948205c8e6 Bump serial_test from 3.1.1 to 3.2.0 (#14325) 2024-11-14 09:09:48 +00:00
6278afde8d Bump crate-ci/typos from 1.27.0 to 1.27.3 (#14321) 2024-11-14 09:08:19 +00:00
f0cb2dafbb Allow duration to be added to date (#14295)
# Description

Fixes #14294 - Turned out to be a whole lot easier than I expected, but
please double-check me on this, since it's an area I haven't been in
before.

# User-Facing Changes

Allow date to be added to a duration type.
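
For illustration (dates and durations here are arbitrary, not from the PR):

```nushell
# duration + date now works (date + duration already did)
2day + 2024-11-14
2024-11-14 + 2day
# subtracting a date from a duration is still rejected
# 2day - 2024-11-14  # => type error
```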

# Tests + Formatting

Tests added:

* Duration + Date is allowed
* Duration - Date is not allowed
2024-11-14 10:07:37 +01:00
a3c145432e Tests: add a test to make sure that function can't use mutable variable (#14314)
@sholderbach suggested that we need a test ensuring that a function can't
use a mutable variable.

https://github.com/nushell/nushell/pull/14311#issuecomment-2470035194

So this PR adds a case for it.
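
Roughly the case being covered (a sketch, not the literal test code):

```nushell
# custom commands and closures cannot capture a mutable variable
mut x = 1
def foo [] { $x }
# => parse-time error: capture of mutable variable
```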

---------

Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
2024-11-14 10:05:33 +01:00
e6f55da080 Bump to dev version 0.100.1 (#14328) 2024-11-14 10:04:39 +01:00
30f98f7e64 Downgrade softprops/action-gh-release to 2.0.5 (#14327)

# Description
Downgrade `softprops/action-gh-release` to 2.0.5 to fix the release per
asset mess.
It works in https://github.com/nushell/nushell/actions/runs/11809766842
with the release draft:

https://github.com/nushell/nushell/releases/tag/untagged-c055298a78ddb780bd01,
more detail could be found here:
https://github.com/softprops/action-gh-release/issues/445

2024-11-13 11:28:47 +08:00
c9409a2edb Bump version to 0.100.0 (#14312)

# Description

Bump version to `0.100.0`

# User-Facing Changes

The new release `v0.100.0` is coming...
2024-11-12 22:22:38 +02:00
b857064d65 Pin reedline to 0.37.0 release (#14317) 2024-11-12 20:34:46 +01:00
a541382776 Fix binary example and add one for text uploads (#14307)
# Description

In #14291, I misunderstood the use-case for `into binary` with `http
post`. Thanks again to @weirdan for steering me straight on that. This
reverts the example that I changed and adds a new one for uploading text
files.
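
A sketch of what a text-file upload can look like (URL and file name are placeholders, not the exact example added to the docs):

```nushell
# send the raw contents of a text file as the request body
http post --content-type "text/plain" https://example.com/upload (open --raw notes.txt)
```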

# User-Facing Changes

Doc-only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-11-11 12:49:49 -06:00
07ad24ab97 Fix ignored into datetime test (#14302)
# Description

Fixes the test which was ignored in #14297. Also fixes the related example.

Tests now use local timezone to match actual result.

More discussion in #14266

# User-Facing Changes

Tests-only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-11-11 06:01:39 -06:00
55db643048 ignore without_timezone test for now (#14297)
# Description

Since the human-date-parser was switched to use the user's local
timezone, this test may not be needed anymore. I've just ignored it for
now and put a comment about why it's being ignored.

There are more discussions on this topic here
https://github.com/nushell/nushell/pull/14266

2024-11-10 07:35:18 -06:00
8f9b198d48 upgrade bracoxide to v0.1.4 (fixes #14290) (#14296)
I'm sorry I'm not following the PR template, but this is a quick fix.

Fixes #14290
2024-11-10 07:00:42 -06:00
6c7129cc0c Fix multipart/form-data post example (#14291)
# Description

Thanks to @weirdan [in
Discord](https://discord.com/channels/601130461678272522/614593951969574961/1304508148207583345)
for pointing out that correct syntax for `http post --content-type
multipart/form-data`.

The existing example was incomplete, so I've updated it.
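
From memory, the corrected shape is a record whose binary-valued fields become file parts; treat this as a sketch (URL and file names are placeholders) rather than the exact example from the docs:

```nushell
let form = {
  file: (open --raw image.png)          # binary value -> sent as a file part
  description: "uploaded from nushell"  # string value -> ordinary form field
}
http post --content-type "multipart/form-data" https://example.com/upload $form
```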

# User-Facing Changes

Doc-only

# Tests + Formatting

`toolkit test` currently seems to be broken, so relying on CI

# After Submitting

N/A
2024-11-09 18:09:17 -06:00
919d55f3fc Remove unneeded clones in select (#14283)
# Description

This PR removes some unneeded `clone()` calls in the implementation of
`select`.

# User-Facing Changes

There are no user-facing changes.
2024-11-08 06:37:38 +00:00
bdf63420d1 update reedline to the latest commit (#14281)
# Description

This PR updates reedline to the latest commit. 7a1b344a.

2024-11-07 07:15:57 -06:00
b7af715f6b IR: Don't generate instructions for def and export def. (#14114)
# Description
Fixes: #14110
Fixes: #14087

I think it's OK not to generate instructions for `def` and `export def`
calls, because they just return `PipelineData::Empty` without doing
anything.

If nushell generated instructions for `def` and `export def`, it would
try to capture variables for these blocks, and that's not the right time
to do so.

# User-Facing Changes
```
nu -c "
def bar [] {
    let x = 1
    ($x | foo)
}
def foo [] {
    foo
}
" 
```
Will no longer raise error.

# Tests + Formatting
Added 4 tests
2024-11-06 21:35:00 -08:00
b6eda33438 allow != for polars (#14263)
# Description

This PR fixes a problem where not equal in polars wasn't working with
strings.

## Before
```nushell
let a = ls | polars into-df
$a.type != "dir"
Error: nu::shell::type_mismatch

  × Type mismatch during operation.
   ╭─[entry #16:1:1]
 1 │ $a.type != "dir"
   · ─┬      ─┬ ──┬──
   ·  │       │   ╰── string
   ·  │       ╰── type mismatch for operator
   ·  ╰── NuDataFrame
   ╰────
```

## After
```nushell
let a = ls | polars into-df
$a.type != "dir"
╭──#──┬─type──╮
│ 0   │ false │
│ 1   │ false │
│ 2   │ false │
...
```

/cc @ayax79 to make sure I did this right.

2024-11-06 15:58:22 -08:00
ab641d9f18 Fix the order of preference for VISUAL and EDITOR (#14275)
# Description

The order in which Nushell consulted `$env.EDITOR` and `$env.VISUAL` was
wrong. Most other programs check `$env.VISUAL` first and then fall back
to `$env.EDITOR` (for historic reasons).

References:

*
https://wiki.archlinux.org/title/Environment_variables#Default_programs
*
https://help.ubuntu.com/community/EnvironmentVariables#Preferred_application_variables
 * https://unix.stackexchange.com/a/4861
 * https://git-scm.com/docs/git-var

# User-Facing Changes

Users will now be able to use those preferences variables the same way
they are used in other programs.
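
For instance, with both variables set (the editor names below are just examples), editor-launching commands such as `config nu` should now pick `VISUAL` first:

```nushell
# in env.nu / config.nu
$env.VISUAL = "hx"   # preferred, checked first
$env.EDITOR = "vi"   # fallback when VISUAL is unset
```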

# Tests + Formatting

That part wasn't tested before, and I don't think it's necessary to test
it now.

# After Submitting

PR to the docs repo is here: nushell/nushell.github.io#1621
2024-11-06 17:01:57 -06:00
c7e128eed1 add table params support to url join and url build-query (#14239)
Add `table<key, value>` support to `url join` for the `params` field,
and as input to `url build-query` #14162

# Description
```nushell
{
    "scheme": "http",
    "username": "usr",
    "password": "pwd",
    "host": "localhost",
    "params": [
        ["key", "value"];
        ["par_1", "aaa"],
        ["par_2", "bbb"],
        ["par_1", "ccc"],
        ["par_2", "ddd"],
    ],
    "port": "1234",
} | url join
```
```
http://usr:pwd@localhost:1234?par_1=aaa&par_2=bbb&par_1=ccc&par_2=ddd
```

---

```nushell
[
    ["key", "value"];
    ["par_1", "aaa"],
    ["par_2", "bbb"],
    ["par_1", "ccc"],
    ["par_2", "ddd"],
] | url build-query
```
```
par_1=aaa&par_2=bbb&par_1=ccc&par_2=ddd
```

# User-Facing Changes

## `url build-query`

- can no longer accept one row table input as if it were a record

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-11-06 08:09:40 -06:00
cc0259bbed don't include import path in args to aliased external commands (#14231)
Fixes #13776

# User-Facing Changes

Arguments to aliased externals no longer include nested import paths:

```diff
module foo { export alias bar = ^echo }
use foo
foo bar baz
-bar baz
+baz
```
2024-11-06 07:40:29 -06:00
23fba6d2ea correctly parse table literals as lists (#14226)
# User-Facing Changes

Table literal arguments to list parameters are now correctly parsed:

```diff
def a [l: list<any>] { $l | to nuon }; a [[a]; [2]]
-[[a]]
+[[a]; [2]]
```
2024-11-06 07:36:56 -06:00
3182adb6a0 Url split query (#14211)
Addresses the following points from #14162

> - There is no built-in counterpart to url build-query for splitting a
query string

There is `from url`, which, due to naming, is a little hard to discover
and suffers from the following point

> - url parse can create records with duplicate keys
> - url parse's params should either:
>   - ~group the same keys into a list.~
> - instead of a record, be a key-value table. (table<key: string,
value: string>)

# Description

## `url split-query`

Counterpart to `url build-query`, splits a url encoded query string to
key value pairs, represented as `table<key: string, value: string>`

```
> "a=one&a=two&b=three" | url split-query
╭───┬─────┬───────╮
│ # │ key │ value │
├───┼─────┼───────┤
│ 0 │ a   │ one   │
│ 1 │ a   │ two   │
│ 2 │ b   │ three │
╰───┴─────┴───────╯
```

## `url parse`

The output's `param` field is now a table as well, mirroring the new
`url split-query`

```
> 'http://localhost?a=one&a=two&b=three' | url parse
╭──────────┬─────────────────────╮
│ scheme   │ http                │
│ username │                     │
│ password │                     │
│ host     │ localhost           │
│ port     │                     │
│ path     │ /                   │
│ query    │ a=one&a=two&b=three │
│ fragment │                     │
│          │ ╭───┬─────┬───────╮ │
│ params   │ │ # │ key │ value │ │
│          │ ├───┼─────┼───────┤ │
│          │ │ 0 │ a   │ one   │ │
│          │ │ 1 │ a   │ two   │ │
│          │ │ 2 │ b   │ three │ │
│          │ ╰───┴─────┴───────╯ │
╰──────────┴─────────────────────╯
```

# User-Facing Changes

- `url parse`'s output has the mentioned change, which is backwards
incompatible.
2024-11-06 07:35:37 -06:00
d52ec65f18 update human-date-parser conversion to use local timezone (#14266)
# Description

This PR tries to fix https://github.com/nushell/nushell/issues/14195 by
setting the local time and timezone after conversion without changing
the time.

### Before
```nushell
❯ 'in 10 minutes' | into datetime
Tue, 5 Nov 2024 12:59:58 -0600 (in 9 minutes)
❯ 'yesterday' | into datetime
Sun, 3 Nov 2024 18:00:00 -0600 (2 days ago)
❯ 'tomorrow' | into datetime
Tue, 5 Nov 2024 18:00:00 -0600 (in 5 hours)
❯ 'today' | into datetime
Mon, 4 Nov 2024 18:00:00 -0600 (18 hours ago)
```

### After (these are correct)
```nushell
❯ 'in 10 minutes' | into datetime
Tue, 5 Nov 2024 12:58:44 -0600 (in 9 minutes)
❯ 'yesterday' | into datetime
Mon, 4 Nov 2024 12:49:04 -0600 (a day ago)
❯ 'tomorrow' | into datetime
Wed, 6 Nov 2024 12:49:20 -0600 (in a day)
❯ 'today' | into datetime
Tue, 5 Nov 2024 12:52:06 -0600 (now)
```

2024-11-06 07:14:00 -06:00
b968376be9 Bump crate-ci/typos from 1.26.8 to 1.27.0 (#14272)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.26.8 to
1.27.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.27.0</h2>
<h2>[1.27.0] - 2024-11-01</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1106">October
2024</a> changes</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.27.0] - 2024-11-01</h2>
<h3>Features</h3>
<ul>
<li>Updated the dictionary with the <a
href="https://redirect.github.com/crate-ci/typos/issues/1106">October
2024</a> changes</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d01f29c66d"><code>d01f29c</code></a>
chore: Release</li>
<li><a
href="52e950bb13"><code>52e950b</code></a>
chore: Release</li>
<li><a
href="19cfc03ea4"><code>19cfc03</code></a>
docs: Update changelog</li>
<li><a
href="f80b1564bd"><code>f80b156</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1140">#1140</a>
from epage/oct</li>
<li><a
href="6b5c8079a9"><code>6b5c807</code></a>
feat(dict): Oct updates</li>
<li><a
href="d64f202a88"><code>d64f202</code></a>
chore(deps): Update compatible (<a
href="https://redirect.github.com/crate-ci/typos/issues/1137">#1137</a>)</li>
<li><a
href="e903c46287"><code>e903c46</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1136">#1136</a>
from PigeonF/PigeonF/push-mlqnlvmswwmp</li>
<li><a
href="b994765ef9"><code>b994765</code></a>
chore: Fix typo &quot;potemtial&quot; -&gt; &quot;potential&quot;</li>
<li>See full diff in <a
href="https://github.com/crate-ci/typos/compare/v1.26.8...v1.27.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.26.8&new-version=1.27.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wind <WindSoilder@outlook.com>
2024-11-06 10:48:34 +08:00
90bd8c82b7 Bump notify-debouncer-full from 0.3.1 to 0.3.2 (#14271)
Bumps [notify-debouncer-full](https://github.com/notify-rs/notify) from
0.3.1 to 0.3.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/notify-rs/notify/releases">notify-debouncer-full's
releases</a>.</em></p>
<blockquote>
<h2>debouncer-full-0.3.2</h2>
<h2>What's Changed</h2>
<ul>
<li>FIX: ordering of debounced events could lead to a panic with Rust
1.81.0 and above by <a
href="https://github.com/dfaust"><code>@​dfaust</code></a> in <a
href="https://redirect.github.com/notify-rs/notify/pull/643">notify-rs/notify#643</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/notify-rs/notify/compare/debouncer-full-0.3.1...debouncer-full-0.3.2">https://github.com/notify-rs/notify/compare/debouncer-full-0.3.1...debouncer-full-0.3.2</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/notify-rs/notify/blob/main/CHANGELOG.md">notify-debouncer-full's
changelog</a>.</em></p>
<blockquote>
<h2>debouncer-full 0.3.2 (2024-09-29)</h2>
<ul>
<li>FIX: ordering of debounced events could lead to a panic with Rust
1.81.0 and above <a
href="https://redirect.github.com/notify-rs/notify/issues/636">#636</a></li>
</ul>
<p><a
href="https://redirect.github.com/notify-rs/notify/issues/636">#636</a>:
<a
href="https://redirect.github.com/notify-rs/notify/issues/636">notify-rs/notify#636</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2bef540ff4"><code>2bef540</code></a>
Merge pull request <a
href="https://redirect.github.com/notify-rs/notify/issues/643">#643</a>
from dfaust/release-debouncer-full-0.3.2</li>
<li><a
href="ef8bc72b4b"><code>ef8bc72</code></a>
Fix debouncer-full version number and prepare release</li>
<li><a
href="3606af8005"><code>3606af8</code></a>
Fix compatibility with MSRV 1.60</li>
<li><a
href="f5c47023a6"><code>f5c4702</code></a>
Improve <code>sort_events</code> performance</li>
<li><a
href="8f809f7197"><code>8f809f7</code></a>
Fix ordering of debounced events</li>
<li>See full diff in <a
href="https://github.com/notify-rs/notify/compare/debouncer-full-0.3.1...debouncer-full-0.3.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notify-debouncer-full&package-manager=cargo&previous-version=0.3.1&new-version=0.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:31:30 +08:00
0955e8c5b6 Bump scraper from 0.20.0 to 0.21.0 (#14270)
Bumps [scraper](https://github.com/causal-agent/scraper) from 0.20.0 to
0.21.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/causal-agent/scraper/releases">scraper's
releases</a>.</em></p>
<blockquote>
<h2>0.21.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump indexmap from 2.3.0 to 2.4.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/197">rust-scraper/scraper#197</a></li>
<li>Bump ego-tree from 0.6.2 to 0.7.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/198">rust-scraper/scraper#198</a></li>
<li>migrate once_cell::unsync::OnceCell to std::cell::OnceCell + drop
dep… by <a
href="https://github.com/LoZack19"><code>@​LoZack19</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/199">rust-scraper/scraper#199</a></li>
<li>Introduce workspaces by <a
href="https://github.com/LoZack19"><code>@​LoZack19</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/201">rust-scraper/scraper#201</a></li>
<li>Now that ego-tree's Traverse is a fused iterator, so are our Select
and Text by <a
href="https://github.com/adamreichold"><code>@​adamreichold</code></a>
in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/202">rust-scraper/scraper#202</a></li>
<li>Bump indexmap from 2.4.0 to 2.5.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/204">rust-scraper/scraper#204</a></li>
<li>Bump ego-tree from 0.8.0 to 0.9.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/205">rust-scraper/scraper#205</a></li>
<li>Bump indexmap from 2.5.0 to 2.6.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/211">rust-scraper/scraper#211</a></li>
<li>Bump selectors, cssparser and html5ever by <a
href="https://github.com/adamreichold"><code>@​adamreichold</code></a>
in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/214">rust-scraper/scraper#214</a></li>
<li>Handle missing Token::Delim variant when rendering errors by <a
href="https://github.com/adamreichold"><code>@​adamreichold</code></a>
in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/213">rust-scraper/scraper#213</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/LoZack19"><code>@​LoZack19</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-scraper/scraper/pull/199">rust-scraper/scraper#199</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-scraper/scraper/compare/v0.20.0...v0.21.0">https://github.com/rust-scraper/scraper/compare/v0.20.0...v0.21.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="93afdd96be"><code>93afdd9</code></a>
Version 0.21.0</li>
<li><a
href="9843bc8efe"><code>9843bc8</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/213">#213</a>
from rust-scraper/fix-issue221</li>
<li><a
href="2ede12e4af"><code>2ede12e</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/214">#214</a>
from rust-scraper/bump-selectors</li>
<li><a
href="fddd90ed14"><code>fddd90e</code></a>
Bump html5ever to its current stable version and adjust our usage
accordingly</li>
<li><a
href="7d422d8f82"><code>7d422d8</code></a>
Bump selectors and cssparser to their current stable versions and adjust
our ...</li>
<li><a
href="53ac848a12"><code>53ac848</code></a>
Handle missing Token::Delim variant when rendering errors</li>
<li><a
href="e0d4ea7a33"><code>e0d4ea7</code></a>
Bump indexmap from 2.5.0 to 2.6.0</li>
<li><a
href="c3735b29dc"><code>c3735b2</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/205">#205</a>
from rust-scraper/dependabot/cargo/ego-tree-0.9.0</li>
<li><a
href="faca0a9644"><code>faca0a9</code></a>
Merge pull request <a
href="https://redirect.github.com/causal-agent/scraper/issues/204">#204</a>
from rust-scraper/dependabot/cargo/indexmap-2.5.0</li>
<li><a
href="b945d5af6c"><code>b945d5a</code></a>
Bump ego-tree from 0.8.0 to 0.9.0</li>
<li>Additional commits viewable in <a
href="https://github.com/causal-agent/scraper/compare/v0.20.0...v0.21.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=scraper&package-manager=cargo&previous-version=0.20.0&new-version=0.21.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:31:21 +08:00
ef55367224 Bump softprops/action-gh-release from 2.0.8 to 2.0.9 (#14273)
Bumps
[softprops/action-gh-release](https://github.com/softprops/action-gh-release)
from 2.0.8 to 2.0.9.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/softprops/action-gh-release/releases">softprops/action-gh-release's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.9</h2>
<h2>What's Changed</h2>
<ul>
<li>maintenance release with updated dependencies</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/kbakdev"><code>@​kbakdev</code></a> made
their first contribution in <a
href="https://redirect.github.com/softprops/action-gh-release/pull/521">softprops/action-gh-release#521</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/softprops/action-gh-release/compare/v2...v2.0.9">https://github.com/softprops/action-gh-release/compare/v2...v2.0.9</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md">softprops/action-gh-release's
changelog</a>.</em></p>
<blockquote>
<h2>2.0.9</h2>
<ul>
<li>maintenance release with updated dependencies</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e7a8f85e1c"><code>e7a8f85</code></a>
chore: release 2.0.9</li>
<li><a
href="04afa1392e"><code>04afa13</code></a>
chore(deps): bump actions/setup-node from 4.0.4 to 4.1.0 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/535">#535</a>)</li>
<li><a
href="894468a03c"><code>894468a</code></a>
chore(deps): bump actions/checkout from 4.2.1 to 4.2.2 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/534">#534</a>)</li>
<li><a
href="3bd23aa9ec"><code>3bd23aa</code></a>
chore(deps): bump <code>@​types/node</code> from 22.7.5 to 22.8.2 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/533">#533</a>)</li>
<li><a
href="21eb2f9554"><code>21eb2f9</code></a>
chore(deps): bump <code>@​types/jest</code> from 29.5.13 to 29.5.14 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/532">#532</a>)</li>
<li><a
href="cd8b57e572"><code>cd8b57e</code></a>
remove unused imports (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/521">#521</a>)</li>
<li><a
href="820a5adc43"><code>820a5ad</code></a>
chore(deps): bump actions/checkout from 4.2.0 to 4.2.1 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/522">#522</a>)</li>
<li><a
href="9d04f90cd8"><code>9d04f90</code></a>
chore(deps): bump <code>@​octokit/plugin-throttling</code> from 9.3.1 to
9.3.2 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/523">#523</a>)</li>
<li><a
href="aaf1d5f6d5"><code>aaf1d5f</code></a>
chore(deps): bump <code>@​actions/core</code> from 1.10.1 to 1.11.1 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/524">#524</a>)</li>
<li><a
href="7d33a7ecc3"><code>7d33a7e</code></a>
chore(deps): bump <code>@​types/node</code> from 22.5.5 to 22.7.5 (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/525">#525</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/softprops/action-gh-release/compare/v2.0.8...v2.0.9">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| softprops/action-gh-release | [< 0.2, > 0.1.13] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=softprops/action-gh-release&package-manager=github_actions&previous-version=2.0.8&new-version=2.0.9)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:30:55 +08:00
a60f454154 No longer autoload deprecated-dirs (#14242)
# Description

Follow-up to #13842. In that commit, using one of the `dirs`/`shells`
aliases would notify the user that it would no longer be autoloaded in
future releases. This is the removal stage.

Side-benefit: Additional 1ms+ load time improvement

# User-Facing Changes

Breaking-change - `dirs` aliases are no longer autoloaded.

Users can either choose to continue using the aliases by adding the
following to the startup:

```nu
use std/dirs shells-aliases *
```

Alternatively, users can use the `dirs` subcommands (rather than the
aliases) with:

```nu
use std/dirs
```
2024-11-05 21:53:41 +01:00
7a7df3e635 Switch to unicase's to_folded_case (#14255)
# Description
Switch `to_folded_case` to a proper case fold instead of
`str::to_lowercase` now that unicase exposes its `to_folded_case`
method.

Rel: #10884, https://github.com/seanmonstar/unicase/issues/61

# User-Facing Changes

Case insensitive sorts now do proper case folding.

Old behavior:

```nushell
[dreißig DREISSIG] | sort -i
# => ╭───┬──────────╮
# => │ 0 │ DREISSIG │
# => │ 1 │ dreißig  │
# => ╰───┴──────────╯
```

New behavior:

```nushell
[dreißig DREISSIG] | sort -i
# => ╭───┬──────────╮
# => │ 0 │ dreißig  │
# => │ 1 │ DREISSIG │
# => ╰───┴──────────╯
```
2024-11-05 09:39:08 +01:00
62198a29c2 Make to text line endings consistent for list (streams) (#14166)
# Description
Fixes #14151 where `to text` treats list streams and lists values
differently.

# User-Facing Changes
New line is always added after items in a list or record except for the
last item if the `--no-newline` flag is provided.
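
A minimal sketch of the behaviour described above (assuming the `--no-newline` flag on `to text`):

```nushell
[a b c] | to text
# => a
# => b
# => c
# (each item, including the last, is followed by a newline)

[a b c] | to text --no-newline
# => same output, but with no trailing newline after the last item
```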
2024-11-05 09:33:54 +01:00
e87a35104a Remove as_i64 and as_f64 (#14258)
# Description
Turns out there are duplicate conversion functions: `as_i64` and
`as_f64`. In most cases, these can be replaced with `as_int` and
`as_float`, respectively.
2024-11-05 09:28:56 +01:00
1e051e573d fix $env.FILE_PWD and $env.CURRENT_FILE inside use (#14101)
# Description
Fixes: https://github.com/nushell/nushell/issues/13425

It's just a follow-up to #13958.

User input can be a directory; in this case, we need to use the return
value of `find_in_dirs_env` carefully, so I renamed
maybe_file_path to maybe_file_path_or_dir to emphasize it.


# User-Facing Changes
`$env.FILE_PWD` and `$env.CURRENT_FILE` will be more reliable to use.
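
For example, a module can now reliably record its own location when loaded with `use` (the `SOME_MODULE_DIR` name below is made up for illustration):

```nushell
# inside some_module/mod.nu
export-env {
    # FILE_PWD points at this file's directory while the module is being loaded
    $env.SOME_MODULE_DIR = $env.FILE_PWD
}
```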

# Tests + Formatting
Added 2 tests
2024-11-05 14:12:01 +08:00
e172a621f3 Consolidate uses of test-case to rstest (#14250)
With #14083 a dependency on `test-case` was introduced; we already
depend on the more exp(a/e)nsive `rstest` for our macro-based test case
generation (with fixtures on top).

To save on some compilation time for proc macros, unify on `rstest`.
2024-11-04 19:07:59 +01:00
9f09930834 Div, mod, and floor div overhaul (#14157)
# Description
Dividing two ints can currently return either an int or a float. Not
having a single return type for an operation between two types seems
problematic. Additionally, the type signature for division says that
dividing two ints returns only an int which does not match the current
implementation (it can also return a float). This PR changes division
between almost all types to return a float (except for `filesize /
number` or `duration / number`, since there are no float representations
for these types).

Currently, floor division between certain types is not implemented even
though the type signature allows it. Also, the current implementation of
floor division uses a combination of clamping and flooring rather than
simply performing floor division which this PR fixes. Additionally, the
signature was changed so that `int // float`, `float // int`, and `float
// float` now return float instead of int. This matches the automatic
float promotion in the rest of the operators (as well as how Python does
floor division which I think is the original inspiration).

Since regular division has always returned fractional values (and now
returns a float to reflect that), `mod` is now defined in terms of floor
division. That is, `D // d = q`, `D mod d = r`, and `D = d * q + r `.
This is just like the `%` operator in Python, which is also based off
floor division (at least for ints and floats). Additionally,
implementations missing from `mod`'s current type signature have been
added (`duration mod int` and `duration mod float`).

This PR also overhauls the overflow checking and errors for div, mod,
and floor div. If an operation overflows, it will now cause an error.

# User-Facing Changes
- Div now returns a float in most cases.
- Floor division now actually does floor division.
- Floor division now does automatic float promotion, returning a float
in more instances.
- Floor division now actually allows division with filesize and
durations as its type signature claimed.
- Mod is now defined and implemented in terms of floor division rather
than truncating division.
- Mod now actually allows filesize and durations as its type signature
claimed.
- Div, mod, and floor div now all have proper overflow checks.

## Examples

When the divisor and the dividend have the same sign, the quotient and
remainder will be the same as before. (Except that this PR will give
more accurate results, since it does not do an intermediate float
conversion). If the signs of the divisor and dividend are different,
then the results will be different, or rather actually correct.

Before:

```nu
let q = 8 // -3 # -3
let r = 8 mod -3 # 2
8 == $q * -3 + $r # false
```

After:

```nu
let q = 8 // -3 # -3
let r = 8 mod -3 # -1
8 == $q * -3 + $r # true
```


Before:

```nu
let q = -8 // 3 # -3
let r = -8 mod 3 # -2
-8 == $q * 3 + $r # false
```

After:

```nu
let q = -8 // 3 # -3
let r = -8 mod 3 # 1
-8 == $q * 3 + $r # true
```

# Tests + Formatting
Added a few tests.

# After Submitting
Probably update the docs.
2024-11-04 18:03:48 +01:00
20c2de9eed Empty rest args match should be an empty list (#14246)
Fixes #14145 

# User-Facing Changes
An empty rest match would be `null` previously. Now it will be an empty
list.
This is a breaking change for any scripts relying on the old behavior.

Example script:
```nu
match [1] {
  [_ ..$rest] => {
    match $rest {
      null => { "old" }
      [] => { "new" }
    }
  } 
}
```
This expression would evaluate to "old" on current nu versions and "new"
with this patch.
2024-11-04 18:03:26 +01:00
22ca5a6b8d Add tests to test the --max-age arg in http commands (#14245)
- fixes #14241

Signed-off-by: Alex Johnson <alex.kattathra.johnson@gmail.com>
2024-11-04 05:41:44 -06:00
8b19399b13 support binary input in length (#14224)
Closes #13874

# User-Facing Changes

`length` now supports binary input:

```nushell
> random binary 1kb | length
1000
```
2024-11-04 03:39:24 +00:00
d289c773d0 Change --max-time arg for http commands to use Duration type (#14237)
# Description
Fixes #14222. Adds the ability to set the duration unit for `--max-time` when using the `http`
commands.
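
Hedged example of the new flag type (URLs are placeholders):

```nushell
# --max-time now takes a duration value instead of a bare number
http get --max-time 30sec https://example.com
http post --max-time 2min https://example.com/upload "payload"
```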

Signed-off-by: Alex Johnson <alex.kattathra.johnson@gmail.com>
2024-11-03 18:35:08 +00:00
a935e0720f no deref in touch (#14214)

# Description
Adds --no-deref flag to `touch`. Nice and backwards compatible, and I
get to touch symlinks. I still don't get to set their dates directly,
but maybe that'll come with utouch.

Some sadness in the implementation, since `set_symlink_file_times`
doesn't take Option values and we call it twice with the old "read"
values from reference (or now, if missing). This shouldn't be a big
concern since `touch` already did two calls if you set both mtime and
atime. Also, `--no-deref` applies both to the reference file, and to the
target file. No splitting them up, because that's silly.

Can always bikeshed. I nicked `--no-deref` from the uutils flag, and
made the short flag `-d` because it obviously can't be `-h`. I thought
of `-S` like in `glob`, for the "negative/filter out" uppercase short
letters. Ultimately I don't think it matters much.

Should fix #14212 since it's not really tied to uutils, besides the
comment about setting a `datetime` value directly.
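
A rough usage sketch (the symlink is created with an external `ln` here, Unix-style, purely for illustration):

```nushell
^ln -s target.txt link.txt
# update the symlink's own timestamps instead of following it to target.txt
touch --no-deref link.txt
# short form of the new flag
touch -d link.txt
```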

# User-Facing Changes
New flag.

# Tests + Formatting
Maybe.

2024-11-03 00:56:05 -04:00
1c3ff179bc Improve CellPath display output (#14197)

# Description

Fixes: #13362

This PR fixes the `Display` impl for `CellPath`, as laid out in #13362
and #14090:

```nushell
> $.0."0"
$.0."0"

> $."foo.bar".baz
$."foo.bar".baz
```

# User-Facing Changes

Cell-paths are now printed using the same `$.` notation that is used to
create them, and ambiguous column names are properly quoted.

2024-11-02 10:28:10 -05:00
ccab3d6b6e Improve comment wording in run_external.rs (#14230)
verb 'setup' -> 'set up'

setup as verb [is a misspelling of set
up](https://en.wiktionary.org/wiki/setup#Verb)

* [verb: set up](https://en.wiktionary.org/wiki/set_up)
* [noun: setup](https://en.wiktionary.org/wiki/setup)

*I split this from #14229 typo corrections because 'setup' is not as
clear-cut wrong. Having read the dictionary pages (linked) I'm even more
confident in this change being correct rather than only subjectively
better.*

Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
2024-11-01 18:02:25 +01:00
3e39fae6e1 Fix comment typos in run_external.rs (#14229) 2024-11-01 17:55:21 +01:00
d575fd1c3a Tests for new Alpine and Debian image builds (#14225) 2024-11-01 07:45:20 +08:00
0a2fb137af don't run subcommand if it's surrounded with backtick quote (#14210)
# Description
Fixes: #14202
After looking into the issue, I think #13910 it's not good to cut the
span if it's in external argument.
This pr is somehow revert the change, and fix
https://github.com/nushell/nushell/issues/13431 in another way.

It introduces a new state named `State::BackTickQuote`: if an external
arg includes a backtick quote, it enters that state, so the backtick quotes
won't become part of the body of a string.

# User-Facing Changes
### Before
```nushell
> ^echo `(echo aa)`
aa
> ^echo `"aa"`   # maybe it's not right to remove the inner quote.
aa
```
### After
```nushell
> ^echo `(echo aa)`
(echo aa)
> ^echo `"aa"`    # inner quote is keeped if there are backtick quote outside.
"aa"
```

# Tests + Formatting
Added 3 tests.
2024-10-31 16:13:05 +01:00
4907575d3d Bump chrono-tz from 0.8.6 to 0.10.0 (#14205)
Bumps [chrono-tz](https://github.com/chronotope/chrono-tz) from 0.8.6 to
0.10.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chronotope/chrono-tz/releases">chrono-tz's
releases</a>.</em></p>
<blockquote>
<h2>chrono-tz v0.10.0: 2024b</h2>
<p><strong>TZDB</strong> version 2024b (2024-09-05).</p>
<h2>Changes</h2>
<ul>
<li>Make <code>OffsetName::abbreviation</code> return an
<code>Option</code>.
This reflects that numeric values such as <code>+11</code> are no longer
encoded in the upstream TZDB as abbreviations (<a
href="https://redirect.github.com/chronotope/chrono-tz/issues/185">#185</a>).</li>
</ul>
<h2>TZDB 2024b</h2>
<blockquote>
<p>The 2024b release of the tz code and data is available.</p>
<p>This release is prompted by the accumulated weight of many non-urgent
changes to both code and data. It changes one timestamp abbreviation,
for the long-obsolete System V setting TZ='MET'; see below. Otherwise,
the timestamps affected by this release all predate April 2008, so you
can skip this release if your application uses only tzdata and does not
use older timestamps.</p>
<p>This release contains the following changes:</p>
<h3>Briefly:</h3>
<p>Improve historical data for Mexico, Mongolia, and Portugal.
System V names are now obsolescent.
The main data form now uses %z.
The code now conforms to RFC 8536 for early timestamps.
Support POSIX.1-2024, which removes asctime_r and ctime_r.
Assume POSIX.2-1992 or later for shell scripts.
SUPPORT_C89 now defaults to 1.</p>
<h3>Changes to past timestamps</h3>
<p>Asia/Choibalsan is now an alias for Asia/Ulaanbaatar rather than
being a separate Zone with differing behavior before April 2008. This
seems better given our wildly conflicting information about Mongolia's
time zone history. (Thanks to Heitor David Pinto.)</p>
<p>Historical transitions for Mexico have been updated based on official
Mexican decrees. The affected timestamps occur during the years
1921-1927, 1931, 1945, 1949-1970, and 1981-1997. The affected zones are
America/Bahia_Banderas, America/Cancun, America/Chihuahua,
America/Ciudad_Juarez, America/Hermosillo, America/Mazatlan,
America/Merida, America/Mexico_City, America/Monterrey, America/Ojinaga,
and America/Tijuana. (Thanks to Heitor David Pinto.)</p>
<p>Historical transitions for Portugal, represented by Europe/Lisbon,
Atlantic/Azores, and Atlantic/Madeira, have been updated based on a
close reading of old Portuguese legislation, replacing previous data
mainly originating from Whitman and Shanks &amp; Pottenger. These
changes affect a few transitions in 1917-1921, 1924, and 1940 throughout
these regions by a few hours or days, and various timestamps between
1977 and 1993 depending on the region. In particular, the Azores and
Madeira did not observe DST from 1977 to 1981. Additionally, the
adoption of standard zonal time in former Portuguese colonies have been
adjusted: Africa/Maputo in 1909, and Asia/Dili by 22 minutes at the
start of 1912. (Thanks to Tim Parenti.)</p>
<h3>Changes to past tm_isdst flags</h3>
<p>The period from 1966-04-03 through 1966-10-02 in Portugal is now
modeled as DST, to more closely reflect how contemporaneous changes in
law entered into force.</p>
<h3>Changes to data</h3>
<p>Names present only for compatibility with UNIX System V (last
released in the 1990s) have been moved to 'backward'. These names, which
for post-1970 timestamps mostly just duplicate data of geographical
names, were confusing downstream uses. Names moved to 'backward' are now
links to geographical names. This affects behavior for TZ='EET' for some
pre-1981 timestamps, for TZ='CET' for some pre-1947 timestamps, and for
TZ='WET' for some pre-1996 timestamps. Also, TZ='MET' now behaves like
TZ='CET' and so uses the abbreviation &quot;CET&quot; rather than
&quot;MET&quot;. Those needing the previous TZDB behavior, which does
not match any real-world clocks, can find the old entries in 'backzone'.
(Problem reported by Justin Grant.)</p>
<p>The main source files' time zone abbreviations now use %z, supported
by zic since release 2015f and used in vanguard form since release
2022b. For example, America/Sao_Paulo now contains the zone continuation
line &quot;-3:00 Brazil %z&quot;, which is less error prone than the old
&quot;-3:00 Brazil -03/-02&quot;. This does not change the represented
data: the generated TZif files are unchanged. Rearguard form still
avoids %z, to support obsolescent parsers.</p>
<p>Asia/Almaty has been removed from zonenow.tab as it now agrees with
Asia/Tashkent for future timestamps, due to Kazakhstan's 2024-02-29 time
zone change. Similarly, America/Scoresbysund has been removed, as it now
agrees with America/Nuuk due to its 2024-03-31 time zone change.</p>
</blockquote>
<h2>chrono-tz v0.9.0: 2024a</h2>
<p><strong>TZDB</strong> version <a
href="https://mm.icann.org/pipermail/tz-announce/2024-February/000081.html">2024a</a>
(2024-02-01).</p>
<h2>Changes</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8450e59393"><code>8450e59</code></a>
Bump chrono-tz-build to 0.4.0</li>
<li><a
href="6154fd2513"><code>6154fd2</code></a>
Bump version number to 0.10.0</li>
<li><a
href="2d0c489781"><code>2d0c489</code></a>
Update tz submodule to 2024b</li>
<li><a
href="0acd431a54"><code>0acd431</code></a>
Make timezone name optional</li>
<li><a
href="2e955104a8"><code>2e95510</code></a>
Test numeric time zone names</li>
<li><a
href="2a675e2f84"><code>2a675e2</code></a>
chrono-tz-build: resolve fixme</li>
<li><a
href="ca52db80b5"><code>ca52db8</code></a>
chrono-tz-build: make phf gated by case-insensitive feature</li>
<li><a
href="e2f4215139"><code>e2f4215</code></a>
Prepare parse-zoneinfo v0.3.1</li>
<li><a
href="e6f87e20cc"><code>e6f87e2</code></a>
Symlink LICENSE in parse-zoneinfo</li>
<li><a
href="6e58ce21ca"><code>6e58ce2</code></a>
Remove <code>AsciiExt</code> import</li>
<li>Additional commits viewable in <a
href="https://github.com/chronotope/chrono-tz/compare/v0.8.6...v0.10.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=chrono-tz&package-manager=cargo&previous-version=0.8.6&new-version=0.10.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-30 10:14:04 -07:00
4200df21d3 Add RELEASE_QUERY_API build arg for Dockerfiles (#14209)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Add `RELEASE_QUERY_API` build arg for all Dockerfiles, with default
value set to
`https://api.github.com/repos/nushell/nushell/releases/latest`, So that
we can build the nightly images with the same Dockerfile but a different
`RELEASE_QUERY_API` build arg.

A nightly image built with the new Dockerfile can be found here:
https://github.com/orgs/nushell/packages/container/nushell/297473460?tag=nightly

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

The default behavior remains the same as before

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

Those who want to build a Docker image with the Nushell nightly release
installed can run:
```nu
let queryApi = http get https://api.github.com/repos/nushell/nightly/releases | sort-by -r created_at | get 0.url
docker buildx build --build-arg $'RELEASE_QUERY_API=($queryApi)' ...
```
2024-10-30 06:45:15 -05:00
Dom
e0bb5a2bd2 Allow using function keys F21-F35 for keybindings (#14201)
I feel like the limitations on what can be bound are too strict.

If an app _does_ support the Kitty keyboard protocol (Neovim,
Reedline), I can map the function keys (F27-F35 as listed below).

In Reedline everything works perfectly. The issue is for some reason we
limit the keys that can be bound in Nushell, so I am unable to do that.
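
For illustration, a sketch of the kind of binding this would allow; the keycode spelling `f21`, the binding name, and the bound event are assumptions for the example, not taken from this PR:

```nushell
# Hypothetical binding for F21 (assumes the keycode is written `f21`).
$env.config.keybindings = ($env.config.keybindings | append {
    name: f21_clear_screen
    modifier: none
    keycode: f21
    mode: emacs
    event: { send: clearscreen }
})
```
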
2024-10-30 12:22:47 +01:00
a6c2c685bc Bump trash from 5.1.1 to 5.2.0 (#14206)
Bumps [trash](https://github.com/ArturKovacs/trash) from 5.1.1 to 5.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/ArturKovacs/trash/releases">trash's
releases</a>.</em></p>
<blockquote>
<h2>v5.2.0</h2>
<h3>New Features</h3>
<ul>
<li>
<p>Short circuiting check for empty trash
<code>is_empty()</code> is a short circuiting function that checks if
the trash is
empty on Freedesktop compatible systems and Windows.</p>
<p>The main purpose of <code>is_empty()</code> is to avoid evaluating
the entire trash
context when the caller is only interested in whether the trash is empty
or not. This is especially useful for full trashes with many items.</p>
</li>
</ul>
<h3>Commit Statistics</h3>
<ul>
<li>2 commits contributed to the release.</li>
<li>56 days passed between releases.</li>
<li>1 commit was understood as <a
href="https://www.conventionalcommits.org">conventional</a>.</li>
<li>0 issues like '(#ID)' were seen in commit messages</li>
</ul>
<h3>Commit Details</h3>
<!-- raw HTML omitted -->
<!-- raw HTML omitted -->
<ul>
<li><strong>Uncategorized</strong>
<ul>
<li>Merge pull request <a
href="https://redirect.github.com/ArturKovacs/trash/issues/120">#120</a>
from joshuamegnauth54/feat-short-circuiting-is-empty (0120bbe)</li>
<li>Short circuiting check for empty trash (6d59fa9)</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Byron/trash-rs/blob/master/CHANGELOG.md">trash's
changelog</a>.</em></p>
<blockquote>
<h2>5.2.0 (2024-10-26)</h2>
<h3>New Features</h3>
<ul>
<li>
<p><!-- raw HTML omitted --> Short circuiting check for empty trash
<code>is_empty()</code> is a short circuiting function that checks if
the trash is
empty on Freedesktop compatible systems and Windows.</p>
<p>The main purpose of <code>is_empty()</code> is to avoid evaluating
the entire trash
context when the caller is only interested in whether the trash is empty
or not. This is especially useful for full trashes with many items.</p>
</li>
</ul>
<h3>Commit Statistics</h3>
<!-- raw HTML omitted -->
<ul>
<li>2 commits contributed to the release.</li>
<li>56 days passed between releases.</li>
<li>1 commit was understood as <a
href="https://www.conventionalcommits.org">conventional</a>.</li>
<li>0 issues like '(#ID)' were seen in commit messages</li>
</ul>
<h3>Commit Details</h3>
<!-- raw HTML omitted -->
<!-- raw HTML omitted -->
<ul>
<li><strong>Uncategorized</strong>
<ul>
<li>Merge pull request <a
href="https://redirect.github.com/ArturKovacs/trash/issues/120">#120</a>
from joshuamegnauth54/feat-short-circuiting-is-empty (<a
href="0120bbe668"><code>0120bbe</code></a>)</li>
<li>Short circuiting check for empty trash (<a
href="6d59fa9394"><code>6d59fa9</code></a>)</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1a0fc5908a"><code>1a0fc59</code></a>
Release trash v5.2.0</li>
<li><a
href="0120bbe668"><code>0120bbe</code></a>
Merge pull request <a
href="https://redirect.github.com/ArturKovacs/trash/issues/120">#120</a>
from joshuamegnauth54/feat-short-circuiting-is-empty</li>
<li><a
href="6d59fa9394"><code>6d59fa9</code></a>
feat: Short circuiting check for empty trash</li>
<li>See full diff in <a
href="https://github.com/ArturKovacs/trash/compare/v5.1.1...v5.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=trash&package-manager=cargo&previous-version=5.1.1&new-version=5.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-30 10:51:30 +08:00
1e2fa68db0 Bump fancy-regex from 0.13.0 to 0.14.0 (#14207)
Bumps [fancy-regex](https://github.com/fancy-regex/fancy-regex) from
0.13.0 to 0.14.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fancy-regex/fancy-regex/releases">fancy-regex's
releases</a>.</em></p>
<blockquote>
<h2>0.14.0</h2>
<h3>Added</h3>
<ul>
<li>Add <code>split</code>, <code>splitn</code> methods to
<code>Regex</code> to split a string into substrings (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/140">#140</a>)</li>
<li>Add <code>case_insensitive</code> method to
<code>RegexBuilder</code> to force case-insensitive mode (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/132">#132</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Bump bit-set dependency to 0.8 (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/139">#139</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/fancy-regex/fancy-regex/blob/main/CHANGELOG.md">fancy-regex's
changelog</a>.</em></p>
<blockquote>
<h2>[0.14.0] - 2024-10-24</h2>
<h3>Added</h3>
<ul>
<li>Add <code>split</code>, <code>splitn</code> methods to
<code>Regex</code> to split a string into substrings (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/140">#140</a>)</li>
<li>Add <code>case_insensitive</code> method to
<code>RegexBuilder</code> to force case-insensitive mode (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/132">#132</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Bump bit-set dependency to 0.8 (<a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/139">#139</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="810a8f3c16"><code>810a8f3</code></a>
Version 0.14.0</li>
<li><a
href="33597bdd7b"><code>33597bd</code></a>
Merge pull request <a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/145">#145</a>
from fancy-regex/bump-tarpaulin</li>
<li><a
href="1a6c0f813d"><code>1a6c0f8</code></a>
Bump tarpaulin</li>
<li><a
href="2f0f000de9"><code>2f0f000</code></a>
Merge pull request <a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/144">#144</a>
from k94-ishi/dev/splitn</li>
<li><a
href="689a845112"><code>689a845</code></a>
Merge pull request <a
href="https://redirect.github.com/fancy-regex/fancy-regex/issues/132">#132</a>
from jonperry-dev/casing_option</li>
<li><a
href="f0183b46a6"><code>f0183b4</code></a>
fix check</li>
<li><a
href="988b357493"><code>988b357</code></a>
fmt</li>
<li><a
href="52105243c1"><code>5210524</code></a>
moved tests to tests/regex_options.rs</li>
<li><a
href="ce4ab06ee3"><code>ce4ab06</code></a>
fmt</li>
<li><a
href="1039f71083"><code>1039f71</code></a>
added self to authors</li>
<li>Additional commits viewable in <a
href="https://github.com/fancy-regex/fancy-regex/compare/0.13.0...0.14.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=fancy-regex&package-manager=cargo&previous-version=0.13.0&new-version=0.14.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-30 10:51:21 +08:00
599f16f15c Bump unicase from 2.7.0 to 2.8.0 (#14208)
Bumps [unicase](https://github.com/seanmonstar/unicase) from 2.7.0 to
2.8.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d98191176d"><code>d981911</code></a>
v2.8.0</li>
<li><a
href="b825f9ed9f"><code>b825f9e</code></a>
upgrade to unicode 16</li>
<li><a
href="a86a4669aa"><code>a86a466</code></a>
update to 2018 edition</li>
<li><a
href="8dc84ec6f1"><code>8dc84ec</code></a>
Make license metadata SPDX compliant</li>
<li><a
href="07f81c14cd"><code>07f81c1</code></a>
feat: add to_folded_case() method</li>
<li>See full diff in <a
href="https://github.com/seanmonstar/unicase/compare/v2.7.0...v2.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=unicase&package-manager=cargo&previous-version=2.7.0&new-version=2.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-30 10:51:06 +08:00
91da168251 Bump crate-ci/typos from 1.26.0 to 1.26.8 (#14203)
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.26.0 to
1.26.8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.26.8</h2>
<h2>[1.26.8] - 2024-10-24</h2>
<h2>v1.26.3</h2>
<h2>[1.26.3] - 2024-10-24</h2>
<h3>Fixes</h3>
<ul>
<li>Accept <code>additionals</code></li>
</ul>
<h2>v1.26.2</h2>
<h2>[1.26.2] - 2024-10-24</h2>
<h3>Fixes</h3>
<ul>
<li>Accept <code>tesselate</code> variants</li>
</ul>
<h2>v1.26.1</h2>
<h2>[1.26.1] - 2024-10-23</h2>
<h3>Fixes</h3>
<ul>
<li>Respect <code>--force-exclude</code> for binary files</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.26.8] - 2024-10-24</h2>
<h2>[1.26.7] - 2024-10-24</h2>
<h2>[1.26.6] - 2024-10-24</h2>
<h2>[1.26.5] - 2024-10-24</h2>
<h2>[1.26.4] - 2024-10-24</h2>
<h2>[1.26.3] - 2024-10-24</h2>
<h3>Fixes</h3>
<ul>
<li>Accept <code>additionals</code></li>
</ul>
<h2>[1.26.2] - 2024-10-24</h2>
<h3>Fixes</h3>
<ul>
<li>Accept <code>tesselate</code> variants</li>
</ul>
<h2>[1.26.1] - 2024-10-23</h2>
<h3>Fixes</h3>
<ul>
<li>Respect <code>--force-exclude</code> for binary files</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0d9e0c2c1b"><code>0d9e0c2</code></a>
chore: Release</li>
<li><a
href="e5385b07a0"><code>e5385b0</code></a>
chore(ci): Fix new release process</li>
<li><a
href="f08d1171e2"><code>f08d117</code></a>
chore: Release</li>
<li><a
href="e6e172498c"><code>e6e1724</code></a>
chore(ci): Fix new release process</li>
<li><a
href="02afc59fd4"><code>02afc59</code></a>
chore: Release</li>
<li><a
href="f981a1cd20"><code>f981a1c</code></a>
chore(ci): Fix new release process</li>
<li><a
href="afbc96c5d3"><code>afbc96c</code></a>
chore: Release</li>
<li><a
href="d3dcaaeb2d"><code>d3dcaae</code></a>
chore(ci): Fix new release process</li>
<li><a
href="fb8217bd5e"><code>fb8217b</code></a>
chore: Release</li>
<li><a
href="88ea8ea67d"><code>88ea8ea</code></a>
chore(ci): Stage releases until done</li>
<li>Additional commits viewable in <a
href="https://github.com/crate-ci/typos/compare/v1.26.0...v1.26.8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.26.0&new-version=1.26.8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-30 10:50:41 +08:00
e104bccfb9 Drop once_cell dependency (#14198)
This PR drops the `once_cell` dependency from all Nu crates, replacing
uses of the
[`Lazy`](https://docs.rs/once_cell/latest/once_cell/sync/struct.Lazy.html)
type with its `std` equivalent,
[`LazyLock`](https://doc.rust-lang.org/std/sync/struct.LazyLock.html).
2024-10-29 17:33:46 +01:00
74bd0e32cc ansi -l includes previews of attributes (e.g., bold, dimmed, blink, etc.) (#14196)
# Description

A few simple changes:

* Extends the range of previews to include the attributes - Bold,
italic, underline, etc.
* Also resets the colors before *every* preview. Previously we weren't
doing this, so the "string" theme color was bleeding into a few previews
(mostly, if not all, `bg` ones). Now the "default foreground" color is
used for any preview without an explicit foreground color.
* Moves the preview code into the `if use_ansi_coloring` block as a
stupid-nitpick optimization. There's no reason to populate the previews
when they are explicitly not shown with `use_ansi_coloring: false`.
* Moves `reset` to the bottom of the attribute list so that it isn't
previewed. This is a bit of a nitpick as well since internally we send
the same code for both a `reset` and `attr_normal` (which is correct),
but semantically a `reset` doesn't seem like a "previewable" thing,
whereas "normal" text can be demonstrated with a preview.

# User-Facing Changes

`ansi -l` now shows additional previews
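
For a quick look at the result, a minimal sketch (`first 10` only trims the listing):

```nushell
# List ANSI names together with their previews.
ansi --list | first 10
```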

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-10-29 09:14:54 -05:00
03015ed33f Show ? for optional entries when displaying CellPaths (#14042)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

This PR makes the `Display` implementation for `CellPath` show a `?`
suffix on every optional entry, which makes the output consistent with
the language syntax.

Before this PR, the printing of cell paths was confusing, e.g. `$.x` and
`$.x?` were both printed as `x`. Now, the second one is printed as `x?`.
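
A minimal REPL sketch of the change (the exact rendering is assumed from the description above):

```nushell
# Before this PR both of these displayed as `x`; the optional one now keeps its `?`.
> $.x?
x?
```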

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

The formatting of cell paths now matches the syntax used to create them,
reducing confusion.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

All tests pass, including `stdlib` tests.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-29 08:08:55 -05:00
79ea70d4ec Fix quoting in to nuon and refactor quoting functions (#14180)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

This PR fixes the quoting and escaping of column names in `to nuon`.
Before the PR, column names with quotes inside them would get quoted,
but not escaped:

```nushell
> { 'a"b': 2 } | to nuon
{ "a"b": 2 }

> { 'a"b': 2 } | to nuon | from nuon
Error:   × error when loading nuon text
   ╭─[entry #1:1:27]
 1 │ { "a\"b": 2 } | to nuon | from nuon
   ·                           ────┬────
   ·                               ╰── could not load nuon text
   ╰────

Error:   × error when parsing nuon text
   ╭─[entry #1:1:27]
 1 │ { "a\"b": 2 } | to nuon | from nuon
   ·                           ────┬────
   ·                               ╰── could not parse nuon text
   ╰────

Error:   × error when parsing
   ╭────
 1 │ {"a"b": 2}
   ·          ┬
   ·          ╰── Unexpected end of code.
   ╰────

> [['a"b']; [2] [3]] | to nuon
[["a"b"]; [2], [3]]

> [['a"b']; [2] [3]] | to nuon | from nuon
Error:   × error when loading nuon text
   ╭─[entry #1:1:32]
 1 │ [['a"b']; [2] [3]] | to nuon | from nuon
   ·                                ────┬────
   ·                                    ╰── could not load nuon text
   ╰────

Error:   × error when parsing nuon text
   ╭─[entry #1:1:32]
 1 │ [['a"b']; [2] [3]] | to nuon | from nuon
   ·                                ────┬────
   ·                                    ╰── could not parse nuon text
   ╰────

Error:   × error when parsing
   ╭────
 1 │ [["a"b"]; [2], [3]]
   ·                   ┬
   ·                   ╰── Unexpected end of code.
   ╰────
```

After this PR, the quote is escaped properly:

```nushell
> { 'a"b': 2 } | to nuon
{ "a\"b": 2 }

> { 'a"b': 2 } | to nuon | from nuon
╭─────┬───╮
│ a"b │ 2 │
╰─────┴───╯

> [['a"b']; [2] [3]] | to nuon
[["a\"b"]; [2], [3]]

> [['a"b']; [2] [3]] | to nuon | from nuon
╭─────╮
│ a"b │
├─────┤
│   2 │
│   3 │
╰─────╯
```

The cause of the issue was that `to nuon` simply wrapped column names in
`'"'` instead of calling `escape_quote_string`.

As part of this change, I also moved the functions related to quoting
(`needs_quoting` and `escape_quote_string`) into `nu-utils`, since
previously they were defined in very ad-hoc places (and, in the case of
`escape_quote_string`, it was defined multiple times with the same
body!).

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

`to nuon` now properly escapes quotes in column names.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

All tests pass, including workspace and stdlib tests.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-29 07:43:26 -05:00
3ec76af96e Add Debian Dockerfile (#14193)
# Description

Add Dockerfile for Debian/Ubuntu images.
Related to #14171 and PR #14191

This is largely similar to the Alpine version, however there are some
minor differences:
- I've specifically added Debian Bookworm here to provide some stability
when new major versions are released. We can bump the supported versions
(perhaps LTS only) as needed.
- I moved the creation of the nushell user until later to avoid a
warning about the nu binary not (yet) being available.
- Debian doesn't come with wget or curl. I've added wget to be similar
to Alpine. I tried creating a multi-layer version to avoid installing
wget (reduced attack surface), but the image was bigger due to the extra
layer, so it didn't seem worth being different.

I can transfer the relevant changes to the Alpine image if we want to
keep them easily diffable?

# User-Facing Changes

While this provides a Debian image by default, an Ubuntu image can be
created from it by changing to `FROM ubuntu:noble`. We could later
supply that as an optional argument from the build workflow to be able
to build different distros and supported versions.

# Tests + Formatting

The images produced for Debian/Ubuntu are ~75 MB bigger, as listed by
`docker images`:

```
REPOSITORY                                       TAG             IMAGE ID       CREATED        SIZE
nu-alpine                                        latest          71c0216eddd9   44 years ago   167MB
nu-debian                                        latest          cce3d91fc77c   44 years ago   243MB
nu-ubuntu                                        latest          ce90497da806   44 years ago   240MB
```

I've tested a few nu commands, including polars. It seems to work okay.
It makes sense to add some container-based tests once the workflows are
available. I'll probably pick that up later when @hustcer has completed
the migration of his workflows. Perhaps invoking a nushell-based test
suite if one is available. The toolkit seems to rely on cargo and the
source being available, which of course won't work here.
2024-10-29 20:40:23 +08:00
b8efd2a347 ansi name for clear-scrollback code (#14184)
Related to #14181

# Description

Our understanding of `ESC[3J` has apparently been wrong. And I say "our"
because I posted a [Super User
answer](https://superuser.com/a/1738611/1210833) a couple of years ago
with the same misconception (now fixed). In addition, the [crossterm
crate
doc](https://docs.rs/crossterm/latest/crossterm/terminal/enum.ClearType.html)
is wrong on the topic.

`ESC[3J` doesn't clear the screen plus the scrollback; it *only* clears
the scrollback. Reference the official [Xterm Control Sequences
doc](https://www.xfree86.org/4.8.0/ctlseqs.html).

> CSI P s J
> 
> Erase in Display (ED)
> 
> P s = 0 → Erase Below (default)
> P s = 1 → Erase Above
> P s = 2 → Erase All
> P s = 3 → Erase Saved Lines (xterm)

This also means that:

```nu
$"(ansi clear_entire_screen_plus_buffer)"
```

... doesn't.

This PR updates it to `ansi clear_scrollback_buffer` (short-code remains
the same).

# User-Facing Changes

Breaking-change: `ansi clear_entire_screen_plus_buffer` is renamed `ansi
clear_scrollback_buffer`

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

Self-documenting command via `ansi -l`
2024-10-29 07:01:32 -05:00
9083157baa support table literal syntax in join right-table argument (#14190)
# Description

Makes the `join` `right-table` argument support table literal notation instead of
parsing it as a column list (which was treated as empty data):

```diff
[{a: 1}] | join [[a]; [1]] a | to nuon
-[]
+[[a]; [1]]
```

Fixes #13537, fixes #14134
2024-10-29 06:37:44 -05:00
6cdc9e3b77 Fix LSP non-ascii characters offset issues. (#14002)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->
This PR is supposed to fix #13582 and #11522, as well as related goto
definition/reference issues (wrong positions when non-ASCII characters
appear earlier on the line).

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

<img width="411" alt="image"
src="https://github.com/user-attachments/assets/9a81953c-81b2-490d-a842-14ccaefd6972">

Changes:
1. span/completion should use byte offsets instead of character indices
2. LSP Position-related operations in Ropey continue to use character indices

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Should be none, tested in neovim with config:
```lua
require("lspconfig").nushell.setup({
  cmd = {
    "nu",
    "-I",
    vim.fn.getcwd(),
    "--no-config-file",
    "--lsp",
  },
  filetypes = { "nu" },
})
```

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

The parameters of tests::complete_command_with_utf_line were fixed to align with
real LSP requests (character index, not byte offset).
As for issue_11522.nu, it was tested manually:

<img width="520" alt="image"
src="https://github.com/user-attachments/assets/45496ba8-5a2d-4998-9190-d7bde31ee72c">


# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-29 06:35:37 -05:00
f8d4adfb7a Add the history import command (again) (#14083)
# Description

This is mainly https://github.com/nushell/nushell/pull/13450 (which got
reverted). Additionally:
 - always clear IDs on import, disallow specifying IDs when piping
 - added extra tests
 - create backup of the history

# User-Facing Changes

New command: `history import`
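
A minimal usage sketch; the record shape (in particular the `command_line` column name) is a guess for illustration, and IDs cannot be supplied when piping:

```nushell
# Pipe history entries in as records (the column name here is hypothetical).
[{command_line: "ls ~/projects"}] | history import
```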

# Tests + Formatting

Added mostly integration tests and a few smaller unit tests.
2024-10-29 06:34:48 -05:00
719d9aa83c provide a common implementation for query string conversions in url join and url build-query (#14173)
Addresses one of the points in #14162

# Description

Factors out part of the `url::build_query::to_url` function into a
separate function `url::query::record_to_qs()`, which is then used in
both `url::build_query` and `url::join`.

# User-Facing Changes

Like `url build-query` (after #14073), `url join` will allow list
values in `params`, and the behavior of the two commands will be the same.

```nushell
> {a: ["one", "two"], b: "three"} | url build-query
"a=one&a=two&b=three"

> {scheme: "http", host: "host", params: {a: ["one", "two"], b: "three"}} | url join 
"http://host?a=one&a=two&b=three"
```

# Tests + Formatting

Added an example to `url join` for the new behavior.
2024-10-29 06:33:14 -05:00
9ebaa737aa feat: stor insert accepts lists (#14175)
Closes #11433 
# Description

This feature implements passing a list into `stor insert` through
pipeline.
```bash
stor create --table-name nudb --columns {bool1: bool, int1: int, float1: float} ;
[[bool1 int1 float1]; [true 5 1.1], [false 8 3.14]] | stor insert --table-name nudb
```
```bash
stor create --table-name files --columns {name: str, type: str, size: int, modified: datetime} ;
ls | stor insert --table-name files
 ```

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-10-29 06:32:55 -05:00
88b0982dac allow oem code pages to be used to decode text (#14187)
# Description

This PR allows OEM code pages to be used in decoding by specifying the
code page number.

## Before

![image](https://github.com/user-attachments/assets/27f5d288-49f1-4743-a2fc-154f5291d190)
## After (umlauts)

![image](https://github.com/user-attachments/assets/d37c11be-b1fe-4159-822d-7d38018e1c57)

closes https://github.com/nushell/nushell/issues/14168

I abstracted the decoding a bit. Here are my function comments on
how/why.
```rust
// Since we have two different decoding mechanisms, we allow oem_cp to be
// specified by only a number like `open file | decode 850`. If this decode
// parameter parses as a usize then we assume it was intentional and use oem_cp
// crate. Otherwise, if it doesn't parse as a usize, we assume it was a string
// and use the encoding_rs crate to try and decode it.
```
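
A usage sketch based on the comment above (the file name is hypothetical):

```nushell
# Decode CP850 text by passing the code page number instead of an encoding name.
open --raw legacy-cp850.txt | decode 850
```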

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-29 06:32:35 -05:00
8c2e12ad79 Update the dockerfile for alpine image (#14191)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Update the Dockerfile for the Alpine image; related to
https://github.com/nushell/nushell/issues/14171:

1.  Add armv7 arch support
2. Add more opencontainers labels, see:
https://github.com/opencontainers/image-spec/blob/main/annotations.md#pre-defined-annotation-keys

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

- Nushell docker images will be provided for each official release
- There will be a `nushell:nightly` tag for the latest nightly release

# After Submitting

1. The workflow that builds the images will be provided in the
[nushell/nightly](https://github.com/nushell/nightly) repo later
2. The Alpine image will be provided as the default; we may also add Debian
images later
2024-10-29 06:32:09 -05:00
2c31b3db07 Ensure default config files end with a new line (#14192)
# Description

This allows users to immediately add new lines to the configuration in
the usual way.
2024-10-29 10:12:55 +01:00
eedf833b6f Send both 2J and 3J on clear (#14181)
Fixes #14176

# Description

Since the Linux `/usr/bin/clear` binary doesn't exhibit the issue in
#14176, I checked to see what ANSI escapes it is emitting:

```nu
nu -c '^clear; "111\n222\n333"' | less
# or
bash -c 'clear -x; echo -e "111\n222\n333"' | less
```

Both show the same thing:

```
ESC[HESC[2JESC[3J111
222
333
(END)
```

This is the equivalent of:

```nu
$"(ansi home)(ansi clear_entire_screen)(ansi clear_entire_screen_plus_buffer)111\n222\n333"
```

However, our internal `clear` is sending only the Home and 3J. While
this *should*, in theory, work, it's (a) clear that it doesn't, and (b)
`/usr/bin/clear` seemingly knows this and already has the solution (or
at least workaround). From looking at the `ncurses` source, it appears
it is getting this information from the terminal capabilities. That
said, support for `2J` and `3J` is fairly universal, and it's what we
send in `clear` and `clear --keep-scrollback` anyway, so there's no harm
AFAICT in sending both like `/usr/bin/clear` does.

Also tested and fixes the issue on Windows. Note that PowerShell
`Clear-Host` also did not have the issue.

Side-note: It's interesting that on Tmux, which doesn't support 2J and
3J, that `/usr/bin/clear` knows this and doesn't send those codes,
sending just an escape-[J instead. However, Nushell's `clear`, of
course, isn't checking terminal capabilities, and is continuing to send
the unsupported codes. Fortunately this doesn't appear to cause any
issues on Tmux.

# User-Facing Changes

None, AFAICT - Bugfix only.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A
2024-10-28 06:42:18 -05:00
69d81cc065 add command_type to help (#14165)
# Description

This PR adds an indicator when listing subcommands. That indicator tells
whether the command is a plugin, alias, or custom_command.

![image](https://github.com/user-attachments/assets/02889f8a-17b4-4678-bb44-3a487b3d1066)

I changed some of the API to make this work a little easier, namely
`get_signatures()` is now `get_signatures_and_declids()`. It was used in
only one other place (run-external), so I thought it was fine to change
it.

There is a long-standing issue with aliases where they reference the
command name instead of the alias name. This PR doesn't fix that bug.
Example.
```nushell
❯ alias "str fill" = str wrap
```
```nushell
❯ str
... other stuff
Subcommands:

  str wrap (alias) - Alias for `str wrap`
  str wrap (plugin) - Wrap text passed into pipeline.

```


# User-Facing Changes
Slightly different output of subcommands.
2024-10-24 19:06:49 +02:00
af9c31152a Add metadata on open --raw with bytestreams (#14141)
# Description

This PR closes #14137 and allows the display hook to be set on byte
streams. So, with a hook like the one below:
```nushell
display_output: {
    metadata access {|meta| match $meta.content_type? {
        "application/x-nuscript" | "application/x-nuon" | "text/x-nushell" => { nu-highlight },
        "application/json" => { ^bat --language=json --color=always --style=plain --paging=never },
        _ => {},
        }
    } | table
}
```
You could type `open toolkit.nu` and the text of toolkit.nu would be
highlighted by nu-highlight. This PR also changes the way content-type
is assigned with `open`. Previously it would only assign it if `--raw`
was specified.

Lastly, it changes the `is_external()` function so that only
`ByteStreamSource::Child` counts as external, instead of both Child and
`ByteStreamSource::File`. Again, this was to allow the hook to function
properly. I'm not sure what negative ramifications changing
`is_external()` could have, but there may be some?

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-23 16:50:15 -05:00
abb6fca5e3 make adding newlines with to text more consistent and opt-out-able (#14158)
# Description

This PR tries to make `to text` more consistent with how it adds
newlines and also gives you an opt-out `--no-newline` option.

![image](https://github.com/user-attachments/assets/e4976ce6-c685-47a4-8470-4947970daf47)


I wasn't sure how to change the `PipelineData::ByteStream` match arm. I
figure something needs to be done there but I'm not sure how to do it.


# User-Facing Changes
newlines are more consistent.
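
A small sketch of the new flag (the exact trailing-newline behavior is as described above; the input value is just illustrative):

```nushell
# With the flag, the trailing newline is dropped from the output.
"hello" | to text --no-newline
```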

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-23 16:49:51 -05:00
3ec1c40320 Introduce footer_inheritance option (#14070)
```nu
$env.config.table.footer_inheritance = true
```

close #14060
2024-10-23 19:45:47 +02:00
619211c1bf Bump brotli to 6.0.0 (#14161)
This deduplicates our dependency on `brotli`
2024-10-23 19:40:37 +02:00
3a685049da add name to $env.config.keybindings (#14159)
# Description

This PR adds the `name` column back to keybindings.


This may be considered a hack since the reedline keybinding has no spot
for name, but it seems to work.
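
A quick, hedged example of reading the restored column (record layout assumed):

```nushell
# list configured keybindings with their names
$env.config.keybindings | select name modifier keycode mode
```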
2024-10-23 19:23:41 +02:00
ae54d05930 Upgrade to polars 0.43 (#14148)
Upgrades the polars plugin to polars version 0.43
2024-10-23 19:14:24 +02:00
e7c4597ad0 Bump uuid from 1.10.0 to 1.11.0 (#14155)
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.10.0 to 1.11.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/uuid-rs/uuid/releases">uuid's
releases</a>.</em></p>
<blockquote>
<h2>1.11.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Upgrade zerocopy to 0.8 by <a
href="https://github.com/yotamofek"><code>@​yotamofek</code></a> in <a
href="https://redirect.github.com/uuid-rs/uuid/pull/771">uuid-rs/uuid#771</a></li>
<li>Prepare for 1.11.0 release by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/uuid-rs/uuid/pull/772">uuid-rs/uuid#772</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/yotamofek"><code>@​yotamofek</code></a>
made their first contribution in <a
href="https://redirect.github.com/uuid-rs/uuid/pull/771">uuid-rs/uuid#771</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/uuid-rs/uuid/compare/1.10.0...1.11.0">https://github.com/uuid-rs/uuid/compare/1.10.0...1.11.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4473398413"><code>4473398</code></a>
Merge pull request <a
href="https://redirect.github.com/uuid-rs/uuid/issues/772">#772</a> from
uuid-rs/cargo/1.11.0</li>
<li><a
href="59fbb1e695"><code>59fbb1e</code></a>
prepare for 1.11.0 release</li>
<li><a
href="d9b34e7c93"><code>d9b34e7</code></a>
Merge pull request <a
href="https://redirect.github.com/uuid-rs/uuid/issues/771">#771</a> from
yotamofek/zerocopy_0.8</li>
<li><a
href="14b24206c6"><code>14b2420</code></a>
Upgrade zerocopy to 0.8</li>
<li>See full diff in <a
href="https://github.com/uuid-rs/uuid/compare/1.10.0...1.11.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=uuid&package-manager=cargo&previous-version=1.10.0&new-version=1.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-23 13:54:16 +08:00
09c9495015 Bump bytes from 1.7.1 to 1.8.0 (#14156)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.1 to 1.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/releases">bytes's
releases</a>.</em></p>
<blockquote>
<h2>Bytes 1.8.0</h2>
<h1>1.8.0 (October 21, 2024)</h1>
<ul>
<li>Guarantee address in <code>split_off</code>/<code>split_to</code>
for empty slices (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/740">#740</a>)</li>
</ul>
<h2>Bytes 1.7.2</h2>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md">bytes's
changelog</a>.</em></p>
<blockquote>
<h1>1.8.0 (October 21, 2024)</h1>
<ul>
<li>Guarantee address in <code>split_off</code>/<code>split_to</code>
for empty slices (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/740">#740</a>)</li>
</ul>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c45697ce42"><code>c45697c</code></a>
chore: prepare bytes v1.8.0 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/741">#741</a>)</li>
<li><a
href="0ac54ca706"><code>0ac54ca</code></a>
Guarantee address in split_off/split_to for empty slices (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/740">#740</a>)</li>
<li><a
href="d7c1d658d9"><code>d7c1d65</code></a>
chore: prepare bytes v1.7.2 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/736">#736</a>)</li>
<li><a
href="ac46ebdd46"><code>ac46ebd</code></a>
ci: update nightly to nightly-2024-09-15 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/734">#734</a>)</li>
<li><a
href="79fb85323c"><code>79fb853</code></a>
fix: apply sign extension when decoding int (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
<li><a
href="291df5acc9"><code>291df5a</code></a>
Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
<li><a
href="ed7d5ff39e"><code>ed7d5ff</code></a>
test: ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/bytes/compare/v1.7.1...v1.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bytes&package-manager=cargo&previous-version=1.7.1&new-version=1.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-23 13:53:53 +08:00
e05f387632 update to reedline commit 9cb1128 (#14146)
# Description

Update to the latest reedline commit.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-22 11:57:41 -05:00
9870c7c9a6 Defensive handling of errors when transposing (#14096)
# Description
This PR aims to close #14027, in which it was noticed that the transpose
command "swallows" error messages.

*Note that in exploring the linked issue, [other situations were
identified](https://github.com/nushell/nushell/issues/14027#issuecomment-2414602880)
which also produce inconsistent behaviour. These have knowingly been
omitted from this PR, to minimize its scope, and since they seem to have
a different cause. It's probably best to make a separate issue/PR in
which to tackle a broader scan of error handling, with a suspected
relation to streams.*

# User-Facing Changes
The user will see errors from deeper in the pipeline, in case the errors
originated there.

# Tests + Formatting
Toolkit PR check was run successfully.

One test was added, covering this exact situation, in order to prevent
regressions.
The bug is relatively obscure, so it may be prone to reappear during
refactorings.
2024-10-22 11:30:48 -05:00
3f75b6b371 error when closure param lists aren't terminated by | (#14095)
Fixes #13757, fixes #9562

# User-Facing Changes

- `unclosed |` is returned for malformed closure parameters:

```
{ |a }
```

- Parameter list closing pipes are highlighted as part of the closure
2024-10-22 10:40:45 -05:00
04fed82e5e Feature url build_query accepts records with lists of strings (#14073)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description

<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->

Swagger supports lists (a.k.a. arrays) in query parameters:

https://swagger.io/docs/specification/v3_0/serialization/
It supports three different styles:
- explode=true
- spaceDelimited
- pipeDelimited
explode=true is the default and hence the most common. It is also the
hardest to use inside of nushell, as the others are just a `string join`
away. This commit adds lists with the explode=true format.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Before:

: {a[]: [one two three], b: four} | url build-query
Error: nu:🐚:unsupported_input
× Unsupported input
╭─[entry #33:1:1]
1 │ {a[]: [one two three], b: four} | url build-query
· ───────────────┬─────────────── ───────┬───────
· │ ╰── Expected a record with string values
· ╰── value originates from here
       ╰────

After:

: {a[]: [one two three], b: four} | url build-query
    a%5B%5D=one&a%5B%5D=two&a%5B%5D=three&b=four


Despite reading CONTRIBUTING.md, I didn't get approval before making the
change. My judgment is that this doesn't qualify as "changing something
significantly".

# Tests + Formatting

I added the Example instance for the automatic tests. I couldn't figure
out how to add an Example for the error case, so I did that with manual
testing. E.g.:

: {a[]: [one two [three]], b: four} | url build-query
Error: nu:🐚:unsupported_input

× Unsupported input
╭─[entry #3:1:1]
1 │ {a[]: [one two [three]], b: four} | url build-query
· ────────────────┬──────────────── ───────┬───────
· │ ╰── Expected a record with list of string values
· ╰── value originates from here
       ╰────

: {a[]: [one two 3hr], b: four} | url build-query
Error: nu:🐚:unsupported_input

× Unsupported input
╭─[entry #4:1:1]
1 │ {a[]: [one two 3hr], b: four} | url build-query
· ──────────────┬────────────── ───────┬───────
· │ ╰── Expected a record with list of string values
· ╰── value originates from here
       ╰────
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

I ran the four cargo commands on my local machine. I had to run the
tests with `LANG=C` and `-j 1`, and even then I got one failure:

thread 'commands::umkdir::mkdir_umask_permission' panicked at
crates/nu-command/tests/commands/umkdir.rs:148:9:
assertion `left == right` failed: Most *nix systems have 0o00022 as the
umask. So directory permission should be 0o40755 = 0o40777 & (!0o00022)
  left: 16893
 right: 16877

but this isn't related to this change (I don't seem to be running a typical
*nix system, and I don't have a lot of RAM for the number of cores). The
other three cargo commands didn't produce errors or warnings.

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

I will add the new example to [the
documentation](https://github.com/nushell/nushell.github.io).

# Open questions / possible future work

Things I noticed and would like to mention. I'm open to adding them, but I
don't think I'm deep enough into nushell to tackle them proactively.

## Add an argument for the other query parameter list styles

I don't know how frequent they are and I currently don't need them, so
following KISS I didn't add them.
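
For reference, a hedged sketch of how the other two styles can be approximated today with the `str join` command (the `tags` parameter name is made up for the example):

```nushell
# spaceDelimited: join the list with spaces before building the query
{tags: ([one two three] | str join " "), b: four} | url build-query

# pipeDelimited: same idea with a pipe separator
{tags: ([one two three] | str join "|"), b: four} | url build-query
```

Note that `url build-query` will percent-encode the separators (`%20`, `%7C`).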

## long input_span marked

In e.g.:

: {a[]: [one two 3hr], b: four} | url build-query
Error: nu:🐚:unsupported_input

× Unsupported input
╭─[entry #4:1:1]
1 │ {a[]: [one two 3hr], b: four} | url build-query
· ──────────────┬────────────── ───────┬───────
· │ ╰── Expected a record with list of string values
· ╰── value originates from here
       ╰────

the entire record is marked as input_span instead of just the "3hr" that
is causing the problem. Changing that would be trivial, but I'm not deep
enough into nushell to understand all the consequences of changing that.


## Error message says string values despite accepting numbers etc.

The error message says it only accepts strings, even though it also accepts
numbers etc. (anything it can coerce into a string). That wording predates
this change and I couldn't find a better phrasing myself, so I simply added
"list of strings" to it.
2024-10-22 10:38:25 -05:00
f3a1dfef95 Fix panic if tokens are placed after a redirection (#14035)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx

you can also mention related issues, PRs or discussions!
-->

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
fixes #13835

The `concat` function from `span.rs` assumes that two consecutive span
intervals must overlap. But when parsing `let` and `mut` expressions, we
call `parts_including_redirection`, which chains two slices of spans and
leads to that assumption not holding. So my solution here is to sort the
spans after chaining.


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-22 10:37:03 -05:00
f738932bbd Fix range contains (#14011)
# Description

This PR changes the range contains logic to take the step into account. 

```nushell
# before
2 in 1..3.. # true

# now
2 in 1..3.. # false
```

---

I encountered another issue while adding tests. Due to floating point
precision, `2.1 in 1..1.1..3` will return `false`. The floating point
error is even bigger than `f64::EPSILON` (`0.09999999999999876` vs
`2.220446049250313e-16`). This issue disappears with bigger numbers.

I tried a different algorithm (checking if the estimated number of steps
is close enough to any integer) but the results are still pretty bad:

```rust
let n_steps = (value - self.start) / self.step; // 14.999999999999988
(n_steps - n_steps.round()).abs() < f64::EPSILON // returns false
```

Maybe it can be shipped like this; the REPL already has floating point
errors (`1.1 - 1` returns `0.10000000000000009`). Or maybe there's a way
to fix this that I didn't think of. I'm open to ideas! But in any case,
performing this kind of check on a range of floats seems more niche
than doing it on a range of ints.
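
A small illustration of the new step-aware membership and the float caveat above (values taken from this description):

```nushell
2 in 1..3..10     # false: 2 is not on the step sequence 1, 3, 5, ...
3 in 1..3..10     # true
2.1 in 1..1.1..3  # false today, due to accumulated floating point error
```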

# User-Facing Changes

Code that depended on this behavior to check if a number is between
`start` and `end` will potentially return a different value.

# Tests + Formatting

# After Submitting
2024-10-22 10:34:41 -05:00
4968b6b9d0 fix error when exporting consts with type signatures in modules (#14118)
Fixes #14023

# Description

- Prevents "failed to find added variable" when modules export constants
  with type signatures:

```nushell
> module foo { export const bar: int = 2 }
Error: nu::parser::unknown_state

  × Internal error.
   ╭─[entry #1:1:21]
 1 │ module foo { export const bar: int = 2 }
   ·                     ─────────┬────────
   ·                              ╰── failed to find added variable
```

- Returns `name_is_builtin_var` errors for names with type signatures:

```nushell
> let env: string = "";
Error: nu::parser::name_is_builtin_var

  × `env` used as variable name.
   ╭─[entry #1:1:5]
 1 │ let env: string = "";
   ·     ─┬─
   ·      ╰── already a builtin variable
```
2024-10-22 11:54:31 +02:00
ee97c00818 Make contributor image wider (#14138)
With this many contributors you otherwise have to scroll really far to
get down to the license info.

The alternative would be to limit the number of faces we show, but it is
cool to have them all (as long as the generated svg doesn't take too
long to load or generate)
2024-10-21 21:59:52 +02:00
1dbd431117 try and fix osc633 escaping yet again (#14140)
# Description

This PR is meant to fix the escaping in the osc633 implementation from
[PR 14008](https://github.com/nushell/nushell/pull/14008) that is
specifically for vscode. The idea is to try and follow these rules
better.
https://code.visualstudio.com/docs/terminal/shell-integration#_vs-code-custom-sequences-osc-633-st

Previously, it wouldn't escape all the characters and would only escape
characters while typing escape characters. Now it should take what was
typed and escape it if necessary.
2024-10-21 21:57:58 +02:00
09ab583f64 add start_time to ps -l on macos (#14127)
# Description

This PR adds `start_time` to the macOS `ps -l` command. It was requested in
Discord. `start_time` is displayed in `Local` time.


![image](https://github.com/user-attachments/assets/b3743cde-af43-4756-9e2a-54689104fb25)
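
A hedged example of reading the new column (other columns unchanged):

```nushell
# show the five most recently started processes on macOS
ps -l | sort-by start_time | last 5 | select pid name start_time
```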


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

/cc @cablehead
2024-10-21 11:55:30 -05:00
9ad6d13982 Add slice as a search term on range (#14128)
Not to be confused with `seq`, which is similar to our range type:
`range` does a slice based on a range.
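
For discoverability, a small sketch of the slicing this refers to (assuming the filter's existing inclusive-range behavior; only the search term changes here):

```nushell
[a b c d e] | range 1..3   # => [b c d]
```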
2024-10-21 12:55:03 +02:00
8d4426f2f8 add is_const to help commands and scope commands (#14125)
# Description

This PR adds `is_const` to `help commands` and `scope commands` so we
can see which commands are const commands.


![image](https://github.com/user-attachments/assets/f2269f9d-5042-40e4-b506-34d69096fcd1)
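
A hedged usage sketch (column name taken from this PR):

```nushell
# list which commands are parse-time constants
scope commands | where is_const == true | select name
help commands | where is_const == true | first 5
```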
2024-10-21 12:54:18 +02:00
8c8f795e9e add rendered and json error messages in try/catch (#14082)
# Description

This PR adds a couple more options for dealing with try/catch errors. It
adds a `json` version of the error and a `rendered` version of the
error. It also respects the error_style configuration point.

![image](https://github.com/user-attachments/assets/32574f07-f511-40c0-8b57-de5f6f13a9c4)
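
A hedged sketch of reading the new fields in a catch block (assuming the error record exposes them as `$err.rendered` and `$err.json`):

```nushell
try {
    error make {msg: "something went wrong"}
} catch {|err|
    print $err.rendered   # rendered form, respecting $env.config.error_style
    print $err.json       # JSON form of the same error
}
```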


# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-20 23:14:11 +02:00
7f2f67238f allow group-by and split-by to work with other values (#14086)
# Description

This PR updates `group-by` and `split-by` to allow other nushell Values
to be used, namely bools.

### Before
```nushell
❯ [false, false, true, false, true, false] | group-by | table -e
Error: nu:🐚:cant_convert

  × Can't convert to string.
   ╭─[entry #1:1:2]
 1 │ [false, false, true, false, true, false] | group-by | table -e
   ·  ──┬──
   ·    ╰── can't convert bool to string
   ╰────
```
### After
```nushell
❯ [false, false, true, false, true, false] | group-by | table -e
╭───────┬───────────────╮
│       │ ╭───┬───────╮ │
│ false │ │ 0 │ false │ │
│       │ │ 1 │ false │ │
│       │ │ 2 │ false │ │
│       │ │ 3 │ false │ │
│       │ ╰───┴───────╯ │
│       │ ╭───┬──────╮  │
│ true  │ │ 0 │ true │  │
│       │ │ 1 │ true │  │
│       │ ╰───┴──────╯  │
╰───────┴───────────────╯
```

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-20 23:14:11 +02:00
740fe942c1 Reduce duplicate dependencies on the windows crate (#14105)
Nushell currently depends on three different versions of the `windows`
crate: `0.44.0`, `0.52.0`, and `0.54.0`. This PR bumps several
dependencies so that the `nu` binary only depends on `0.56.0`.

On my machine, this PR makes `cargo build` about 10% faster.

The polars plugin still uses its own version of the `windows` crate
though, which is not ideal. We'll need to bump the `polars` crate to fix
that, but it breaks a lot of our code. (`polars 1.0` release anyone?)
2024-10-20 23:14:11 +02:00
7c5dcbb3fc Update to rust 1.80.1 (#14106)
This can be merged on 10/17 once 1.82.0 is out.

---------

Co-authored-by: Wind <WindSoilder@outlook.com>
2024-10-20 23:14:11 +02:00
7e055810b1 add like and not-like operators as synonyms for the regex operators =~ and !~ (#14072)
# Description

This PR adds `like` as a synonym for `=~` and `not-like` as a synonym
for `!~`. This is mainly a quality-of-life change to help people
who think in SQL.


![image](https://github.com/user-attachments/assets/a0b142cd-30c9-487d-b755-d6da0a0874ec)

closes #13261
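
Illustrative usage (regex semantics are unchanged from `=~` / `!~`):

```nushell
"hello world" like "wor"          # true, same as "hello world" =~ "wor"
"hello world" not-like '^wor'     # true, same as !~
ls | where name like '\.toml$'    # rows whose name matches the regex
```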

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the
tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-10-20 23:12:57 +02:00
5758993e9f Add count to uniq search terms (#14108)
Adds "count" to uniq's search terms, to facilitate discovery of the
`--count` option
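
For example:

```nushell
# one row per distinct value, with how many times it occurred
[apple apple banana] | uniq --count
```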
2024-10-20 23:12:57 +02:00
d7014e671d use command: Don't create a variable with empty record if it doesn't define any constants (#14051)
# Description
Fixes: #13967

The key change lies in `nu-protocol/src/module.rs`: when resolving an
import pattern, nushell only needs to bring in a `$module` record
variable if the module defines any constants.

# User-Facing Changes
```nushell
module spam {}
use spam
```
Will no longer create a `$spam` variable with an empty record.

# Tests + Formatting
Adjusted some tests and added some tests.
2024-10-20 23:12:57 +02:00
b0427ca9ff run ensure_flag_arg_type for short flag values (#14074)
Closes #13654

# User-Facing Changes

- Short flags are now fully type-checked,
  including null and record signatures for literal arguments:

```nushell
def test [-v: record<l: int>] {};
test -v null # error
test -v {l: ""} # error

def test2 [-v: int] {};
let v = ""
test2 -v $v # error
```

- `polars unpivot` `--index`/`--on` and `into value --columns`
now accept `list` values
2024-10-20 23:12:57 +02:00
3af575cce7 Make plugin list read state from plugin registry file as well (#14085)
# Description

[Context on
Discord](https://discord.com/channels/601130461678272522/855947301380947968/1292279795035668583)

**This is a breaking change, due to the removal of `is_running`.**

Some users find the `plugin list` command confusing, because it doesn't
show anything different after running `plugin add` or `plugin rm`. This
modifies the `plugin list` command to also look at the plugin registry
file to give some idea of how the plugins in engine state differ from
those in the plugin registry file.

The following values of `status` are now produced instead of
`is_running`:

- `added`: The plugin is present in the plugin registry file, but not in
the engine.
- `loaded`: The plugin is present both in the plugin registry file and
in the engine, but is not running.
- `running`: The plugin is currently running, and the `pid` column
should contain its process ID.
- `modified`: The plugin state present in the plugin registry file is
different from the state in the engine.
- `removed`: The plugin is still loaded in the engine, but is not
present in the plugin registry file.
- `invalid`: The data in the plugin registry file couldn't be
deserialized, and the plugin most likely needs to be added again.

Example (`commands` omitted):

```
╭──────┬─────────────────────┬────────────┬───────────┬──────────┬─────────────────────────────────────────────────────┬─────────╮
│    # │        name         │  version   │  status   │   pid    │                      filename                       │  shell  │
├──────┼─────────────────────┼────────────┼───────────┼──────────┼─────────────────────────────────────────────────────┼─────────┤
│    0 │ custom_values       │ 0.1.0      │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_custom_values      │         │
│    1 │ dbus                │ 0.11.0     │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_dbus               │         │
│    2 │ example             │ 0.98.1     │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_example            │         │
│    3 │ explore_ir          │ 0.3.0      │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_explore_ir         │         │
│    4 │ formats             │ 0.98.1     │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_formats            │         │
│    5 │ gstat               │ 0.98.1     │ running   │   236662 │ /home/devyn/.cargo/bin/nu_plugin_gstat              │         │
│    6 │ inc                 │ 0.98.1     │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_inc                │         │
│    7 │ polars              │ 0.98.1     │ added     │          │ /home/devyn/.cargo/bin/nu_plugin_polars             │         │
│    8 │ query               │ 0.98.1     │ removed   │          │ /home/devyn/.cargo/bin/nu_plugin_query              │         │
│    9 │ stress_internals    │ 0.98.1     │ loaded    │          │ /home/devyn/.cargo/bin/nu_plugin_stress_internals   │         │
╰──────┴─────────────────────┴────────────┴───────────┴──────────┴─────────────────────────────────────────────────────┴─────────╯

```

# User-Facing Changes

To `plugin list`:

* **Breaking:** The `is_running` column is removed and replaced with
`status`. Use `status == running` to filter equivalently.
* The `--plugin-config` from other plugin management commands is now
supported.
* Added an `--engine` flag which behaves more or less like before, and
doesn't load the plugin registry file at all.
* Added a `--registry` flag which only checks the plugin registry file.
All plugins appear as `added` since there is no state to compare with.

Because the default is to check both, the `plugin list` command might be
a little bit slower. If you don't need to check the plugin registry
file, the `--engine` flag does not load the plugin registry file at all,
so it should be just as fast as before.
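
Illustrative usage of the new flags and column (taken from the description above):

```nushell
plugin list | where status == running   # replacement for the old `is_running == true` filter
plugin list --engine                    # engine state only; skips the registry file entirely
plugin list --registry                  # registry file only; everything shows as `added`
```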

# Tests + Formatting

Added tests for `added` and `removed` statuses. `modified` and `invalid`
are a bit more tricky so I didn't try.

# After Submitting

- [ ] update documentation that references the `plugin list` command
- [ ] release notes
2024-10-20 23:12:57 +02:00
f787d272e6 Implemented polars unnest (#14104)
# Description
Provides the ability to decompose struct columns into separate columns for
each field:
<img width="655" alt="Screenshot 2024-10-16 at 09 57 22"
src="https://github.com/user-attachments/assets/6706bd36-8d38-4365-b58d-ba82f2d5ba9a">

# User-Facing Changes
- provides a new command `polars unnest` for decomposing struct fields
into separate columns.
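
A hypothetical invocation (the exact signature isn't shown here; the column name and argument form are assumed):

```nushell
# assume $df is a dataframe with a struct column `person` holding `name` and `age`
$df | polars unnest person
```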
2024-10-20 23:12:57 +02:00
f061c9a30e Bump to 0.99.2 (#14136) 2024-10-20 23:12:41 +02:00
509 changed files with 17171 additions and 8343 deletions

View File

@ -162,3 +162,34 @@ jobs:
else
echo "no changes in working directory";
fi
build-wasm:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4.1.7
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.10.1
- name: Add wasm32-unknown-unknown target
run: rustup target add wasm32-unknown-unknown
- run: cargo build -p nu-cmd-base --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-cmd-extra --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-cmd-lang --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-color-config --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-command --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-derive-value --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-engine --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-glob --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-json --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-parser --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-path --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-pretty-hex --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-protocol --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-std --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-system --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-table --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-term-grid --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nu-utils --no-default-features --target wasm32-unknown-unknown
- run: cargo build -p nuon --no-default-features --target wasm32-unknown-unknown

View File

@ -170,7 +170,7 @@ jobs:
# REF: https://github.com/marketplace/actions/gh-release
# Create a release only in nushell/nightly repo
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.8
uses: softprops/action-gh-release@v2.0.9
if: ${{ startsWith(github.repository, 'nushell/nightly') }}
with:
prerelease: true

View File

@ -98,9 +98,10 @@ jobs:
TARGET: ${{ matrix.target }}
_EXTRA_: ${{ matrix.extra }}
# REF: https://github.com/marketplace/actions/gh-release
# WARN: Don't upgrade this action due to the release per asset issue.
# See: https://github.com/softprops/action-gh-release/issues/445
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.8
uses: softprops/action-gh-release@v2.0.5
if: ${{ startsWith(github.ref, 'refs/tags/') }}
with:
draft: true
@ -124,7 +125,7 @@ jobs:
- name: Create Checksums
run: cd release && shasum -a 256 * > ../SHA256SUMS
- name: Publish Checksums
uses: softprops/action-gh-release@v2.0.8
uses: softprops/action-gh-release@v2.0.5
with:
draft: true
files: SHA256SUMS

View File

@ -10,4 +10,4 @@ jobs:
uses: actions/checkout@v4.1.7
- name: Check spelling
uses: crate-ci/typos@v1.26.0
uses: crate-ci/typos@v1.28.4

2931
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -10,8 +10,8 @@ homepage = "https://www.nushell.sh"
license = "MIT"
name = "nu"
repository = "https://github.com/nushell/nushell"
rust-version = "1.79.0"
version = "0.99.1"
rust-version = "1.81.0"
version = "0.101.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -66,16 +66,16 @@ alphanumeric-sort = "1.5"
ansi-str = "0.8"
anyhow = "1.0.82"
base64 = "0.22.1"
bracoxide = "0.1.2"
brotli = "5.0"
bracoxide = "0.1.4"
brotli = "6.0"
byteorder = "1.5"
bytes = "1"
bytesize = "1.3"
calamine = "0.24.0"
calamine = "0.26.1"
chardetng = "0.1.17"
chrono = { default-features = false, version = "0.4.34" }
chrono-humanize = "0.2.3"
chrono-tz = "0.8"
chrono-tz = "0.10"
crossbeam-channel = "0.5.8"
crossterm = "0.28.1"
csv = "1.3"
@ -86,13 +86,13 @@ dirs = "5.0"
dirs-sys = "0.4"
dtparse = "2.0"
encoding_rs = "0.8"
fancy-regex = "0.13"
fancy-regex = "0.14"
filesize = "0.2"
filetime = "0.2"
fuzzy-matcher = "0.3"
heck = "0.5.0"
human-date-parser = "0.2.0"
indexmap = "2.6"
indexmap = "2.7"
indicatif = "0.17"
interprocess = "2.2.0"
is_executable = "1.0"
@ -103,22 +103,22 @@ log = "0.4"
lru = "0.12"
lscolors = { version = "0.17", default-features = false }
lsp-server = "0.7.5"
lsp-types = "0.95.0"
lsp-types = { version = "0.95.0", features = ["proposed"] }
mach2 = "0.4"
md5 = { version = "0.10", package = "md-5" }
miette = "7.2"
miette = "7.3"
mime = "0.3.17"
mime_guess = "2.0"
mockito = { version = "1.5", default-features = false }
multipart-rs = "0.1.11"
mockito = { version = "1.6", default-features = false }
multipart-rs = "0.1.13"
native-tls = "0.2"
nix = { version = "0.29", default-features = false }
notify-debouncer-full = { version = "0.3", default-features = false }
nu-ansi-term = "0.50.1"
num-format = "0.4"
num-traits = "0.2"
oem_cp = "2.0.0"
omnipath = "0.1"
once_cell = "1.20"
open = "5.3"
os_pipe = { version = "1.2", features = ["io_safety"] }
pathdiff = "0.2"
@ -127,25 +127,27 @@ pretty_assertions = "1.4"
print-positions = "0.6"
proc-macro-error = { version = "1.0", default-features = false }
proc-macro2 = "1.0"
procfs = "0.16.0"
procfs = "0.17.0"
pwd = "1.3"
quick-xml = "0.32.0"
quick-xml = "0.37.0"
quickcheck = "1.0"
quickcheck_macros = "1.0"
quote = "1.0"
rand = "0.8"
getrandom = "0.2" # pick same version that rand requires
rand_chacha = "0.3.1"
ratatui = "0.26"
rayon = "1.10"
reedline = "0.36.0"
reedline = "0.38.0"
regex = "1.9.5"
rmp = "0.8"
rmp-serde = "1.3"
ropey = "1.6.1"
roxmltree = "0.19"
rstest = { version = "0.18", default-features = false }
roxmltree = "0.20"
rstest = { version = "0.23", default-features = false }
rusqlite = "0.31"
rust-embed = "8.5.0"
scopeguard = { version = "1.2.0" }
serde = { version = "1.0" }
serde_json = "1.0"
serde_urlencoded = "0.7.1"
@ -153,30 +155,31 @@ serde_yaml = "0.9"
sha2 = "0.10"
strip-ansi-escapes = "0.2.0"
syn = "2.0"
sysinfo = "0.30"
sysinfo = "0.32"
tabled = { version = "0.16.0", default-features = false }
tempfile = "3.13"
terminal_size = "0.3"
titlecase = "2.0"
tempfile = "3.14"
terminal_size = "0.4"
titlecase = "3.0"
toml = "0.8"
trash = "3.3"
trash = "5.2"
umask = "2.1"
unicode-segmentation = "1.12"
unicode-width = "0.1"
ureq = { version = "2.10", default-features = false }
unicode-width = "0.2"
ureq = { version = "2.12", default-features = false }
url = "2.2"
uu_cp = "0.0.27"
uu_mkdir = "0.0.27"
uu_mktemp = "0.0.27"
uu_mv = "0.0.27"
uu_whoami = "0.0.27"
uu_uname = "0.0.27"
uucore = "0.0.27"
uuid = "1.10.0"
uu_cp = "0.0.28"
uu_mkdir = "0.0.28"
uu_mktemp = "0.0.28"
uu_mv = "0.0.28"
uu_touch = "0.0.28"
uu_whoami = "0.0.28"
uu_uname = "0.0.28"
uucore = "0.0.28"
uuid = "1.11.0"
v_htmlescape = "0.15.0"
wax = "0.6"
which = "6.0.0"
windows = "0.54"
which = "7.0.0"
windows = "0.56"
windows-sys = "0.48"
winreg = "0.52"
@ -189,22 +192,22 @@ unchecked_duration_subtraction = "warn"
workspace = true
[dependencies]
nu-cli = { path = "./crates/nu-cli", version = "0.99.1" }
nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.99.1" }
nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.99.1" }
nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.99.1", optional = true }
nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.99.1" }
nu-command = { path = "./crates/nu-command", version = "0.99.1" }
nu-engine = { path = "./crates/nu-engine", version = "0.99.1" }
nu-explore = { path = "./crates/nu-explore", version = "0.99.1" }
nu-lsp = { path = "./crates/nu-lsp/", version = "0.99.1" }
nu-parser = { path = "./crates/nu-parser", version = "0.99.1" }
nu-path = { path = "./crates/nu-path", version = "0.99.1" }
nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.99.1" }
nu-protocol = { path = "./crates/nu-protocol", version = "0.99.1" }
nu-std = { path = "./crates/nu-std", version = "0.99.1" }
nu-system = { path = "./crates/nu-system", version = "0.99.1" }
nu-utils = { path = "./crates/nu-utils", version = "0.99.1" }
nu-cli = { path = "./crates/nu-cli", version = "0.101.0" }
nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.101.0" }
nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.101.0" }
nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.101.0", optional = true }
nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.101.0" }
nu-command = { path = "./crates/nu-command", version = "0.101.0" }
nu-engine = { path = "./crates/nu-engine", version = "0.101.0" }
nu-explore = { path = "./crates/nu-explore", version = "0.101.0" }
nu-lsp = { path = "./crates/nu-lsp/", version = "0.101.0" }
nu-parser = { path = "./crates/nu-parser", version = "0.101.0" }
nu-path = { path = "./crates/nu-path", version = "0.101.0" }
nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.101.0" }
nu-protocol = { path = "./crates/nu-protocol", version = "0.101.0" }
nu-std = { path = "./crates/nu-std", version = "0.101.0" }
nu-system = { path = "./crates/nu-system", version = "0.101.0" }
nu-utils = { path = "./crates/nu-utils", version = "0.101.0" }
reedline = { workspace = true, features = ["bashisms", "sqlite"] }
crossterm = { workspace = true }
@ -234,27 +237,32 @@ nix = { workspace = true, default-features = false, features = [
] }
[dev-dependencies]
nu-test-support = { path = "./crates/nu-test-support", version = "0.99.1" }
nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.99.1" }
nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.99.1" }
nu-test-support = { path = "./crates/nu-test-support", version = "0.101.0" }
nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.101.0" }
nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.101.0" }
assert_cmd = "2.0"
dirs = { workspace = true }
tango-bench = "0.6"
pretty_assertions = { workspace = true }
regex = { workspace = true }
rstest = { workspace = true, default-features = false }
serial_test = "3.1"
serial_test = "3.2"
tempfile = { workspace = true }
[features]
plugin = [
"nu-plugin-engine",
# crates
"nu-cmd-plugin",
"nu-plugin-engine",
# features
"nu-cli/plugin",
"nu-parser/plugin",
"nu-cmd-lang/plugin",
"nu-command/plugin",
"nu-protocol/plugin",
"nu-engine/plugin",
"nu-engine/plugin",
"nu-parser/plugin",
"nu-protocol/plugin",
]
default = [

View File

@ -58,7 +58,7 @@ For details about which platforms the Nushell team actively supports, see [our p
## Configuration
The default configurations can be found at [sample_config](crates/nu-utils/src/sample_config)
The default configurations can be found at [sample_config](crates/nu-utils/src/default_files)
which are the configuration files one gets when they startup Nushell for the first time.
It sets all of the default configuration to run Nushell. From here one can
@ -229,7 +229,7 @@ Please submit an issue or PR to be added to this list.
See [Contributing](CONTRIBUTING.md) for details. Thanks to all the people who already contributed!
<a href="https://github.com/nushell/nushell/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=nushell/nushell&max=750" />
<img src="https://contributors-img.web.app/image?repo=nushell/nushell&max=750&columns=20" />
</a>
## License

View File

@ -46,9 +46,6 @@ fn setup_stack_and_engine_from_command(command: &str) -> (Stack, EngineState) {
let mut stack = Stack::new();
// Support running benchmarks without IR mode
stack.use_ir = std::env::var_os("NU_DISABLE_IR").is_none();
evaluate_commands(
&commands,
&mut engine,

View File

@ -5,27 +5,27 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cli"
edition = "2021"
license = "MIT"
name = "nu-cli"
version = "0.99.1"
version = "0.101.0"
[lib]
bench = false
[dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.99.1" }
nu-command = { path = "../nu-command", version = "0.99.1" }
nu-test-support = { path = "../nu-test-support", version = "0.99.1" }
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.101.0" }
nu-command = { path = "../nu-command", version = "0.101.0" }
nu-test-support = { path = "../nu-test-support", version = "0.101.0" }
rstest = { workspace = true, default-features = false }
tempfile = { workspace = true }
[dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-path = { path = "../nu-path", version = "0.99.1" }
nu-parser = { path = "../nu-parser", version = "0.99.1" }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.99.1", optional = true }
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-utils = { path = "../nu-utils", version = "0.99.1" }
nu-color-config = { path = "../nu-color-config", version = "0.99.1" }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.101.0" }
nu-engine = { path = "../nu-engine", version = "0.101.0", features = ["os"] }
nu-path = { path = "../nu-path", version = "0.101.0" }
nu-parser = { path = "../nu-parser", version = "0.101.0" }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.101.0", optional = true }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", features = ["os"] }
nu-utils = { path = "../nu-utils", version = "0.101.0" }
nu-color-config = { path = "../nu-color-config", version = "0.101.0" }
nu-ansi-term = { workspace = true }
reedline = { workspace = true, features = ["bashisms", "sqlite"] }
@ -37,7 +37,6 @@ is_executable = { workspace = true }
log = { workspace = true }
miette = { workspace = true, features = ["fancy-no-backtrace"] }
lscolors = { workspace = true, default-features = false, features = ["nu-ansi-term"] }
once_cell = { workspace = true }
percent-encoding = { workspace = true }
sysinfo = { workspace = true }
unicode-segmentation = { workspace = true }

View File

@ -17,6 +17,7 @@ pub fn add_cli_context(mut engine_state: EngineState) -> EngineState {
CommandlineGetCursor,
CommandlineSetCursor,
History,
HistoryImport,
HistorySession,
Keybindings,
KeybindingsDefault,

View File

@ -0,0 +1,9 @@
// Each const is named after a HistoryItem field, and the value is the field name to be displayed to
// the user (or accept during import).
pub const COMMAND_LINE: &str = "command";
pub const START_TIMESTAMP: &str = "start_timestamp";
pub const HOSTNAME: &str = "hostname";
pub const CWD: &str = "cwd";
pub const EXIT_STATUS: &str = "exit_status";
pub const DURATION: &str = "duration";
pub const SESSION_ID: &str = "session_id";

View File

@ -5,6 +5,8 @@ use reedline::{
SqliteBackedHistory,
};
use super::fields;
#[derive(Clone)]
pub struct History;
@ -83,7 +85,8 @@ impl Command for History {
entries.into_iter().enumerate().map(move |(idx, entry)| {
Value::record(
record! {
"command" => Value::string(entry.command_line, head),
fields::COMMAND_LINE => Value::string(entry.command_line, head),
// TODO: This name is inconsistent with create_history_record.
"index" => Value::int(idx as i64, head),
},
head,
@ -176,13 +179,13 @@ fn create_history_record(idx: usize, entry: HistoryItem, long: bool, head: Span)
Value::record(
record! {
"item_id" => item_id_value,
"start_timestamp" => start_timestamp_value,
"command" => command_value,
"session_id" => session_id_value,
"hostname" => hostname_value,
"cwd" => cwd_value,
"duration" => duration_value,
"exit_status" => exit_status_value,
fields::START_TIMESTAMP => start_timestamp_value,
fields::COMMAND_LINE => command_value,
fields::SESSION_ID => session_id_value,
fields::HOSTNAME => hostname_value,
fields::CWD => cwd_value,
fields::DURATION => duration_value,
fields::EXIT_STATUS => exit_status_value,
"idx" => index_value,
},
head,
@ -190,11 +193,11 @@ fn create_history_record(idx: usize, entry: HistoryItem, long: bool, head: Span)
} else {
Value::record(
record! {
"start_timestamp" => start_timestamp_value,
"command" => command_value,
"cwd" => cwd_value,
"duration" => duration_value,
"exit_status" => exit_status_value,
fields::START_TIMESTAMP => start_timestamp_value,
fields::COMMAND_LINE => command_value,
fields::CWD => cwd_value,
fields::DURATION => duration_value,
fields::EXIT_STATUS => exit_status_value,
},
head,
)

View File

@ -0,0 +1,415 @@
use std::path::{Path, PathBuf};
use nu_engine::command_prelude::*;
use nu_protocol::HistoryFileFormat;
use reedline::{
FileBackedHistory, History, HistoryItem, ReedlineError, SearchQuery, SqliteBackedHistory,
};
use super::fields;
#[derive(Clone)]
pub struct HistoryImport;
impl Command for HistoryImport {
fn name(&self) -> &str {
"history import"
}
fn description(&self) -> &str {
"Import command line history"
}
fn extra_description(&self) -> &str {
r#"Can import history from input, either successive command lines or more detailed records. If providing records, available fields are:
command_line, id, start_timestamp, hostname, cwd, duration, exit_status.
If no input is provided, will import all history items from existing history in the other format: if current history is stored in sqlite, it will store it in plain text and vice versa.
Note that history item IDs are ignored when importing from file."#
}
fn signature(&self) -> nu_protocol::Signature {
Signature::build("history import")
.category(Category::History)
.input_output_types(vec![
(Type::Nothing, Type::Nothing),
(Type::List(Box::new(Type::String)), Type::Nothing),
(Type::table(), Type::Nothing),
])
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
example: "history import",
description:
"Append all items from history in the other format to the current history",
result: None,
},
Example {
example: "echo foo | history import",
description: "Append `foo` to the current history",
result: None,
},
Example {
example: "[[ command_line cwd ]; [ foo /home ]] | history import",
description: "Append `foo` ran from `/home` to the current history",
result: None,
},
]
}
fn run(
&self,
engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let ok = Ok(Value::nothing(call.head).into_pipeline_data());
let Some(history) = engine_state.history_config() else {
return ok;
};
let Some(current_history_path) = history.file_path() else {
return Err(ShellError::ConfigDirNotFound {
span: Some(call.head),
});
};
if let Some(bak_path) = backup(&current_history_path)? {
println!("Backed history to {}", bak_path.display());
}
match input {
PipelineData::Empty => {
let other_format = match history.file_format {
HistoryFileFormat::Sqlite => HistoryFileFormat::Plaintext,
HistoryFileFormat::Plaintext => HistoryFileFormat::Sqlite,
};
let src = new_backend(other_format, None)?;
let mut dst = new_backend(history.file_format, Some(current_history_path))?;
let items = src
.search(SearchQuery::everything(
reedline::SearchDirection::Forward,
None,
))
.map_err(error_from_reedline)?
.into_iter()
.map(Ok);
import(dst.as_mut(), items)
}
_ => {
let input = input.into_iter().map(item_from_value);
import(
new_backend(history.file_format, Some(current_history_path))?.as_mut(),
input,
)
}
}?;
ok
}
}
fn new_backend(
format: HistoryFileFormat,
path: Option<PathBuf>,
) -> Result<Box<dyn History>, ShellError> {
let path = match path {
Some(path) => path,
None => {
let Some(mut path) = nu_path::nu_config_dir() else {
return Err(ShellError::ConfigDirNotFound { span: None });
};
path.push(format.default_file_name());
path.into_std_path_buf()
}
};
fn map(
result: Result<impl History + 'static, ReedlineError>,
) -> Result<Box<dyn History>, ShellError> {
result
.map(|x| Box::new(x) as Box<dyn History>)
.map_err(error_from_reedline)
}
match format {
// Use a reasonably large value for maximum capacity.
HistoryFileFormat::Plaintext => map(FileBackedHistory::with_file(0xfffffff, path)),
HistoryFileFormat::Sqlite => map(SqliteBackedHistory::with_file(path, None, None)),
}
}
fn import(
dst: &mut dyn History,
src: impl Iterator<Item = Result<HistoryItem, ShellError>>,
) -> Result<(), ShellError> {
for item in src {
let mut item = item?;
item.id = None;
dst.save(item).map_err(error_from_reedline)?;
}
Ok(())
}
fn error_from_reedline(e: ReedlineError) -> ShellError {
// TODO: Should we add a new ShellError variant?
ShellError::GenericError {
error: "Reedline error".to_owned(),
msg: format!("{e}"),
span: None,
help: None,
inner: Vec::new(),
}
}
fn item_from_value(v: Value) -> Result<HistoryItem, ShellError> {
let span = v.span();
match v {
Value::Record { val, .. } => item_from_record(val.into_owned(), span),
Value::String { val, .. } => Ok(HistoryItem {
command_line: val,
id: None,
start_timestamp: None,
session_id: None,
hostname: None,
cwd: None,
duration: None,
exit_status: None,
more_info: None,
}),
_ => Err(ShellError::UnsupportedInput {
msg: "Only list and record inputs are supported".to_owned(),
input: v.get_type().to_string(),
msg_span: span,
input_span: span,
}),
}
}
fn item_from_record(mut rec: Record, span: Span) -> Result<HistoryItem, ShellError> {
let cmd = match rec.remove(fields::COMMAND_LINE) {
Some(v) => v.as_str()?.to_owned(),
None => {
return Err(ShellError::TypeMismatch {
err_message: format!("missing column: {}", fields::COMMAND_LINE),
span,
})
}
};
fn get<T>(
rec: &mut Record,
field: &'static str,
f: impl FnOnce(Value) -> Result<T, ShellError>,
) -> Result<Option<T>, ShellError> {
rec.remove(field).map(f).transpose()
}
let rec = &mut rec;
let item = HistoryItem {
command_line: cmd,
id: None,
start_timestamp: get(rec, fields::START_TIMESTAMP, |v| Ok(v.as_date()?.to_utc()))?,
hostname: get(rec, fields::HOSTNAME, |v| Ok(v.as_str()?.to_owned()))?,
cwd: get(rec, fields::CWD, |v| Ok(v.as_str()?.to_owned()))?,
exit_status: get(rec, fields::EXIT_STATUS, |v| v.as_int())?,
duration: get(rec, fields::DURATION, duration_from_value)?,
more_info: None,
// TODO: Currently reedline doesn't let you create session IDs.
session_id: None,
};
if !rec.is_empty() {
let cols = rec.columns().map(|s| s.as_str()).collect::<Vec<_>>();
return Err(ShellError::TypeMismatch {
err_message: format!("unsupported column names: {}", cols.join(", ")),
span,
});
}
Ok(item)
}
fn duration_from_value(v: Value) -> Result<std::time::Duration, ShellError> {
chrono::Duration::nanoseconds(v.as_duration()?)
.to_std()
.map_err(|_| ShellError::IOError {
msg: "negative duration not supported".to_string(),
})
}
fn find_backup_path(path: &Path) -> Result<PathBuf, ShellError> {
let Ok(mut bak_path) = path.to_path_buf().into_os_string().into_string() else {
// This isn't fundamentally a problem, but trying to work with OsString is a nightmare.
return Err(ShellError::IOError {
msg: "History path mush be representable as UTF-8".to_string(),
});
};
bak_path.push_str(".bak");
if !Path::new(&bak_path).exists() {
return Ok(bak_path.into());
}
let base_len = bak_path.len();
for i in 1..100 {
use std::fmt::Write;
bak_path.truncate(base_len);
write!(&mut bak_path, ".{i}").unwrap();
if !Path::new(&bak_path).exists() {
return Ok(PathBuf::from(bak_path));
}
}
Err(ShellError::IOError {
msg: "Too many existing backup files".to_string(),
})
}
fn backup(path: &Path) -> Result<Option<PathBuf>, ShellError> {
match path.metadata() {
Ok(md) if md.is_file() => (),
Ok(_) => {
return Err(ShellError::IOError {
msg: "history path exists but is not a file".to_string(),
})
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => return Ok(None),
Err(e) => return Err(e.into()),
}
let bak_path = find_backup_path(path)?;
std::fs::copy(path, &bak_path)?;
Ok(Some(bak_path))
}
#[cfg(test)]
mod tests {
use chrono::DateTime;
use rstest::rstest;
use super::*;
#[test]
fn test_item_from_value_string() -> Result<(), ShellError> {
let item = item_from_value(Value::string("foo", Span::unknown()))?;
assert_eq!(
item,
HistoryItem {
command_line: "foo".to_string(),
id: None,
start_timestamp: None,
session_id: None,
hostname: None,
cwd: None,
duration: None,
exit_status: None,
more_info: None
}
);
Ok(())
}
#[test]
fn test_item_from_value_record() {
let span = Span::unknown();
let rec = new_record(&[
("command", Value::string("foo", span)),
(
"start_timestamp",
Value::date(
DateTime::parse_from_rfc3339("1996-12-19T16:39:57-08:00").unwrap(),
span,
),
),
("hostname", Value::string("localhost", span)),
("cwd", Value::string("/home/test", span)),
("duration", Value::duration(100_000_000, span)),
("exit_status", Value::int(42, span)),
]);
let item = item_from_value(rec).unwrap();
assert_eq!(
item,
HistoryItem {
command_line: "foo".to_string(),
id: None,
start_timestamp: Some(
DateTime::parse_from_rfc3339("1996-12-19T16:39:57-08:00")
.unwrap()
.to_utc()
),
hostname: Some("localhost".to_string()),
cwd: Some("/home/test".to_string()),
duration: Some(std::time::Duration::from_nanos(100_000_000)),
exit_status: Some(42),
session_id: None,
more_info: None
}
);
}
#[test]
fn test_item_from_value_record_extra_field() {
let span = Span::unknown();
let rec = new_record(&[
("command_line", Value::string("foo", span)),
("id_nonexistent", Value::int(1, span)),
]);
assert!(item_from_value(rec).is_err());
}
#[test]
fn test_item_from_value_record_bad_type() {
let span = Span::unknown();
let rec = new_record(&[
("command_line", Value::string("foo", span)),
("id", Value::string("one".to_string(), span)),
]);
assert!(item_from_value(rec).is_err());
}
fn new_record(rec: &[(&'static str, Value)]) -> Value {
let span = Span::unknown();
let rec = Record::from_raw_cols_vals(
rec.iter().map(|(col, _)| col.to_string()).collect(),
rec.iter().map(|(_, val)| val.clone()).collect(),
span,
span,
)
.unwrap();
Value::record(rec, span)
}
#[rstest]
#[case::no_backup(&["history.dat"], "history.dat.bak")]
#[case::backup_exists(&["history.dat", "history.dat.bak"], "history.dat.bak.1")]
#[case::multiple_backups_exists( &["history.dat", "history.dat.bak", "history.dat.bak.1"], "history.dat.bak.2")]
fn test_find_backup_path(#[case] existing: &[&str], #[case] want: &str) {
let dir = tempfile::tempdir().unwrap();
for name in existing {
std::fs::File::create_new(dir.path().join(name)).unwrap();
}
let got = find_backup_path(&dir.path().join("history.dat")).unwrap();
assert_eq!(got, dir.path().join(want))
}
#[test]
fn test_backup() {
let dir = tempfile::tempdir().unwrap();
let mut history = std::fs::File::create_new(dir.path().join("history.dat")).unwrap();
use std::io::Write;
write!(&mut history, "123").unwrap();
let want_bak_path = dir.path().join("history.dat.bak");
assert_eq!(
backup(&dir.path().join("history.dat")),
Ok(Some(want_bak_path.clone()))
);
let got_data = String::from_utf8(std::fs::read(want_bak_path).unwrap()).unwrap();
assert_eq!(got_data, "123");
}
#[test]
fn test_backup_no_file() {
let dir = tempfile::tempdir().unwrap();
let bak_path = backup(&dir.path().join("history.dat")).unwrap();
assert!(bak_path.is_none());
}
}
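Taken together, the non-test code above reduces to: back up the current history file, open a source and a destination backend, and stream items across while `import` clears their IDs. A rough sketch of that wiring, assuming it lives in this module (the sqlite destination path below is hypothetical):

```rust
// Hypothetical wiring of the helpers above: copy every plaintext history item
// into a sqlite backend; `import` clears item IDs as it saves.
fn convert_plaintext_to_sqlite() -> Result<(), ShellError> {
    let src = new_backend(HistoryFileFormat::Plaintext, None)?;
    let mut dst = new_backend(
        HistoryFileFormat::Sqlite,
        Some(PathBuf::from("/tmp/history.sqlite3")), // hypothetical destination
    )?;
    let items = src
        .search(SearchQuery::everything(
            reedline::SearchDirection::Forward,
            None,
        ))
        .map_err(error_from_reedline)?
        .into_iter()
        .map(Ok);
    import(dst.as_mut(), items)
}
```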

View File

@ -1,5 +1,8 @@
mod fields;
mod history_;
mod history_import;
mod history_session;
pub use history_::History;
pub use history_import::HistoryImport;
pub use history_session::HistorySession;

View File

@ -7,7 +7,7 @@ mod keybindings_list;
mod keybindings_listen;
pub use commandline::{Commandline, CommandlineEdit, CommandlineGetCursor, CommandlineSetCursor};
pub use history::{History, HistorySession};
pub use history::{History, HistoryImport, HistorySession};
pub use keybindings::Keybindings;
pub use keybindings_default::KeybindingsDefault;
pub use keybindings_list::KeybindingsList;

View File

@ -1,5 +1,7 @@
use std::collections::HashMap;
use crate::{
completions::{Completer, CompletionOptions, MatchAlgorithm},
completions::{Completer, CompletionOptions},
SuggestionKind,
};
use nu_parser::FlatShape;
@ -9,7 +11,7 @@ use nu_protocol::{
};
use reedline::Suggestion;
use super::{completion_common::sort_suggestions, SemanticSuggestion};
use super::{completion_options::NuMatcher, SemanticSuggestion};
pub struct CommandCompletion {
flattened: Vec<(Span, FlatShape)>,
@ -33,13 +35,13 @@ impl CommandCompletion {
fn external_command_completion(
&self,
working_set: &StateWorkingSet,
prefix: &str,
match_algorithm: MatchAlgorithm,
) -> Vec<String> {
let mut executables = vec![];
sugg_span: reedline::Span,
matched_internal: impl Fn(&str) -> bool,
matcher: &mut NuMatcher<String>,
) -> HashMap<String, SemanticSuggestion> {
let mut suggs = HashMap::new();
// os agnostic way to get the PATH env var
let paths = working_set.permanent_state.get_path_env_var();
let paths = working_set.permanent_state.get_env_var_insensitive("path");
if let Some(paths) = paths {
if let Ok(paths) = paths.as_list() {
@ -54,24 +56,38 @@ impl CommandCompletion {
.completions
.external
.max_results
> executables.len() as i64
&& !executables.contains(
&item
.path()
.file_name()
.map(|x| x.to_string_lossy().to_string())
.unwrap_or_default(),
)
&& matches!(
item.path().file_name().map(|x| match_algorithm
.matches_str(&x.to_string_lossy(), prefix)),
Some(true)
)
&& is_executable::is_executable(item.path())
<= suggs.len() as i64
{
if let Ok(name) = item.file_name().into_string() {
executables.push(name);
}
break;
}
let Ok(name) = item.file_name().into_string() else {
continue;
};
let value = if matched_internal(&name) {
format!("^{}", name)
} else {
name.clone()
};
if suggs.contains_key(&value) {
continue;
}
if matcher.matches(&name) && is_executable::is_executable(item.path()) {
// If there's an internal command with the same name, add ^cmd to the
// matcher so that both the internal and the external command are included
matcher.add(&name, value.clone());
suggs.insert(
value.clone(),
SemanticSuggestion {
suggestion: Suggestion {
value,
span: sugg_span,
append_whitespace: true,
..Default::default()
},
// TODO: is there a way to create a test?
kind: None,
},
);
}
}
}
@ -79,7 +95,7 @@ impl CommandCompletion {
}
}
executables
suggs
}
fn complete_commands(
@ -88,68 +104,59 @@ impl CommandCompletion {
span: Span,
offset: usize,
find_externals: bool,
match_algorithm: MatchAlgorithm,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let partial = working_set.get_span_contents(span);
let mut matcher = NuMatcher::new(String::from_utf8_lossy(partial), options.clone());
let filter_predicate = |command: &[u8]| match_algorithm.matches_u8(command, partial);
let sugg_span = reedline::Span::new(span.start - offset, span.end - offset);
let mut results = working_set
.find_commands_by_predicate(filter_predicate, true)
.into_iter()
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(&x.0).to_string(),
description: x.1,
span: reedline::Span::new(span.start - offset, span.end - offset),
append_whitespace: true,
..Suggestion::default()
},
kind: Some(SuggestionKind::Command(x.2)),
})
.collect::<Vec<_>>();
let partial = working_set.get_span_contents(span);
let partial = String::from_utf8_lossy(partial).to_string();
if find_externals {
let results_external = self
.external_command_completion(working_set, &partial, match_algorithm)
.into_iter()
.map(move |x| SemanticSuggestion {
let mut internal_suggs = HashMap::new();
let filtered_commands = working_set.find_commands_by_predicate(
|name| {
let name = String::from_utf8_lossy(name);
matcher.add(&name, name.to_string())
},
true,
);
for (name, description, typ) in filtered_commands {
let name = String::from_utf8_lossy(&name);
internal_suggs.insert(
name.to_string(),
SemanticSuggestion {
suggestion: Suggestion {
value: x,
span: reedline::Span::new(span.start - offset, span.end - offset),
value: name.to_string(),
description,
span: sugg_span,
append_whitespace: true,
..Suggestion::default()
},
// TODO: is there a way to create a test?
kind: None,
});
let results_strings: Vec<String> =
results.iter().map(|x| x.suggestion.value.clone()).collect();
for external in results_external {
if results_strings.contains(&external.suggestion.value) {
results.push(SemanticSuggestion {
suggestion: Suggestion {
value: format!("^{}", external.suggestion.value),
span: external.suggestion.span,
append_whitespace: true,
..Suggestion::default()
},
kind: external.kind,
})
} else {
results.push(external)
}
}
results
} else {
results
kind: Some(SuggestionKind::Command(typ)),
},
);
}
let mut external_suggs = if find_externals {
self.external_command_completion(
working_set,
sugg_span,
|name| internal_suggs.contains_key(name),
&mut matcher,
)
} else {
HashMap::new()
};
let mut res = Vec::new();
for cmd_name in matcher.results() {
if let Some(sugg) = internal_suggs
.remove(&cmd_name)
.or_else(|| external_suggs.remove(&cmd_name))
{
res.push(sugg);
}
}
res
}
}
@ -158,7 +165,7 @@ impl Completer for CommandCompletion {
&mut self,
working_set: &StateWorkingSet,
_stack: &Stack,
prefix: &[u8],
_prefix: &[u8],
span: Span,
offset: usize,
pos: usize,
@ -188,18 +195,18 @@ impl Completer for CommandCompletion {
Span::new(last.0.start, pos),
offset,
false,
options.match_algorithm,
options,
)
} else {
vec![]
};
if !subcommands.is_empty() {
return sort_suggestions(&String::from_utf8_lossy(prefix), subcommands, options);
return subcommands;
}
let config = working_set.get_config();
let commands = if matches!(self.flat_shape, nu_parser::FlatShape::External)
if matches!(self.flat_shape, nu_parser::FlatShape::External)
|| matches!(self.flat_shape, nu_parser::FlatShape::InternalCall(_))
|| ((span.end - span.start) == 0)
|| is_passthrough_command(working_set.delta.get_file_contents())
@ -214,13 +221,11 @@ impl Completer for CommandCompletion {
span,
offset,
config.completions.external.enable,
options.match_algorithm,
options,
)
} else {
vec![]
};
sort_suggestions(&String::from_utf8_lossy(prefix), commands, options)
}
}
}

View File

@ -297,7 +297,7 @@ impl NuCompleter {
let mut completer =
OperatorCompletion::new(pipeline_element.expr.clone());
return self.process_completion(
let operator_suggestion = self.process_completion(
&mut completer,
&working_set,
prefix,
@ -305,6 +305,9 @@ impl NuCompleter {
fake_offset,
pos,
);
if !operator_suggestion.is_empty() {
return operator_suggestion;
}
}
}
}

View File

@ -1,22 +1,20 @@
use super::MatchAlgorithm;
use crate::{
completions::{matches, CompletionOptions},
SemanticSuggestion,
};
use fuzzy_matcher::{skim::SkimMatcherV2, FuzzyMatcher};
use super::{completion_options::NuMatcher, MatchAlgorithm};
use crate::completions::CompletionOptions;
use nu_ansi_term::Style;
use nu_engine::env_to_string;
use nu_path::dots::expand_ndots;
use nu_path::{expand_to_real_path, home_dir};
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
CompletionSort, Span,
Span,
};
use nu_utils::get_ls_colors;
use nu_utils::IgnoreCaseExt;
use std::path::{is_separator, Component, Path, PathBuf, MAIN_SEPARATOR as SEP};
#[derive(Clone, Default)]
pub struct PathBuiltFromString {
cwd: PathBuf,
parts: Vec<String>,
isdir: bool,
}
@ -30,76 +28,84 @@ pub struct PathBuiltFromString {
/// want_directory: Whether we want only directories as completion matches.
/// Some commands like `cd` can only be run on directories whereas others
/// like `ls` can be run on regular files as well.
pub fn complete_rec(
fn complete_rec(
partial: &[&str],
built: &PathBuiltFromString,
cwd: &Path,
built_paths: &[PathBuiltFromString],
options: &CompletionOptions,
want_directory: bool,
isdir: bool,
) -> Vec<PathBuiltFromString> {
let mut completions = vec![];
if let Some((&base, rest)) = partial.split_first() {
if base.chars().all(|c| c == '.') && (isdir || !rest.is_empty()) {
let mut built = built.clone();
built.parts.push(base.to_string());
built.isdir = true;
return complete_rec(rest, &built, cwd, options, want_directory, isdir);
}
}
let mut built_path = cwd.to_path_buf();
for part in &built.parts {
built_path.push(part);
}
let Ok(result) = built_path.read_dir() else {
return completions;
};
let mut entries = Vec::new();
for entry in result.filter_map(|e| e.ok()) {
let entry_name = entry.file_name().to_string_lossy().into_owned();
let entry_isdir = entry.path().is_dir();
let mut built = built.clone();
built.parts.push(entry_name.clone());
built.isdir = entry_isdir;
if !want_directory || entry_isdir {
entries.push((entry_name, built));
let built_paths: Vec<_> = built_paths
.iter()
.map(|built| {
let mut built = built.clone();
built.parts.push(base.to_string());
built.isdir = true;
built
})
.collect();
return complete_rec(rest, &built_paths, options, want_directory, isdir);
}
}
let prefix = partial.first().unwrap_or(&"");
let sorted_entries = sort_completions(prefix, entries, options, |(entry, _)| entry);
let mut matcher = NuMatcher::new(prefix, options.clone());
for (entry_name, built) in sorted_entries {
for built in built_paths {
let mut path = built.cwd.clone();
for part in &built.parts {
path.push(part);
}
let Ok(result) = path.read_dir() else {
continue;
};
for entry in result.filter_map(|e| e.ok()) {
let entry_name = entry.file_name().to_string_lossy().into_owned();
let entry_isdir = entry.path().is_dir();
let mut built = built.clone();
built.parts.push(entry_name.clone());
built.isdir = entry_isdir;
if !want_directory || entry_isdir {
matcher.add(entry_name.clone(), (entry_name, built));
}
}
}
let mut completions = vec![];
for (entry_name, built) in matcher.results() {
match partial.split_first() {
Some((base, rest)) => {
if matches(base, &entry_name, options) {
// We use `isdir` to confirm that the current component has
// at least one next component or a slash.
// Serves as confirmation to ignore longer completions for
// components in between.
if !rest.is_empty() || isdir {
completions.extend(complete_rec(
rest,
&built,
cwd,
options,
want_directory,
isdir,
));
} else {
completions.push(built);
}
// We use `isdir` to confirm that the current component has
// at least one next component or a slash.
// Serves as confirmation to ignore longer completions for
// components in between.
if !rest.is_empty() || isdir {
completions.extend(complete_rec(
rest,
&[built],
options,
want_directory,
isdir,
));
} else {
completions.push(built);
}
if entry_name.eq(base)
&& matches!(options.match_algorithm, MatchAlgorithm::Prefix)
&& isdir
{
break;
// For https://github.com/nushell/nushell/issues/13204
if isdir && options.match_algorithm == MatchAlgorithm::Prefix {
let exact_match = if options.case_sensitive {
entry_name.eq(base)
} else {
entry_name.to_folded_case().eq(&base.to_folded_case())
};
if exact_match {
break;
}
}
}
None => {
@ -147,15 +153,25 @@ fn surround_remove(partial: &str) -> String {
partial.to_string()
}
pub struct FileSuggestion {
pub span: nu_protocol::Span,
pub path: String,
pub style: Option<Style>,
pub cwd: PathBuf,
}
/// # Parameters
/// * `cwds` - A list of directories in which to search. The only reason this isn't a single string
/// is that dotnu_completions searches multiple directories at once
pub fn complete_item(
want_directory: bool,
span: nu_protocol::Span,
partial: &str,
cwd: &str,
cwds: &[impl AsRef<str>],
options: &CompletionOptions,
engine_state: &EngineState,
stack: &Stack,
) -> Vec<(nu_protocol::Span, String, Option<Style>)> {
) -> Vec<FileSuggestion> {
let cleaned_partial = surround_remove(partial);
let isdir = cleaned_partial.ends_with(is_separator);
let expanded_partial = expand_ndots(Path::new(&cleaned_partial));
@ -175,7 +191,10 @@ pub fn complete_item(
partial.push_str(&format!("{path_separator}."));
}
let cwd_pathbuf = Path::new(cwd).to_path_buf();
let cwd_pathbufs: Vec<_> = cwds
.iter()
.map(|cwd| Path::new(cwd.as_ref()).to_path_buf())
.collect();
let ls_colors = (engine_state.config.completions.use_ls_colors
&& engine_state.config.use_ansi_coloring)
.then(|| {
@ -186,7 +205,7 @@ pub fn complete_item(
get_ls_colors(ls_colors_env_str)
});
let mut cwd = cwd_pathbuf.clone();
let mut cwds = cwd_pathbufs.clone();
let mut prefix_len = 0;
let mut original_cwd = OriginalCwd::None;
@ -194,19 +213,21 @@ pub fn complete_item(
match components.peek().cloned() {
Some(c @ Component::Prefix(..)) => {
// windows only by definition
cwd = [c, Component::RootDir].iter().collect();
cwds = vec![[c, Component::RootDir].iter().collect()];
prefix_len = c.as_os_str().len();
original_cwd = OriginalCwd::Prefix(c.as_os_str().to_string_lossy().into_owned());
}
Some(c @ Component::RootDir) => {
// This is kind of a hack. When joining an empty string with the rest,
// we add the slash automagically
cwd = PathBuf::from(c.as_os_str());
cwds = vec![PathBuf::from(c.as_os_str())];
prefix_len = 1;
original_cwd = OriginalCwd::Prefix(String::new());
}
Some(Component::Normal(home)) if home.to_string_lossy() == "~" => {
cwd = home_dir().map(Into::into).unwrap_or(cwd_pathbuf);
cwds = home_dir()
.map(|dir| vec![dir.into()])
.unwrap_or(cwd_pathbufs);
prefix_len = 1;
original_cwd = OriginalCwd::Home;
}
@ -223,8 +244,14 @@ pub fn complete_item(
complete_rec(
partial.as_slice(),
&PathBuiltFromString::default(),
&cwd,
&cwds
.into_iter()
.map(|cwd| PathBuiltFromString {
cwd,
parts: Vec::new(),
isdir: false,
})
.collect::<Vec<_>>(),
options,
want_directory,
isdir,
@ -234,6 +261,7 @@ pub fn complete_item(
if should_collapse_dots {
p = collapse_ndots(p);
}
let cwd = p.cwd.clone();
let path = original_cwd.apply(p, path_separator);
let style = ls_colors.as_ref().map(|lsc| {
lsc.style_for_path_with_metadata(
@ -245,7 +273,12 @@ pub fn complete_item(
.map(lscolors::Style::to_nu_ansi_term_style)
.unwrap_or_default()
});
(span, escape_path(path, want_directory), style)
FileSuggestion {
span,
path: escape_path(path, want_directory),
style,
cwd,
}
})
.collect()
}
@ -310,45 +343,6 @@ pub fn adjust_if_intermediate(
}
}
/// Convenience function to sort suggestions using [`sort_completions`]
pub fn sort_suggestions(
prefix: &str,
items: Vec<SemanticSuggestion>,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
sort_completions(prefix, items, options, |it| &it.suggestion.value)
}
/// # Arguments
/// * `prefix` - What the user's typed, for sorting by fuzzy matcher score
pub fn sort_completions<T>(
prefix: &str,
mut items: Vec<T>,
options: &CompletionOptions,
get_value: fn(&T) -> &str,
) -> Vec<T> {
// Sort items
if options.sort == CompletionSort::Smart && options.match_algorithm == MatchAlgorithm::Fuzzy {
let mut matcher = SkimMatcherV2::default();
if options.case_sensitive {
matcher = matcher.respect_case();
} else {
matcher = matcher.ignore_case();
};
items.sort_unstable_by(|a, b| {
let a_str = get_value(a);
let b_str = get_value(b);
let a_score = matcher.fuzzy_match(a_str, prefix).unwrap_or_default();
let b_score = matcher.fuzzy_match(b_str, prefix).unwrap_or_default();
b_score.cmp(&a_score).then(a_str.cmp(b_str))
});
} else {
items.sort_unstable_by(|a, b| get_value(a).cmp(get_value(b)));
}
items
}
/// Collapse multiple ".." components into n-dots.
///
/// It performs the reverse operation of `expand_ndots`, collapsing sequences of ".." into n-dots,
@ -359,6 +353,7 @@ fn collapse_ndots(path: PathBuiltFromString) -> PathBuiltFromString {
let mut result = PathBuiltFromString {
parts: Vec::with_capacity(path.parts.len()),
isdir: path.isdir,
cwd: path.cwd,
};
let mut dot_count = 0;

View File

@ -1,7 +1,10 @@
use fuzzy_matcher::{skim::SkimMatcherV2, FuzzyMatcher};
use nu_parser::trim_quotes_str;
use nu_protocol::{CompletionAlgorithm, CompletionSort};
use std::fmt::Display;
use nu_utils::IgnoreCaseExt;
use std::{borrow::Cow, fmt::Display};
use super::SemanticSuggestion;
/// Describes how suggestions should be matched.
#[derive(Copy, Clone, Debug, PartialEq)]
@ -19,33 +22,154 @@ pub enum MatchAlgorithm {
Fuzzy,
}
impl MatchAlgorithm {
/// Returns whether the `needle` search text matches the given `haystack`.
pub fn matches_str(&self, haystack: &str, needle: &str) -> bool {
let haystack = trim_quotes_str(haystack);
let needle = trim_quotes_str(needle);
match *self {
MatchAlgorithm::Prefix => haystack.starts_with(needle),
pub struct NuMatcher<T> {
options: CompletionOptions,
needle: String,
state: State<T>,
}
enum State<T> {
Prefix {
/// Holds (haystack, item)
items: Vec<(String, T)>,
},
Fuzzy {
matcher: Box<SkimMatcherV2>,
/// Holds (haystack, item, score)
items: Vec<(String, T, i64)>,
},
}
/// Filters and sorts suggestions
impl<T> NuMatcher<T> {
/// # Arguments
///
/// * `needle` - The text to search for
pub fn new(needle: impl AsRef<str>, options: CompletionOptions) -> NuMatcher<T> {
let orig_needle = trim_quotes_str(needle.as_ref());
let lowercase_needle = if options.case_sensitive {
orig_needle.to_owned()
} else {
orig_needle.to_folded_case()
};
match options.match_algorithm {
MatchAlgorithm::Prefix => NuMatcher {
options,
needle: lowercase_needle,
state: State::Prefix { items: Vec::new() },
},
MatchAlgorithm::Fuzzy => {
let matcher = SkimMatcherV2::default();
matcher.fuzzy_match(haystack, needle).is_some()
let mut matcher = SkimMatcherV2::default();
if options.case_sensitive {
matcher = matcher.respect_case();
} else {
matcher = matcher.ignore_case();
};
NuMatcher {
options,
needle: orig_needle.to_owned(),
state: State::Fuzzy {
matcher: Box::new(matcher),
items: Vec::new(),
},
}
}
}
}
/// Returns whether the `needle` search text matches the given `haystack`.
pub fn matches_u8(&self, haystack: &[u8], needle: &[u8]) -> bool {
match *self {
MatchAlgorithm::Prefix => haystack.starts_with(needle),
MatchAlgorithm::Fuzzy => {
let haystack_str = String::from_utf8_lossy(haystack);
let needle_str = String::from_utf8_lossy(needle);
let matcher = SkimMatcherV2::default();
matcher.fuzzy_match(&haystack_str, &needle_str).is_some()
/// Returns whether or not the haystack matches the needle. If it does, `item` is added
/// to the list of matches (if given).
///
/// Helper to avoid code duplication between [NuMatcher::add] and [NuMatcher::matches].
fn matches_aux(&mut self, haystack: &str, item: Option<T>) -> bool {
let haystack = trim_quotes_str(haystack);
match &mut self.state {
State::Prefix { items } => {
let haystack_folded = if self.options.case_sensitive {
Cow::Borrowed(haystack)
} else {
Cow::Owned(haystack.to_folded_case())
};
let matches = if self.options.positional {
haystack_folded.starts_with(self.needle.as_str())
} else {
haystack_folded.contains(self.needle.as_str())
};
if matches {
if let Some(item) = item {
items.push((haystack.to_string(), item));
}
}
matches
}
State::Fuzzy { items, matcher } => {
let Some(score) = matcher.fuzzy_match(haystack, &self.needle) else {
return false;
};
if let Some(item) = item {
items.push((haystack.to_string(), item, score));
}
true
}
}
}
/// Add the given item if the given haystack matches the needle.
///
/// Returns whether the item was added.
pub fn add(&mut self, haystack: impl AsRef<str>, item: T) -> bool {
self.matches_aux(haystack.as_ref(), Some(item))
}
/// Returns whether the haystack matches the needle.
pub fn matches(&mut self, haystack: &str) -> bool {
self.matches_aux(haystack, None)
}
/// Get all the items that matched (sorted)
pub fn results(self) -> Vec<T> {
match self.state {
State::Prefix { mut items, .. } => {
items.sort_by(|(haystack1, _), (haystack2, _)| {
let cmp_sensitive = haystack1.cmp(haystack2);
if self.options.case_sensitive {
cmp_sensitive
} else {
haystack1
.to_folded_case()
.cmp(&haystack2.to_folded_case())
.then(cmp_sensitive)
}
});
items.into_iter().map(|(_, item)| item).collect::<Vec<_>>()
}
State::Fuzzy { mut items, .. } => {
match self.options.sort {
CompletionSort::Alphabetical => {
items.sort_by(|(haystack1, _, _), (haystack2, _, _)| {
haystack1.cmp(haystack2)
});
}
CompletionSort::Smart => {
items.sort_by(|(haystack1, _, score1), (haystack2, _, score2)| {
score2.cmp(score1).then(haystack1.cmp(haystack2))
});
}
}
items
.into_iter()
.map(|(_, item, _)| item)
.collect::<Vec<_>>()
}
}
}
}
impl NuMatcher<SemanticSuggestion> {
pub fn add_semantic_suggestion(&mut self, sugg: SemanticSuggestion) -> bool {
let value = sugg.suggestion.value.to_string();
self.add(value, sugg)
}
}
impl From<CompletionAlgorithm> for MatchAlgorithm {
@ -105,35 +229,49 @@ impl Default for CompletionOptions {
#[cfg(test)]
mod test {
use super::MatchAlgorithm;
use rstest::rstest;
#[test]
fn match_algorithm_prefix() {
let algorithm = MatchAlgorithm::Prefix;
use super::{CompletionOptions, MatchAlgorithm, NuMatcher};
assert!(algorithm.matches_str("example text", ""));
assert!(algorithm.matches_str("example text", "examp"));
assert!(!algorithm.matches_str("example text", "text"));
assert!(algorithm.matches_u8(&[1, 2, 3], &[]));
assert!(algorithm.matches_u8(&[1, 2, 3], &[1, 2]));
assert!(!algorithm.matches_u8(&[1, 2, 3], &[2, 3]));
#[rstest]
#[case(MatchAlgorithm::Prefix, "example text", "", true)]
#[case(MatchAlgorithm::Prefix, "example text", "examp", true)]
#[case(MatchAlgorithm::Prefix, "example text", "text", false)]
#[case(MatchAlgorithm::Fuzzy, "example text", "", true)]
#[case(MatchAlgorithm::Fuzzy, "example text", "examp", true)]
#[case(MatchAlgorithm::Fuzzy, "example text", "ext", true)]
#[case(MatchAlgorithm::Fuzzy, "example text", "mplxt", true)]
#[case(MatchAlgorithm::Fuzzy, "example text", "mpp", false)]
fn match_algorithm_simple(
#[case] match_algorithm: MatchAlgorithm,
#[case] haystack: &str,
#[case] needle: &str,
#[case] should_match: bool,
) {
let options = CompletionOptions {
match_algorithm,
..Default::default()
};
let mut matcher = NuMatcher::new(needle, options);
matcher.add(haystack, haystack);
if should_match {
assert_eq!(vec![haystack], matcher.results());
} else {
assert_ne!(vec![haystack], matcher.results());
}
}
#[test]
fn match_algorithm_fuzzy() {
let algorithm = MatchAlgorithm::Fuzzy;
assert!(algorithm.matches_str("example text", ""));
assert!(algorithm.matches_str("example text", "examp"));
assert!(algorithm.matches_str("example text", "ext"));
assert!(algorithm.matches_str("example text", "mplxt"));
assert!(!algorithm.matches_str("example text", "mpp"));
assert!(algorithm.matches_u8(&[1, 2, 3], &[]));
assert!(algorithm.matches_u8(&[1, 2, 3], &[1, 2]));
assert!(algorithm.matches_u8(&[1, 2, 3], &[2, 3]));
assert!(algorithm.matches_u8(&[1, 2, 3], &[1, 3]));
assert!(!algorithm.matches_u8(&[1, 2, 3], &[2, 2]));
fn match_algorithm_fuzzy_sort_score() {
let options = CompletionOptions {
match_algorithm: MatchAlgorithm::Fuzzy,
..Default::default()
};
let mut matcher = NuMatcher::new("fob", options);
for item in ["foo/bar", "fob", "foo bar"] {
matcher.add(item, item);
}
// Sort by score, then in alphabetical order
assert_eq!(vec!["fob", "foo bar", "foo/bar"], matcher.results());
}
}
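For orientation, here is a minimal sketch of how the completers in this PR drive the new `NuMatcher`: build it from the user's partial input, feed candidate haystacks through `add`, then consume it with `results` to get the filtered, sorted matches. The candidate strings are hypothetical, and the sketch assumes it sits inside the `nu-cli` crate (so the internal types are in scope) and that the default `CompletionOptions` still use prefix matching:

```rust
use crate::completions::{completion_options::NuMatcher, CompletionOptions};

// Hypothetical illustration of the add/results flow used by the completers above.
fn demo_matcher() {
    // Default options; assumed to use the prefix match algorithm.
    let options = CompletionOptions::default();
    let mut matcher = NuMatcher::new("his", options);
    for cmd in ["history", "history import", "help"] {
        // `add` keeps the item only if the haystack matches the needle.
        matcher.add(cmd, cmd.to_string());
    }
    // `results` consumes the matcher and returns the matches, already sorted.
    assert_eq!(
        matcher.results(),
        vec!["history".to_string(), "history import".to_string()]
    );
}
```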

View File

@ -1,18 +1,16 @@
use crate::completions::{
completer::map_value_completions, Completer, CompletionOptions, MatchAlgorithm,
SemanticSuggestion,
completer::map_value_completions, Completer, CompletionOptions, SemanticSuggestion,
};
use nu_engine::eval_call;
use nu_protocol::{
ast::{Argument, Call, Expr, Expression},
debugger::WithoutDebug,
engine::{Stack, StateWorkingSet},
CompletionSort, DeclId, PipelineData, Span, Type, Value,
DeclId, PipelineData, Span, Type, Value,
};
use nu_utils::IgnoreCaseExt;
use std::collections::HashMap;
use super::completion_common::sort_suggestions;
use super::completion_options::NuMatcher;
pub struct CustomCompletion {
stack: Stack,
@ -69,6 +67,7 @@ impl Completer for CustomCompletion {
);
let mut custom_completion_options = None;
let mut should_sort = true;
// Parse result
let suggestions = result
@ -86,10 +85,9 @@ impl Completer for CustomCompletion {
let options = val.get("options");
if let Some(Value::Record { val: options, .. }) = &options {
let should_sort = options
.get("sort")
.and_then(|val| val.as_bool().ok())
.unwrap_or(false);
if let Some(sort) = options.get("sort").and_then(|val| val.as_bool().ok()) {
should_sort = sort;
}
custom_completion_options = Some(CompletionOptions {
case_sensitive: options
@ -99,20 +97,16 @@ impl Completer for CustomCompletion {
positional: options
.get("positional")
.and_then(|val| val.as_bool().ok())
.unwrap_or(true),
.unwrap_or(completion_options.positional),
match_algorithm: match options.get("completion_algorithm") {
Some(option) => option
.coerce_string()
.ok()
.and_then(|option| option.try_into().ok())
.unwrap_or(MatchAlgorithm::Prefix),
.unwrap_or(completion_options.match_algorithm),
None => completion_options.match_algorithm,
},
sort: if should_sort {
CompletionSort::Alphabetical
} else {
CompletionSort::Smart
},
sort: completion_options.sort,
});
}
@ -123,41 +117,19 @@ impl Completer for CustomCompletion {
})
.unwrap_or_default();
let options = custom_completion_options
.as_ref()
.unwrap_or(completion_options);
let suggestions = filter(prefix, suggestions, options);
sort_suggestions(&String::from_utf8_lossy(prefix), suggestions, options)
let options = custom_completion_options.unwrap_or(completion_options.clone());
let mut matcher = NuMatcher::new(String::from_utf8_lossy(prefix), options);
if should_sort {
for sugg in suggestions {
matcher.add_semantic_suggestion(sugg);
}
matcher.results()
} else {
suggestions
.into_iter()
.filter(|sugg| matcher.matches(&sugg.suggestion.value))
.collect()
}
}
}
fn filter(
prefix: &[u8],
items: Vec<SemanticSuggestion>,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
items
.into_iter()
.filter(|it| match options.match_algorithm {
MatchAlgorithm::Prefix => match (options.case_sensitive, options.positional) {
(true, true) => it.suggestion.value.as_bytes().starts_with(prefix),
(true, false) => it
.suggestion
.value
.contains(std::str::from_utf8(prefix).unwrap_or("")),
(false, positional) => {
let value = it.suggestion.value.to_folded_case();
let prefix = std::str::from_utf8(prefix).unwrap_or("").to_folded_case();
if positional {
value.starts_with(&prefix)
} else {
value.contains(&prefix)
}
}
},
MatchAlgorithm::Fuzzy => options
.match_algorithm
.matches_u8(it.suggestion.value.as_bytes(), prefix),
})
.collect()
}

View File

@ -2,7 +2,6 @@ use crate::completions::{
completion_common::{adjust_if_intermediate, complete_item, AdjustView},
Completer, CompletionOptions,
};
use nu_ansi_term::Style;
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
Span,
@ -10,7 +9,7 @@ use nu_protocol::{
use reedline::Suggestion;
use std::path::Path;
use super::SemanticSuggestion;
use super::{completion_common::FileSuggestion, SemanticSuggestion};
#[derive(Clone, Default)]
pub struct DirectoryCompletion {}
@ -47,11 +46,11 @@ impl Completer for DirectoryCompletion {
.into_iter()
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
value: x.1,
style: x.2,
value: x.path,
style: x.style,
span: reedline::Span {
start: x.0.start - offset,
end: x.0.end - offset,
start: x.span.start - offset,
end: x.span.end - offset,
},
..Suggestion::default()
},
@ -92,6 +91,6 @@ pub fn directory_completion(
options: &CompletionOptions,
engine_state: &EngineState,
stack: &Stack,
) -> Vec<(nu_protocol::Span, String, Option<Style>)> {
complete_item(true, span, partial, cwd, options, engine_state, stack)
) -> Vec<FileSuggestion> {
complete_item(true, span, partial, &[cwd], options, engine_state, stack)
}

View File

@ -6,7 +6,7 @@ use nu_protocol::{
use reedline::Suggestion;
use std::path::{is_separator, Path, MAIN_SEPARATOR as SEP, MAIN_SEPARATOR_STR};
use super::{completion_common::sort_suggestions, SemanticSuggestion};
use super::SemanticSuggestion;
#[derive(Clone, Default)]
pub struct DotNuCompletion {}
@ -87,49 +87,44 @@ impl Completer for DotNuCompletion {
// Fetch the files, filtering for the ones that end with .nu,
// and transform them into suggestions
let output: Vec<SemanticSuggestion> = search_dirs
.into_iter()
.flat_map(|search_dir| {
let completions = file_path_completion(
span,
&partial,
&search_dir,
options,
working_set.permanent_state,
stack,
);
completions
.into_iter()
.filter(move |it| {
// Different base dir, so we list the .nu files or folders
if !is_current_folder {
it.1.ends_with(".nu") || it.1.ends_with(SEP)
} else {
// Lib dirs, so we filter only the .nu files or directory modules
if it.1.ends_with(SEP) {
Path::new(&search_dir).join(&it.1).join("mod.nu").exists()
} else {
it.1.ends_with(".nu")
}
}
})
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
value: x.1,
style: x.2,
span: reedline::Span {
start: x.0.start - offset,
end: x.0.end - offset,
},
append_whitespace: true,
..Suggestion::default()
},
// TODO????
kind: None,
})
})
.collect();
sort_suggestions(&prefix_str, output, options)
let completions = file_path_completion(
span,
&partial,
&search_dirs.iter().map(|d| d.as_str()).collect::<Vec<_>>(),
options,
working_set.permanent_state,
stack,
);
completions
.into_iter()
.filter(move |it| {
// Different base dir, so we list the .nu files or folders
if !is_current_folder {
it.path.ends_with(".nu") || it.path.ends_with(SEP)
} else {
// Lib dirs, so we filter only the .nu files or directory modules
if it.path.ends_with(SEP) {
Path::new(&it.cwd).join(&it.path).join("mod.nu").exists()
} else {
it.path.ends_with(".nu")
}
}
})
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
value: x.path,
style: x.style,
span: reedline::Span {
start: x.span.start - offset,
end: x.span.end - offset,
},
append_whitespace: true,
..Suggestion::default()
},
// TODO????
kind: None,
})
.collect::<Vec<_>>()
}
}
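The per-directory loop is gone: all search dirs now go into a single `file_path_completion` call, and the returned `FileSuggestion` carries the `cwd` it was found in, so the `mod.nu` check above still works. A minimal sketch of the new call shape, assuming the relevant types (`CompletionOptions`, `EngineState`, `Stack`, `FileSuggestion`) are in scope and with hypothetical directory names:

```rust
// Hypothetical helper mirroring the multi-directory call shape used above.
fn complete_dotnu_in_dirs(
    span: nu_protocol::Span,
    partial: &str,
    search_dirs: &[String], // e.g. the NU_LIB_DIRS entries
    options: &CompletionOptions,
    engine_state: &EngineState,
    stack: &Stack,
) -> Vec<FileSuggestion> {
    file_path_completion(
        span,
        partial,
        &search_dirs.iter().map(|d| d.as_str()).collect::<Vec<_>>(),
        options,
        engine_state,
        stack,
    )
}
```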

View File

@ -2,16 +2,14 @@ use crate::completions::{
completion_common::{adjust_if_intermediate, complete_item, AdjustView},
Completer, CompletionOptions,
};
use nu_ansi_term::Style;
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
Span,
};
use nu_utils::IgnoreCaseExt;
use reedline::Suggestion;
use std::path::Path;
use super::SemanticSuggestion;
use super::{completion_common::FileSuggestion, SemanticSuggestion};
#[derive(Clone, Default)]
pub struct FileCompletion {}
@ -44,7 +42,7 @@ impl Completer for FileCompletion {
readjusted,
span,
&prefix,
&working_set.permanent_state.current_work_dir(),
&[&working_set.permanent_state.current_work_dir()],
options,
working_set.permanent_state,
stack,
@ -52,11 +50,11 @@ impl Completer for FileCompletion {
.into_iter()
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
value: x.1,
style: x.2,
value: x.path,
style: x.style,
span: reedline::Span {
start: x.0.start - offset,
end: x.0.end - offset,
start: x.span.start - offset,
end: x.span.end - offset,
},
..Suggestion::default()
},
@ -95,21 +93,10 @@ impl Completer for FileCompletion {
pub fn file_path_completion(
span: nu_protocol::Span,
partial: &str,
cwd: &str,
cwds: &[impl AsRef<str>],
options: &CompletionOptions,
engine_state: &EngineState,
stack: &Stack,
) -> Vec<(nu_protocol::Span, String, Option<Style>)> {
complete_item(false, span, partial, cwd, options, engine_state, stack)
}
pub fn matches(partial: &str, from: &str, options: &CompletionOptions) -> bool {
// Check for case sensitive
if !options.case_sensitive {
return options
.match_algorithm
.matches_str(&from.to_folded_case(), &partial.to_folded_case());
}
options.match_algorithm.matches_str(from, partial)
) -> Vec<FileSuggestion> {
complete_item(false, span, partial, cwds, options, engine_state, stack)
}

View File

@ -1,4 +1,4 @@
use crate::completions::{completion_common::sort_suggestions, Completer, CompletionOptions};
use crate::completions::{completion_options::NuMatcher, Completer, CompletionOptions};
use nu_protocol::{
ast::{Expr, Expression},
engine::{Stack, StateWorkingSet},
@ -35,7 +35,7 @@ impl Completer for FlagCompletion {
let decl = working_set.get_decl(call.decl_id);
let sig = decl.signature();
let mut output = vec![];
let mut matcher = NuMatcher::new(String::from_utf8_lossy(prefix), options.clone());
for named in &sig.named {
let flag_desc = &named.desc;
@ -44,34 +44,7 @@ impl Completer for FlagCompletion {
short.encode_utf8(&mut named);
named.insert(0, b'-');
if options.match_algorithm.matches_u8(&named, prefix) {
output.push(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(&named).to_string(),
description: Some(flag_desc.to_string()),
span: reedline::Span {
start: span.start - offset,
end: span.end - offset,
},
append_whitespace: true,
..Suggestion::default()
},
// TODO????
kind: None,
});
}
}
if named.long.is_empty() {
continue;
}
let mut named = named.long.as_bytes().to_vec();
named.insert(0, b'-');
named.insert(0, b'-');
if options.match_algorithm.matches_u8(&named, prefix) {
output.push(SemanticSuggestion {
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(&named).to_string(),
description: Some(flag_desc.to_string()),
@ -86,9 +59,32 @@ impl Completer for FlagCompletion {
kind: None,
});
}
if named.long.is_empty() {
continue;
}
let mut named = named.long.as_bytes().to_vec();
named.insert(0, b'-');
named.insert(0, b'-');
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(&named).to_string(),
description: Some(flag_desc.to_string()),
span: reedline::Span {
start: span.start - offset,
end: span.end - offset,
},
append_whitespace: true,
..Suggestion::default()
},
// TODO????
kind: None,
});
}
return sort_suggestions(&String::from_utf8_lossy(prefix), output, options);
return matcher.results();
}
vec![]

View File

@ -18,7 +18,7 @@ pub use completion_options::{CompletionOptions, MatchAlgorithm};
pub use custom_completions::CustomCompletion;
pub use directory_completions::DirectoryCompletion;
pub use dotnu_completions::DotNuCompletion;
pub use file_completions::{file_path_completion, matches, FileCompletion};
pub use file_completions::{file_path_completion, FileCompletion};
pub use flag_completions::FlagCompletion;
pub use operator_completions::OperatorCompletion;
pub use variable_completions::VariableCompletion;

View File

@ -1,5 +1,5 @@
use crate::completions::{
Completer, CompletionOptions, MatchAlgorithm, SemanticSuggestion, SuggestionKind,
completion_options::NuMatcher, Completer, CompletionOptions, SemanticSuggestion, SuggestionKind,
};
use nu_protocol::{
ast::{Expr, Expression},
@ -28,7 +28,7 @@ impl Completer for OperatorCompletion {
span: Span,
offset: usize,
_pos: usize,
_options: &CompletionOptions,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
//Check if int, float, or string
let partial = std::str::from_utf8(working_set.get_span_contents(span)).unwrap_or("");
@ -60,17 +60,15 @@ impl Completer for OperatorCompletion {
("bit-shr", "Bitwise shift right"),
("in", "Is a member of (doesn't use regex)"),
("not-in", "Is not a member of (doesn't use regex)"),
(
"++",
"Appends two lists, a list and a value, two strings, or two binary values",
),
],
Expr::String(_) => vec![
("=~", "Contains regex match"),
("like", "Contains regex match"),
("!~", "Does not contain regex match"),
("not-like", "Does not contain regex match"),
(
"++",
"Appends two lists, a list and a value, two strings, or two binary values",
"Concatenates two lists, two strings, or two binary values",
),
("in", "Is a member of (doesn't use regex)"),
("not-in", "Is not a member of (doesn't use regex)"),
@ -93,10 +91,6 @@ impl Completer for OperatorCompletion {
("**", "Power of"),
("in", "Is a member of (doesn't use regex)"),
("not-in", "Is not a member of (doesn't use regex)"),
(
"++",
"Appends two lists, a list and a value, two strings, or two binary values",
),
],
Expr::Bool(_) => vec![
(
@ -111,15 +105,11 @@ impl Completer for OperatorCompletion {
("not", "Negates a value or expression"),
("in", "Is a member of (doesn't use regex)"),
("not-in", "Is not a member of (doesn't use regex)"),
(
"++",
"Appends two lists, a list and a value, two strings, or two binary values",
),
],
Expr::FullCellPath(path) => match path.head.expr {
Expr::List(_) => vec![(
"++",
"Appends two lists, a list and a value, two strings, or two binary values",
"Concatenates two lists, two strings, or two binary values",
)],
Expr::Var(id) => get_variable_completions(id, working_set),
_ => vec![],
@ -127,17 +117,12 @@ impl Completer for OperatorCompletion {
_ => vec![],
};
let match_algorithm = MatchAlgorithm::Prefix;
let input_fuzzy_search =
|(operator, _): &(&str, &str)| match_algorithm.matches_str(operator, partial);
possible_operations
.into_iter()
.filter(input_fuzzy_search)
.map(move |x| SemanticSuggestion {
let mut matcher = NuMatcher::new(partial, options.clone());
for (symbol, desc) in possible_operations.into_iter() {
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: x.0.to_string(),
description: Some(x.1.to_string()),
value: symbol.to_string(),
description: Some(desc.to_string()),
span: reedline::Span::new(span.start - offset, span.end - offset),
append_whitespace: true,
..Suggestion::default()
@ -145,8 +130,9 @@ impl Completer for OperatorCompletion {
kind: Some(SuggestionKind::Command(
nu_protocol::engine::CommandType::Builtin,
)),
})
.collect()
});
}
matcher.results()
}
}
@ -163,7 +149,7 @@ pub fn get_variable_completions<'a>(
Type::List(_) | Type::String | Type::Binary => vec![
(
"++=",
"Appends a list, a value, a string, or a binary value to a variable.",
"Concatenates two lists, two strings, or two binary values",
),
("=", "Assigns a value to a variable."),
],

View File

@ -1,6 +1,4 @@
use crate::completions::{
Completer, CompletionOptions, MatchAlgorithm, SemanticSuggestion, SuggestionKind,
};
use crate::completions::{Completer, CompletionOptions, SemanticSuggestion, SuggestionKind};
use nu_engine::{column::get_columns, eval_variable};
use nu_protocol::{
engine::{Stack, StateWorkingSet},
@ -9,7 +7,7 @@ use nu_protocol::{
use reedline::Suggestion;
use std::str;
use super::completion_common::sort_suggestions;
use super::completion_options::NuMatcher;
#[derive(Clone)]
pub struct VariableCompletion {
@ -33,7 +31,6 @@ impl Completer for VariableCompletion {
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let mut output = vec![];
let builtins = ["$nu", "$in", "$env"];
let var_str = std::str::from_utf8(&self.var_context.0).unwrap_or("");
let var_id = working_set.find_variable(&self.var_context.0);
@ -43,6 +40,7 @@ impl Completer for VariableCompletion {
};
let sublevels_count = self.var_context.1.len();
let prefix_str = String::from_utf8_lossy(prefix);
let mut matcher = NuMatcher::new(prefix_str, options.clone());
// Completions for the given variable
if !var_str.is_empty() {
@ -63,37 +61,25 @@ impl Completer for VariableCompletion {
if let Some(val) = env_vars.get(&target_var_str) {
for suggestion in nested_suggestions(val, &nested_levels, current_span) {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
suggestion.suggestion.value.as_bytes(),
prefix,
) {
output.push(suggestion);
}
matcher.add_semantic_suggestion(suggestion);
}
return sort_suggestions(&prefix_str, output, options);
return matcher.results();
}
} else {
// No nesting provided, return all env vars
for env_var in env_vars {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
env_var.0.as_bytes(),
prefix,
) {
output.push(SemanticSuggestion {
suggestion: Suggestion {
value: env_var.0,
span: current_span,
..Suggestion::default()
},
kind: Some(SuggestionKind::Type(env_var.1.get_type())),
});
}
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: env_var.0,
span: current_span,
..Suggestion::default()
},
kind: Some(SuggestionKind::Type(env_var.1.get_type())),
});
}
return sort_suggestions(&prefix_str, output, options);
return matcher.results();
}
}
@ -108,16 +94,10 @@ impl Completer for VariableCompletion {
) {
for suggestion in nested_suggestions(&nuval, &self.var_context.1, current_span)
{
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
suggestion.suggestion.value.as_bytes(),
prefix,
) {
output.push(suggestion);
}
matcher.add_semantic_suggestion(suggestion);
}
return sort_suggestions(&prefix_str, output, options);
return matcher.results();
}
}
@ -130,37 +110,25 @@ impl Completer for VariableCompletion {
if let Ok(value) = var {
for suggestion in nested_suggestions(&value, &self.var_context.1, current_span)
{
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
suggestion.suggestion.value.as_bytes(),
prefix,
) {
output.push(suggestion);
}
matcher.add_semantic_suggestion(suggestion);
}
return sort_suggestions(&prefix_str, output, options);
return matcher.results();
}
}
}
// Variable completion (e.g: $en<tab> to complete $env)
for builtin in builtins {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
builtin.as_bytes(),
prefix,
) {
output.push(SemanticSuggestion {
suggestion: Suggestion {
value: builtin.to_string(),
span: current_span,
..Suggestion::default()
},
// TODO is there a way to get the VarId to get the type???
kind: None,
});
}
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: builtin.to_string(),
span: current_span,
..Suggestion::default()
},
// TODO is there a way to get the VarId to get the type???
kind: None,
});
}
// TODO: The following can be refactored (see find_commands_by_predicate() used in
@ -170,40 +138,7 @@ impl Completer for VariableCompletion {
for scope_frame in working_set.delta.scope.iter().rev() {
for overlay_frame in scope_frame.active_overlays(&mut removed_overlays).rev() {
for v in &overlay_frame.vars {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
v.0,
prefix,
) {
output.push(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(v.0).to_string(),
span: current_span,
..Suggestion::default()
},
kind: Some(SuggestionKind::Type(
working_set.get_variable(*v.1).ty.clone(),
)),
});
}
}
}
}
// Permanent state vars
// for scope in &self.engine_state.scope {
for overlay_frame in working_set
.permanent_state
.active_overlays(&removed_overlays)
.rev()
{
for v in &overlay_frame.vars {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
v.0,
prefix,
) {
output.push(SemanticSuggestion {
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(v.0).to_string(),
span: current_span,
@ -217,11 +152,28 @@ impl Completer for VariableCompletion {
}
}
output = sort_suggestions(&prefix_str, output, options);
// Permanent state vars
// for scope in &self.engine_state.scope {
for overlay_frame in working_set
.permanent_state
.active_overlays(&removed_overlays)
.rev()
{
for v in &overlay_frame.vars {
matcher.add_semantic_suggestion(SemanticSuggestion {
suggestion: Suggestion {
value: String::from_utf8_lossy(v.0).to_string(),
span: current_span,
..Suggestion::default()
},
kind: Some(SuggestionKind::Type(
working_set.get_variable(*v.1).ty.clone(),
)),
});
}
}
output.dedup(); // TODO: Removes only consecutive duplicates, is it intended?
output
matcher.results()
}
}
@ -302,13 +254,3 @@ fn recursive_value(val: &Value, sublevels: &[Vec<u8>]) -> Result<Value, Span> {
Ok(val.clone())
}
}
impl MatchAlgorithm {
pub fn matches_u8_insensitive(&self, sensitive: bool, haystack: &[u8], needle: &[u8]) -> bool {
if sensitive {
self.matches_u8(haystack, needle)
} else {
self.matches_u8(&haystack.to_ascii_lowercase(), &needle.to_ascii_lowercase())
}
}
}

View File

@ -9,6 +9,8 @@ use nu_protocol::{
};
use std::sync::Arc;
use crate::util::print_pipeline;
#[derive(Default)]
pub struct EvaluateCommandsOpts {
pub table_mode: Option<Value>,
@ -72,7 +74,7 @@ pub fn evaluate_commands(
if let Some(err) = working_set.compile_errors.first() {
report_compile_error(&working_set, err);
// Not a fatal error, for now
std::process::exit(1);
}
(output, working_set.render())
@ -93,7 +95,7 @@ pub fn evaluate_commands(
t_mode.coerce_str()?.parse().unwrap_or_default();
}
pipeline.print(engine_state, stack, no_newline, false)?;
print_pipeline(engine_state, stack, pipeline, no_newline)?;
info!("evaluate {}:{}:{}", file!(), line!(), column!());

View File

@ -1,4 +1,4 @@
use crate::util::eval_source;
use crate::util::{eval_source, print_pipeline};
use log::{info, trace};
use nu_engine::{convert_env_values, eval_block};
use nu_parser::parse;
@ -89,7 +89,7 @@ pub fn evaluate_file(
if let Some(err) = working_set.compile_errors.first() {
report_compile_error(&working_set, err);
// Not a fatal error, for now
std::process::exit(1);
}
// Look for blocks whose name starts with "main" and replace it with the filename.
@ -119,7 +119,7 @@ pub fn evaluate_file(
};
// Print the pipeline output of the last command of the file.
pipeline.print(engine_state, stack, true, false)?;
print_pipeline(engine_state, stack, pipeline, true)?;
// Invoke the main command with arguments.
// Arguments containing whitespace are quoted, so they can be safely concatenated with whitespace.

View File

@ -65,8 +65,12 @@ Since this command has no output, there is no point in piping it with other comm
arg.into_pipeline_data()
.print_raw(engine_state, no_newline, to_stderr)?;
} else {
arg.into_pipeline_data()
.print(engine_state, stack, no_newline, to_stderr)?;
arg.into_pipeline_data().print_table(
engine_state,
stack,
no_newline,
to_stderr,
)?;
}
}
} else if !input.is_nothing() {
@ -78,7 +82,7 @@ Since this command has no output, there is no point in piping it with other comm
if raw {
input.print_raw(engine_state, no_newline, to_stderr)?;
} else {
input.print(engine_state, stack, no_newline, to_stderr)?;
input.print_table(engine_state, stack, no_newline, to_stderr)?;
}
}

View File

@ -1,5 +1,5 @@
use crate::NushellPrompt;
use log::trace;
use log::{trace, warn};
use nu_engine::ClosureEvalOnce;
use nu_protocol::{
engine::{EngineState, Stack},
@ -30,30 +30,21 @@ pub(crate) const TRANSIENT_PROMPT_MULTILINE_INDICATOR: &str =
pub(crate) const PRE_PROMPT_MARKER: &str = "\x1b]133;A\x1b\\";
pub(crate) const POST_PROMPT_MARKER: &str = "\x1b]133;B\x1b\\";
pub(crate) const PRE_EXECUTION_MARKER: &str = "\x1b]133;C\x1b\\";
#[allow(dead_code)]
pub(crate) const POST_EXECUTION_MARKER_PREFIX: &str = "\x1b]133;D;";
#[allow(dead_code)]
pub(crate) const POST_EXECUTION_MARKER_SUFFIX: &str = "\x1b\\";
// OSC633 is the same as OSC133 but specifically for VSCode
pub(crate) const VSCODE_PRE_PROMPT_MARKER: &str = "\x1b]633;A\x1b\\";
pub(crate) const VSCODE_POST_PROMPT_MARKER: &str = "\x1b]633;B\x1b\\";
#[allow(dead_code)]
pub(crate) const VSCODE_PRE_EXECUTION_MARKER: &str = "\x1b]633;C\x1b\\";
#[allow(dead_code)]
//"\x1b]633;D;{}\x1b\\"
pub(crate) const VSCODE_POST_EXECUTION_MARKER_PREFIX: &str = "\x1b]633;D;";
#[allow(dead_code)]
pub(crate) const VSCODE_POST_EXECUTION_MARKER_SUFFIX: &str = "\x1b\\";
#[allow(dead_code)]
//"\x1b]633;E;{}\x1b\\"
pub(crate) const VSCODE_COMMANDLINE_MARKER_PREFIX: &str = "\x1b]633;E;";
#[allow(dead_code)]
pub(crate) const VSCODE_COMMANDLINE_MARKER_SUFFIX: &str = "\x1b\\";
#[allow(dead_code)]
// "\x1b]633;P;Cwd={}\x1b\\"
pub(crate) const VSCODE_CWD_PROPERTY_MARKER_PREFIX: &str = "\x1b]633;P;Cwd=";
#[allow(dead_code)]
pub(crate) const VSCODE_CWD_PROPERTY_MARKER_SUFFIX: &str = "\x1b\\";
pub(crate) const RESET_APPLICATION_MODE: &str = "\x1b[?1l";
@ -89,8 +80,13 @@ fn get_prompt_string(
})
.and_then(|pipeline_data| {
let output = pipeline_data.collect_string("", config).ok();
let ansi_output = output.map(|mut x| {
// Always reset the color at the start of the right prompt
// to ensure there is no ansi bleed over
if x.is_empty() && prompt == PROMPT_COMMAND_RIGHT {
x.insert_str(0, "\x1b[0m")
};
output.map(|mut x| {
// Just remove the very last newline.
if x.ends_with('\n') {
x.pop();
@ -100,7 +96,11 @@ fn get_prompt_string(
x.pop();
}
x
})
});
// Let's keep this for debugging purposes with nu --log-level warn
warn!("{}:{}:{} {:?}", file!(), line!(), column!(), ansi_output);
ansi_output
})
}

View File

@ -858,10 +858,10 @@ fn add_parsed_keybinding(
c if c.starts_with('f') => c[1..]
.parse()
.ok()
.filter(|num| (1..=20).contains(num))
.filter(|num| (1..=35).contains(num))
.map(KeyCode::F)
.ok_or(ShellError::InvalidValue {
valid: "'f1', 'f2', ..., or 'f20'".into(),
valid: "'f1', 'f2', ..., or 'f35'".into(),
actual: format!("'{keycode}'"),
span: keybinding.keycode.span(),
})?,

View File

@ -16,7 +16,7 @@ use crate::{
use crossterm::cursor::SetCursorStyle;
use log::{error, trace, warn};
use miette::{ErrReport, IntoDiagnostic, Result};
use nu_cmd_base::{hook::eval_hook, util::get_editor};
use nu_cmd_base::util::get_editor;
use nu_color_config::StyleComputer;
#[allow(deprecated)]
use nu_engine::{convert_env_values, current_dir_str, env_to_strings};
@ -130,13 +130,8 @@ pub fn evaluate_repl(
// escape a few characters, as required by
// https://code.visualstudio.com/docs/terminal/shell-integration#_vs-code-custom-sequences-osc-633-st
let cmd_text = line_editor.current_buffer_contents().to_string();
let len = cmd_text.len();
let mut cmd_text_chars = cmd_text[0..len].chars();
let mut replaced_cmd_text = String::with_capacity(len);
while let Some(c) = unescape_for_vscode(&mut cmd_text_chars) {
replaced_cmd_text.push(c);
}
let replaced_cmd_text = escape_special_vscode_bytes(&cmd_text)?;
run_shell_integration_osc633(
engine_state,
@ -220,26 +215,41 @@ pub fn evaluate_repl(
Ok(())
}
fn unescape_for_vscode(text: &mut std::str::Chars) -> Option<char> {
match text.next() {
Some('\\') => match text.next() {
Some('0') => Some('\x00'), // NUL '\0' (null character)
Some('a') => Some('\x07'), // BEL '\a' (bell)
Some('b') => Some('\x08'), // BS '\b' (backspace)
Some('t') => Some('\x09'), // HT '\t' (horizontal tab)
Some('n') => Some('\x0a'), // LF '\n' (new line)
Some('v') => Some('\x0b'), // VT '\v' (vertical tab)
Some('f') => Some('\x0c'), // FF '\f' (form feed)
Some('r') => Some('\x0d'), // CR '\r' (carriage ret)
Some(';') => Some('\x3b'), // semi-colon
Some('\\') => Some('\x5c'), // backslash
Some('e') => Some('\x1b'), // escape
Some(c) => Some(c),
None => None,
},
Some(c) => Some(c),
None => None,
}
fn escape_special_vscode_bytes(input: &str) -> Result<String, ShellError> {
let bytes = input
.chars()
.flat_map(|c| {
let mut buf = [0; 4]; // Buffer to hold UTF-8 bytes of the character
let c_bytes = c.encode_utf8(&mut buf); // Get UTF-8 bytes for the character
if c_bytes.len() == 1 {
let byte = c_bytes.as_bytes()[0];
match byte {
// Escape bytes below 0x20
b if b < 0x20 => format!("\\x{:02X}", byte).into_bytes(),
// Escape semicolon as \x3B
b';' => "\\x3B".to_string().into_bytes(),
// Escape backslash as \\
b'\\' => "\\\\".to_string().into_bytes(),
// Otherwise, return the character unchanged
_ => vec![byte],
}
} else {
// pass through multi-byte characters unchanged
c_bytes.bytes().collect()
}
})
.collect();
String::from_utf8(bytes).map_err(|err| ShellError::CantConvert {
to_type: "string".to_string(),
from_type: "bytes".to_string(),
span: Span::unknown(),
help: Some(format!(
"Error {err}, Unable to convert {input} to escaped bytes"
)),
})
}
fn get_line_editor(engine_state: &mut EngineState, use_color: bool) -> Result<Reedline> {
@ -296,9 +306,6 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
if let Err(err) = engine_state.merge_env(&mut stack) {
report_shell_error(engine_state, &err);
}
// Check whether $env.NU_DISABLE_IR is set, so that the user can change it in the REPL
// Temporary while IR eval is optional
stack.use_ir = !stack.has_env_var(engine_state, "NU_DISABLE_IR");
perf!("merge env", start_time, use_color);
start_time = std::time::Instant::now();
@ -306,20 +313,26 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
perf!("reset signals", start_time, use_color);
start_time = std::time::Instant::now();
// Right before we start our prompt and take input from the user,
// fire the "pre_prompt" hook
if let Some(hook) = engine_state.get_config().hooks.pre_prompt.clone() {
if let Err(err) = eval_hook(engine_state, &mut stack, None, vec![], &hook, "pre_prompt") {
report_shell_error(engine_state, &err);
}
// Right before we start our prompt and take input from the user, fire the "pre_prompt" hook
if let Err(err) = hook::eval_hooks(
engine_state,
&mut stack,
vec![],
&engine_state.get_config().hooks.pre_prompt.clone(),
"pre_prompt",
) {
report_shell_error(engine_state, &err);
}
perf!("pre-prompt hook", start_time, use_color);
start_time = std::time::Instant::now();
// Next, check all the environment variables they ask for
// fire the "env_change" hook
let env_change = engine_state.get_config().hooks.env_change.clone();
if let Err(error) = hook::eval_env_change_hook(env_change, engine_state, &mut stack) {
if let Err(error) = hook::eval_env_change_hook(
&engine_state.get_config().hooks.env_change.clone(),
engine_state,
&mut stack,
) {
report_shell_error(engine_state, &error)
}
perf!("env-change hook", start_time, use_color);
@ -504,18 +517,17 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
// Right before we start running the code the user gave us, fire the `pre_execution`
// hook
if let Some(hook) = config.hooks.pre_execution.clone() {
{
// Set the REPL buffer to the current command for the "pre_execution" hook
let mut repl = engine_state.repl_state.lock().expect("repl state mutex");
repl.buffer = repl_cmd_line_text.to_string();
drop(repl);
if let Err(err) = eval_hook(
if let Err(err) = hook::eval_hooks(
engine_state,
&mut stack,
None,
vec![],
&hook,
&engine_state.get_config().hooks.pre_execution.clone(),
"pre_execution",
) {
report_shell_error(engine_state, &err);
@ -750,7 +762,7 @@ fn fill_in_result_related_history_metadata(
c.duration = Some(cmd_duration);
c.exit_status = stack
.get_env_var(engine_state, "LAST_EXIT_CODE")
.and_then(|e| e.as_i64().ok());
.and_then(|e| e.as_int().ok());
c
})
.into_diagnostic()?; // todo: don't stop repl if error here?
@ -1069,16 +1081,8 @@ fn run_shell_integration_osc633(
// escape a few things because this says so
// https://code.visualstudio.com/docs/terminal/shell-integration#_vs-code-custom-sequences-osc-633-st
let replaced_cmd_text: String = repl_cmd_line_text
.chars()
.map(|c| match c {
'\n' => '\x0a',
'\r' => '\x0d',
'\x1b' => '\x1b',
_ => c,
})
.collect();
let replaced_cmd_text =
escape_special_vscode_bytes(&repl_cmd_line_text).unwrap_or(repl_cmd_line_text);
//OSC 633 ; E ; <commandline> [; <nonce] ST - Explicitly set the command line with an optional nonce.
run_ansi_sequence(&format!(
@ -1245,7 +1249,7 @@ fn get_command_finished_marker(
) -> String {
let exit_code = stack
.get_env_var(engine_state, "LAST_EXIT_CODE")
.and_then(|e| e.as_i64().ok());
.and_then(|e| e.as_int().ok());
if shell_integration_osc633 {
if stack
@ -1356,10 +1360,9 @@ fn run_finaliziation_ansi_sequence(
// Absolute paths with a drive letter, like 'C:', 'D:\', 'E:\foo'
#[cfg(windows)]
static DRIVE_PATH_REGEX: once_cell::sync::Lazy<fancy_regex::Regex> =
once_cell::sync::Lazy::new(|| {
fancy_regex::Regex::new(r"^[a-zA-Z]:[/\\]?").expect("Internal error: regex creation")
});
static DRIVE_PATH_REGEX: std::sync::LazyLock<fancy_regex::Regex> = std::sync::LazyLock::new(|| {
fancy_regex::Regex::new(r"^[a-zA-Z]:[/\\]?").expect("Internal error: regex creation")
});
// A best-effort "does this string look kinda like a path?" function to determine whether to auto-cd
fn looks_like_path(orig: &str) -> bool {
@ -1421,7 +1424,7 @@ fn are_session_ids_in_sync() {
#[cfg(test)]
mod test_auto_cd {
use super::{do_auto_cd, parse_operation, ReplOperation};
use super::{do_auto_cd, escape_special_vscode_bytes, parse_operation, ReplOperation};
use nu_path::AbsolutePath;
use nu_protocol::engine::{EngineState, Stack};
use tempfile::tempdir;
@ -1571,4 +1574,43 @@ mod test_auto_cd {
let input = if cfg!(windows) { r"foo\" } else { "foo/" };
check(tempdir, input, dir);
}
#[test]
fn escape_vscode_semicolon_test() {
let input = r#"now;is"#;
let expected = r#"now\x3Bis"#;
let actual = escape_special_vscode_bytes(input).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn escape_vscode_backslash_test() {
let input = r#"now\is"#;
let expected = r#"now\\is"#;
let actual = escape_special_vscode_bytes(input).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn escape_vscode_linefeed_test() {
let input = "now\nis";
let expected = r#"now\x0Ais"#;
let actual = escape_special_vscode_bytes(input).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn escape_vscode_tab_null_cr_test() {
let input = "now\t\0\ris";
let expected = r#"now\x09\x00\x0Dis"#;
let actual = escape_special_vscode_bytes(input).unwrap();
assert_eq!(expected, actual);
}
#[test]
fn escape_vscode_multibyte_ok() {
let input = "now🍪is";
let actual = escape_special_vscode_bytes(input).unwrap();
assert_eq!(input, actual);
}
}

View File

@ -144,8 +144,6 @@ impl Highlighter for NuHighlighter {
}
FlatShape::Flag => add_colored_token(&shape.1, next_token),
FlatShape::Pipe => add_colored_token(&shape.1, next_token),
FlatShape::And => add_colored_token(&shape.1, next_token),
FlatShape::Or => add_colored_token(&shape.1, next_token),
FlatShape::Redirection => add_colored_token(&shape.1, next_token),
FlatShape::Custom(..) => add_colored_token(&shape.1, next_token),
FlatShape::MatchPattern => add_colored_token(&shape.1, next_token),

View File

@ -1,6 +1,8 @@
#![allow(clippy::byte_char_slices)]
use nu_cmd_base::hook::eval_hook;
use nu_engine::{eval_block, eval_block_with_early_return};
use nu_parser::{escape_quote_string, lex, parse, unescape_unquote_string, Token, TokenContents};
use nu_parser::{lex, parse, unescape_unquote_string, Token, TokenContents};
use nu_protocol::{
cli_error::report_compile_error,
debugger::WithoutDebug,
@ -10,7 +12,7 @@ use nu_protocol::{
};
#[cfg(windows)]
use nu_utils::enable_vt_processing;
use nu_utils::perf;
use nu_utils::{escape_quote_string, perf};
use std::path::Path;
// This will collect environment variables from std::env and adds them to a stack.
@ -201,6 +203,35 @@ fn gather_env_vars(
}
}
/// Print a pipeline with formatting applied based on the display_output hook.
///
/// This function should be preferred when printing values resulting from a completed evaluation.
/// For values printed as part of a command's execution, such as values printed by the `print` command,
/// the `PipelineData::print_table` function should be preferred instead as it is not config-dependent.
///
/// `no_newline` controls whether a trailing newline is appended to the output.
pub fn print_pipeline(
engine_state: &mut EngineState,
stack: &mut Stack,
pipeline: PipelineData,
no_newline: bool,
) -> Result<(), ShellError> {
if let Some(hook) = engine_state.get_config().hooks.display_output.clone() {
let pipeline = eval_hook(
engine_state,
stack,
Some(pipeline),
vec![],
&hook,
"display_output",
)?;
pipeline.print_raw(engine_state, no_newline, false)
} else {
// if display_output isn't set, we should still prefer to print with some formatting
pipeline.print_table(engine_state, stack, no_newline, false)
}
}
pub fn eval_source(
engine_state: &mut EngineState,
stack: &mut Stack,
@ -267,7 +298,7 @@ fn evaluate_source(
if let Some(err) = working_set.compile_errors.first() {
report_compile_error(&working_set, err);
// Not a fatal error, for now
return Ok(true);
}
(output, working_set.render())
@ -281,21 +312,8 @@ fn evaluate_source(
eval_block::<WithoutDebug>(engine_state, stack, &block, input)
}?;
if let PipelineData::ByteStream(..) = pipeline {
pipeline.print(engine_state, stack, false, false)
} else if let Some(hook) = engine_state.get_config().hooks.display_output.clone() {
let pipeline = eval_hook(
engine_state,
stack,
Some(pipeline),
vec![],
&hook,
"display_output",
)?;
pipeline.print(engine_state, stack, false, false)
} else {
pipeline.print(engine_state, stack, true, false)
}?;
let no_newline = matches!(&pipeline, &PipelineData::ByteStream(..));
print_pipeline(engine_state, stack, pipeline, no_newline)?;
Ok(false)
}
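
Config-wise, the hook that `print_pipeline` consults looks like the following; a minimal sketch, with the closure body purely illustrative:

```nu
# when set, completed pipelines are passed through this hook and printed raw;
# when unset, print_pipeline falls back to plain table formatting
$env.config.hooks.display_output = {|| table -e }
```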

View File

@ -0,0 +1,296 @@
use nu_protocol::HistoryFileFormat;
use nu_test_support::{nu, Outcome};
use reedline::{
FileBackedHistory, History, HistoryItem, HistoryItemId, ReedlineError, SearchQuery,
SqliteBackedHistory,
};
use rstest::rstest;
use tempfile::TempDir;
struct Test {
cfg_dir: TempDir,
}
impl Test {
fn new(history_format: &'static str) -> Self {
let cfg_dir = tempfile::Builder::new()
.prefix("history_import_test")
.tempdir()
.unwrap();
// Assigning to $env.config.history.file_format seems to work only in startup
// configuration.
std::fs::write(
cfg_dir.path().join("env.nu"),
format!("$env.config.history.file_format = {history_format:?}"),
)
.unwrap();
Self { cfg_dir }
}
fn nu(&self, cmd: impl AsRef<str>) -> Outcome {
let env = [(
"XDG_CONFIG_HOME".to_string(),
self.cfg_dir.path().to_str().unwrap().to_string(),
)];
let env_config = self.cfg_dir.path().join("env.nu");
nu!(envs: env, env_config: env_config, cmd.as_ref())
}
fn open_plaintext(&self) -> Result<FileBackedHistory, ReedlineError> {
FileBackedHistory::with_file(
100,
self.cfg_dir
.path()
.join("nushell")
.join(HistoryFileFormat::Plaintext.default_file_name()),
)
}
fn open_sqlite(&self) -> Result<SqliteBackedHistory, ReedlineError> {
SqliteBackedHistory::with_file(
self.cfg_dir
.path()
.join("nushell")
.join(HistoryFileFormat::Sqlite.default_file_name()),
None,
None,
)
}
fn open_backend(&self, format: HistoryFileFormat) -> Result<Box<dyn History>, ReedlineError> {
fn boxed(be: impl History + 'static) -> Box<dyn History> {
Box::new(be)
}
use HistoryFileFormat::*;
match format {
Plaintext => self.open_plaintext().map(boxed),
Sqlite => self.open_sqlite().map(boxed),
}
}
}
enum HistorySource {
Vec(Vec<HistoryItem>),
Command(&'static str),
}
struct TestCase {
dst_format: HistoryFileFormat,
dst_history: Vec<HistoryItem>,
src_history: HistorySource,
want_history: Vec<HistoryItem>,
}
const EMPTY_TEST_CASE: TestCase = TestCase {
dst_format: HistoryFileFormat::Plaintext,
dst_history: Vec::new(),
src_history: HistorySource::Vec(Vec::new()),
want_history: Vec::new(),
};
impl TestCase {
fn run(self) {
use HistoryFileFormat::*;
let test = Test::new(match self.dst_format {
Plaintext => "plaintext",
Sqlite => "sqlite",
});
save_all(
&mut *test.open_backend(self.dst_format).unwrap(),
self.dst_history,
)
.unwrap();
let outcome = match self.src_history {
HistorySource::Vec(src_history) => {
let src_format = match self.dst_format {
Plaintext => Sqlite,
Sqlite => Plaintext,
};
save_all(&mut *test.open_backend(src_format).unwrap(), src_history).unwrap();
test.nu("history import")
}
HistorySource::Command(cmd) => {
let mut cmd = cmd.to_string();
cmd.push_str(" | history import");
test.nu(cmd)
}
};
assert!(outcome.status.success());
let got = query_all(&*test.open_backend(self.dst_format).unwrap()).unwrap();
// Compare just the commands first, for readability.
fn commands_only(items: &[HistoryItem]) -> Vec<&str> {
items
.iter()
.map(|item| item.command_line.as_str())
.collect()
}
assert_eq!(commands_only(&got), commands_only(&self.want_history));
// If commands match, compare full items.
assert_eq!(got, self.want_history);
}
}
fn query_all(history: &dyn History) -> Result<Vec<HistoryItem>, ReedlineError> {
history.search(SearchQuery::everything(
reedline::SearchDirection::Forward,
None,
))
}
fn save_all(history: &mut dyn History, items: Vec<HistoryItem>) -> Result<(), ReedlineError> {
for item in items {
history.save(item)?;
}
Ok(())
}
const EMPTY_ITEM: HistoryItem = HistoryItem {
command_line: String::new(),
id: None,
start_timestamp: None,
session_id: None,
hostname: None,
cwd: None,
duration: None,
exit_status: None,
more_info: None,
};
#[test]
fn history_import_pipe_string() {
TestCase {
dst_format: HistoryFileFormat::Plaintext,
src_history: HistorySource::Command("echo bar"),
want_history: vec![HistoryItem {
id: Some(HistoryItemId::new(0)),
command_line: "bar".to_string(),
..EMPTY_ITEM
}],
..EMPTY_TEST_CASE
}
.run();
}
#[test]
fn history_import_pipe_record() {
TestCase {
dst_format: HistoryFileFormat::Sqlite,
src_history: HistorySource::Command("[[cwd command]; [/tmp some_command]]"),
want_history: vec![HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "some_command".to_string(),
cwd: Some("/tmp".to_string()),
..EMPTY_ITEM
}],
..EMPTY_TEST_CASE
}
.run();
}
#[test]
fn to_empty_plaintext() {
TestCase {
dst_format: HistoryFileFormat::Plaintext,
src_history: HistorySource::Vec(vec![
HistoryItem {
command_line: "foo".to_string(),
..EMPTY_ITEM
},
HistoryItem {
command_line: "bar".to_string(),
..EMPTY_ITEM
},
]),
want_history: vec![
HistoryItem {
id: Some(HistoryItemId::new(0)),
command_line: "foo".to_string(),
..EMPTY_ITEM
},
HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "bar".to_string(),
..EMPTY_ITEM
},
],
..EMPTY_TEST_CASE
}
.run()
}
#[test]
fn to_empty_sqlite() {
TestCase {
dst_format: HistoryFileFormat::Sqlite,
src_history: HistorySource::Vec(vec![
HistoryItem {
command_line: "foo".to_string(),
..EMPTY_ITEM
},
HistoryItem {
command_line: "bar".to_string(),
..EMPTY_ITEM
},
]),
want_history: vec![
HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "foo".to_string(),
..EMPTY_ITEM
},
HistoryItem {
id: Some(HistoryItemId::new(2)),
command_line: "bar".to_string(),
..EMPTY_ITEM
},
],
..EMPTY_TEST_CASE
}
.run()
}
#[rstest]
#[case::plaintext(HistoryFileFormat::Plaintext)]
#[case::sqlite(HistoryFileFormat::Sqlite)]
fn to_existing(#[case] dst_format: HistoryFileFormat) {
TestCase {
dst_format,
dst_history: vec![
HistoryItem {
id: Some(HistoryItemId::new(0)),
command_line: "original-1".to_string(),
..EMPTY_ITEM
},
HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "original-2".to_string(),
..EMPTY_ITEM
},
],
src_history: HistorySource::Vec(vec![HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "new".to_string(),
..EMPTY_ITEM
}]),
want_history: vec![
HistoryItem {
id: Some(HistoryItemId::new(0)),
command_line: "original-1".to_string(),
..EMPTY_ITEM
},
HistoryItem {
id: Some(HistoryItemId::new(1)),
command_line: "original-2".to_string(),
..EMPTY_ITEM
},
HistoryItem {
id: Some(HistoryItemId::new(2)),
command_line: "new".to_string(),
..EMPTY_ITEM
},
],
}
.run()
}

View File

@ -1,2 +1,3 @@
mod history_import;
mod keybindings_list;
mod nu_highlight;

View File

@ -88,6 +88,27 @@ fn completer_strings_with_options() -> NuCompleter {
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[fixture]
fn completer_strings_no_sort() -> NuCompleter {
// Create a new engine
let (_, _, mut engine, mut stack) = new_engine();
let command = r#"
def animals [] {
{
completions: ["zzzfoo", "foo", "not matched", "abcfoo" ],
options: {
completion_algorithm: "fuzzy",
sort: false,
}
}
}
def my-command [animal: string@animals] { print $animal }"#;
assert!(support::merge_input(command.as_bytes(), &mut engine, &mut stack).is_ok());
// Instantiate a new completer
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[fixture]
fn custom_completer() -> NuCompleter {
// Create a new engine
@ -210,6 +231,13 @@ fn customcompletions_case_insensitive(mut completer_strings_with_options: NuComp
match_suggestions(&expected, &suggestions);
}
#[rstest]
fn customcompletions_no_sort(mut completer_strings_no_sort: NuCompleter) {
let suggestions = completer_strings_no_sort.complete("my-command foo", 14);
let expected: Vec<String> = vec!["zzzfoo".into(), "foo".into(), "abcfoo".into()];
match_suggestions(&expected, &suggestions);
}
#[test]
fn dotnu_completions() {
// Create a new engine
@ -329,6 +357,39 @@ fn file_completions() {
// Match the results
match_suggestions(&expected_paths, &suggestions);
// Test completions for the current folder even with parts before the autocompletion
let target_dir = format!("cp somefile.txt {dir_str}{MAIN_SEPARATOR}");
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![
folder(dir.join("another")),
file(dir.join("custom_completion.nu")),
folder(dir.join("directory_completion")),
file(dir.join("nushell")),
folder(dir.join("test_a")),
folder(dir.join("test_b")),
file(dir.join(".hidden_file")),
folder(dir.join(".hidden_folder")),
];
#[cfg(windows)]
{
let separator = '/';
let target_dir = format!("cp somefile.txt {dir_str}{separator}");
let slash_suggestions = completer.complete(&target_dir, target_dir.len());
let expected_slash_paths: Vec<String> = expected_paths
.iter()
.map(|s| s.replace('\\', "/"))
.collect();
match_suggestions(&expected_slash_paths, &slash_suggestions);
}
// Match the results
match_suggestions(&expected_paths, &suggestions);
// Test completions for a file
let target_dir = format!("cp {}", folder(dir.join("another")));
let suggestions = completer.complete(&target_dir, target_dir.len());
@ -363,6 +424,75 @@ fn file_completions() {
match_suggestions(&expected_paths, &suggestions);
}
#[test]
fn custom_command_rest_any_args_file_completions() {
// Create a new engine
let (dir, dir_str, mut engine, mut stack) = new_engine();
let command = r#"def list [ ...args: any ] {}"#;
assert!(support::merge_input(command.as_bytes(), &mut engine, &mut stack).is_ok());
// Instantiate a new completer
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for the current folder
let target_dir = format!("list {dir_str}{MAIN_SEPARATOR}");
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![
folder(dir.join("another")),
file(dir.join("custom_completion.nu")),
folder(dir.join("directory_completion")),
file(dir.join("nushell")),
folder(dir.join("test_a")),
folder(dir.join("test_b")),
file(dir.join(".hidden_file")),
folder(dir.join(".hidden_folder")),
];
// Match the results
match_suggestions(&expected_paths, &suggestions);
// Test completions for the current folder even with parts before the autocompletion
let target_dir = format!("list somefile.txt {dir_str}{MAIN_SEPARATOR}");
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![
folder(dir.join("another")),
file(dir.join("custom_completion.nu")),
folder(dir.join("directory_completion")),
file(dir.join("nushell")),
folder(dir.join("test_a")),
folder(dir.join("test_b")),
file(dir.join(".hidden_file")),
folder(dir.join(".hidden_folder")),
];
// Match the results
match_suggestions(&expected_paths, &suggestions);
// Test completions for a file
let target_dir = format!("list {}", folder(dir.join("another")));
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![file(dir.join("another").join("newfile"))];
// Match the results
match_suggestions(&expected_paths, &suggestions);
// Test completions for hidden files
let target_dir = format!("list {}", file(dir.join(".hidden_folder").join(".")));
let suggestions = completer.complete(&target_dir, target_dir.len());
let expected_paths: Vec<String> =
vec![file(dir.join(".hidden_folder").join(".hidden_subfile"))];
// Match the results
match_suggestions(&expected_paths, &suggestions);
}
#[cfg(windows)]
#[test]
fn file_completions_with_mixed_separators() {
@ -890,8 +1020,8 @@ fn subcommand_completions(mut subcommand_completer: NuCompleter) {
match_suggestions(
&vec![
"foo bar".to_string(),
"foo aabcrr".to_string(),
"foo abaz".to_string(),
"foo aabcrr".to_string(),
],
&suggestions,
);
@ -955,8 +1085,8 @@ fn flag_completions() {
"--mime-type".into(),
"--short-names".into(),
"--threads".into(),
"-D".into(),
"-a".into(),
"-D".into(),
"-d".into(),
"-f".into(),
"-h".into(),
@ -1287,7 +1417,7 @@ fn variables_completions() {
assert_eq!(3, suggestions.len());
#[cfg(windows)]
let expected: Vec<String> = vec!["PWD".into(), "Path".into(), "TEST".into()];
let expected: Vec<String> = vec!["Path".into(), "PWD".into(), "TEST".into()];
#[cfg(not(windows))]
let expected: Vec<String> = vec!["PATH".into(), "PWD".into(), "TEST".into()];
@ -1576,6 +1706,23 @@ fn sort_fuzzy_completions_in_alphabetical_order(mut fuzzy_alpha_sort_completer:
);
}
#[test]
fn exact_match() {
let (dir, _, engine, stack) = new_partial_engine();
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = format!("open {}", folder(dir.join("pArTiAL")));
let suggestions = completer.complete(&target_dir, target_dir.len());
// Since it's an exact match, only 'partial' should be suggested, not
// 'partial-a' and the like. Implemented in #13302
match_suggestions(
&vec![file(dir.join("partial").join("hello.txt"))],
&suggestions,
);
}
#[ignore = "was reverted, still needs fixing"]
#[rstest]
fn alias_offset_bug_7648() {
@ -1612,13 +1759,3 @@ fn alias_offset_bug_7754() {
// This crashes before PR #7756
let _suggestions = completer.complete("ll -a | c", 9);
}
#[test]
fn get_path_env_var_8003() {
// Create a new engine
let (_, _, engine, _) = new_engine();
// Get the path env var in a platform agnostic way
let the_path = engine.get_path_env_var();
// Make sure it's not empty
assert!(the_path.is_some());
}

View File

@ -5,7 +5,7 @@ edition = "2021"
license = "MIT"
name = "nu-cmd-base"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-base"
version = "0.99.1"
version = "0.101.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -13,10 +13,10 @@ version = "0.99.1"
workspace = true
[dependencies]
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-parser = { path = "../nu-parser", version = "0.99.1" }
nu-path = { path = "../nu-path", version = "0.99.1" }
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.101.0", default-features = false }
nu-parser = { path = "../nu-parser", version = "0.101.0" }
nu-path = { path = "../nu-path", version = "0.101.0" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", default-features = false }
indexmap = { workspace = true }
miette = { workspace = true }

View File

@ -7,49 +7,55 @@ use nu_protocol::{
engine::{Closure, EngineState, Stack, StateWorkingSet},
PipelineData, PositionalArg, ShellError, Span, Type, Value, VarId,
};
use std::sync::Arc;
use std::{collections::HashMap, sync::Arc};
pub fn eval_env_change_hook(
env_change_hook: Option<Value>,
env_change_hook: &HashMap<String, Vec<Value>>,
engine_state: &mut EngineState,
stack: &mut Stack,
) -> Result<(), ShellError> {
if let Some(hook) = env_change_hook {
match hook {
Value::Record { val, .. } => {
for (env_name, hook_value) in &*val {
let before = engine_state.previous_env_vars.get(env_name);
let after = stack.get_env_var(engine_state, env_name);
if before != after {
let before = before.cloned().unwrap_or_default();
let after = after.cloned().unwrap_or_default();
for (env, hooks) in env_change_hook {
let before = engine_state.previous_env_vars.get(env);
let after = stack.get_env_var(engine_state, env);
if before != after {
let before = before.cloned().unwrap_or_default();
let after = after.cloned().unwrap_or_default();
eval_hook(
engine_state,
stack,
None,
vec![("$before".into(), before), ("$after".into(), after.clone())],
hook_value,
"env_change",
)?;
eval_hooks(
engine_state,
stack,
vec![("$before".into(), before), ("$after".into(), after.clone())],
hooks,
"env_change",
)?;
Arc::make_mut(&mut engine_state.previous_env_vars)
.insert(env_name.clone(), after);
}
}
}
x => {
return Err(ShellError::TypeMismatch {
err_message: "record for the 'env_change' hook".to_string(),
span: x.span(),
});
}
Arc::make_mut(&mut engine_state.previous_env_vars).insert(env.clone(), after);
}
}
Ok(())
}
pub fn eval_hooks(
engine_state: &mut EngineState,
stack: &mut Stack,
arguments: Vec<(String, Value)>,
hooks: &[Value],
hook_name: &str,
) -> Result<(), ShellError> {
for hook in hooks {
eval_hook(
engine_state,
stack,
None,
arguments.clone(),
hook,
&format!("{hook_name} list, recursive"),
)?;
}
Ok(())
}
pub fn eval_hook(
engine_state: &mut EngineState,
stack: &mut Stack,
@ -127,16 +133,7 @@ pub fn eval_hook(
}
}
Value::List { vals, .. } => {
for val in vals {
eval_hook(
engine_state,
stack,
None,
arguments.clone(),
val,
&format!("{hook_name} list, recursive"),
)?;
}
eval_hooks(engine_state, stack, arguments, vals, hook_name)?;
}
Value::Record { val, .. } => {
// Hooks can optionally be a record in this form:

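In config terms, `env_change` hooks are now read as a record keyed by environment variable name, each entry holding a list of hooks that `eval_hooks` runs in order. A sketch with an illustrative closure:

```nu
$env.config.hooks.env_change.PWD = [
    {|before, after| print $"PWD changed: ($before) -> ($after)" }
]
```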
View File

@ -78,10 +78,10 @@ pub fn get_editor(
get_editor_commandline(&config.buffer_editor, "$env.config.buffer_editor")
{
Ok(buff_editor)
} else if let Some(value) = env_vars.get("EDITOR") {
get_editor_commandline(value, "$env.EDITOR")
} else if let Some(value) = env_vars.get("VISUAL") {
get_editor_commandline(value, "$env.VISUAL")
} else if let Some(value) = env_vars.get("EDITOR") {
get_editor_commandline(value, "$env.EDITOR")
} else {
Err(ShellError::GenericError {
error: "No editor configured".into(),

View File

@ -5,7 +5,7 @@ edition = "2021"
license = "MIT"
name = "nu-cmd-extra"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-extra"
version = "0.99.1"
version = "0.101.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -16,13 +16,13 @@ bench = false
workspace = true
[dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-json = { version = "0.99.1", path = "../nu-json" }
nu-parser = { path = "../nu-parser", version = "0.99.1" }
nu-pretty-hex = { version = "0.99.1", path = "../nu-pretty-hex" }
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-utils = { path = "../nu-utils", version = "0.99.1" }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.101.0" }
nu-engine = { path = "../nu-engine", version = "0.101.0", default-features = false }
nu-json = { version = "0.101.0", path = "../nu-json" }
nu-parser = { path = "../nu-parser", version = "0.101.0" }
nu-pretty-hex = { version = "0.101.0", path = "../nu-pretty-hex" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", default-features = false }
nu-utils = { path = "../nu-utils", version = "0.101.0", default-features = false }
# Potential dependencies for extras
heck = { workspace = true }
@ -36,6 +36,6 @@ v_htmlescape = { workspace = true }
itertools = { workspace = true }
[dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.99.1" }
nu-command = { path = "../nu-command", version = "0.99.1" }
nu-test-support = { path = "../nu-test-support", version = "0.99.1" }
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.101.0" }
nu-command = { path = "../nu-command", version = "0.101.0" }
nu-test-support = { path = "../nu-test-support", version = "0.101.0" }

View File

@ -203,7 +203,7 @@ pub fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
Value::string(raw_string.trim(), span)
}
Value::Int { val, .. } => convert_to_smallest_number_type(*val, span),
Value::Filesize { val, .. } => convert_to_smallest_number_type(*val, span),
Value::Filesize { val, .. } => convert_to_smallest_number_type(val.get(), span),
Value::Duration { val, .. } => convert_to_smallest_number_type(*val, span),
Value::String { val, .. } => {
let raw_bytes = val.as_bytes();

View File

@ -66,7 +66,7 @@ fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
match input {
Value::Float { val, .. } => fmt_it_64(*val, span),
Value::Int { val, .. } => fmt_it(*val, span),
Value::Filesize { val, .. } => fmt_it(*val, span),
Value::Filesize { val, .. } => fmt_it(val.get(), span),
// Propagate errors by explicitly matching them before the final case.
Value::Error { .. } => input.clone(),
other => Value::error(

View File

@ -25,7 +25,7 @@ impl Command for EachWhile {
)])
.required(
"closure",
SyntaxShape::Closure(Some(vec![SyntaxShape::Any, SyntaxShape::Int])),
SyntaxShape::Closure(Some(vec![SyntaxShape::Any])),
"the closure to run",
)
.category(Category::Filters)

View File

@ -2,4 +2,4 @@ mod from;
mod to;
pub(crate) use from::url::FromUrl;
pub(crate) use to::html::ToHtml;
pub use to::html::ToHtml;

View File

@ -9,6 +9,7 @@ mod strings;
pub use bits::{
Bits, BitsAnd, BitsInto, BitsNot, BitsOr, BitsRol, BitsRor, BitsShl, BitsShr, BitsXor,
};
pub use formats::ToHtml;
pub use math::{MathArcCos, MathArcCosH, MathArcSin, MathArcSinH, MathArcTan, MathArcTanH};
pub use math::{MathCos, MathCosH, MathSin, MathSinH, MathTan, MathTanH};
pub use math::{MathExp, MathLn};
@ -54,7 +55,8 @@ pub fn add_extra_command_context(mut engine_state: EngineState) -> EngineState {
strings::str_::case::StrTitleCase
);
bind_command!(formats::ToHtml, formats::FromUrl);
bind_command!(ToHtml, formats::FromUrl);
// Bits
bind_command! {
Bits,

View File

@ -6,7 +6,7 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-lang"
edition = "2021"
license = "MIT"
name = "nu-cmd-lang"
version = "0.99.1"
version = "0.101.0"
[lib]
bench = false
@ -15,18 +15,29 @@ bench = false
workspace = true
[dependencies]
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-parser = { path = "../nu-parser", version = "0.99.1" }
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-utils = { path = "../nu-utils", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.101.0", default-features = false }
nu-parser = { path = "../nu-parser", version = "0.101.0" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", default-features = false }
nu-utils = { path = "../nu-utils", version = "0.101.0", default-features = false }
itertools = { workspace = true }
shadow-rs = { version = "0.35", default-features = false }
shadow-rs = { version = "0.37", default-features = false }
[build-dependencies]
shadow-rs = { version = "0.35", default-features = false }
shadow-rs = { version = "0.37", default-features = false }
[features]
default = ["os"]
os = [
"nu-engine/os",
"nu-protocol/os",
"nu-utils/os",
]
plugin = [
"nu-protocol/plugin",
"os",
]
mimalloc = []
trash-support = []
sqlite = []

View File

@ -1,12 +1,13 @@
use std::process::Command;
fn main() -> shadow_rs::SdResult<()> {
fn main() {
// Look up the current Git commit ourselves instead of relying on shadow_rs,
// because shadow_rs does it in a really slow-to-compile way (it builds libgit2)
let hash = get_git_hash().unwrap_or_default();
println!("cargo:rustc-env=NU_COMMIT_HASH={hash}");
shadow_rs::new()
shadow_rs::ShadowBuilder::builder()
.build()
.expect("shadow builder build should success");
}
fn get_git_hash() -> Option<String> {

View File

@ -169,6 +169,7 @@ fn run(
let origin = match stream.source() {
ByteStreamSource::Read(_) => "unknown",
ByteStreamSource::File(_) => "file",
#[cfg(feature = "os")]
ByteStreamSource::Child(_) => "external",
};

View File

@ -1,9 +1,8 @@
use nu_engine::{command_prelude::*, get_eval_block_with_early_return, redirect_env};
use nu_protocol::{
engine::Closure,
process::{ChildPipe, ChildProcess},
ByteStream, ByteStreamSource, OutDest,
};
#[cfg(feature = "os")]
use nu_protocol::process::{ChildPipe, ChildProcess};
use nu_protocol::{engine::Closure, ByteStream, ByteStreamSource, OutDest};
use std::{
io::{Cursor, Read},
thread,
@ -69,6 +68,33 @@ impl Command for Do {
let block: Closure = call.req(engine_state, caller_stack, 0)?;
let rest: Vec<Value> = call.rest(engine_state, caller_stack, 1)?;
let ignore_all_errors = call.has_flag(engine_state, caller_stack, "ignore-errors")?;
if call.has_flag(engine_state, caller_stack, "ignore-shell-errors")? {
nu_protocol::report_shell_warning(
engine_state,
&ShellError::GenericError {
error: "Deprecated option".into(),
msg: "`--ignore-shell-errors` is deprecated and will be removed in 0.102.0."
.into(),
span: Some(call.head),
help: Some("Please use the `--ignore-errors(-i)`".into()),
inner: vec![],
},
);
}
if call.has_flag(engine_state, caller_stack, "ignore-program-errors")? {
nu_protocol::report_shell_warning(
engine_state,
&ShellError::GenericError {
error: "Deprecated option".into(),
msg: "`--ignore-program-errors` is deprecated and will be removed in 0.102.0."
.into(),
span: Some(call.head),
help: Some("Please use the `--ignore-errors(-i)`".into()),
inner: vec![],
},
);
}
let ignore_shell_errors = ignore_all_errors
|| call.has_flag(engine_state, caller_stack, "ignore-shell-errors")?;
let ignore_program_errors = ignore_all_errors
@ -82,9 +108,6 @@ impl Command for Do {
bind_args_to(&mut callee_stack, &block.signature, rest, head)?;
let eval_block_with_early_return = get_eval_block_with_early_return(engine_state);
// Applies to all block evaluation once set true
callee_stack.use_ir = !caller_stack.has_env_var(engine_state, "NU_DISABLE_IR");
let result = eval_block_with_early_return(engine_state, &mut callee_stack, block, input);
if has_env {
@ -95,6 +118,13 @@ impl Command for Do {
match result {
Ok(PipelineData::ByteStream(stream, metadata)) if capture_errors => {
let span = stream.span();
#[cfg(not(feature = "os"))]
return Err(ShellError::DisabledOsSupport {
msg: "Cannot create a thread to receive stdout message.".to_string(),
span: Some(span),
});
#[cfg(feature = "os")]
match stream.into_child() {
Ok(mut child) => {
// Use a thread to receive stdout message.
@ -172,6 +202,7 @@ impl Command for Do {
OutDest::Pipe | OutDest::PipeSeparate | OutDest::Value
) =>
{
#[cfg(feature = "os")]
if let ByteStreamSource::Child(child) = stream.source_mut() {
child.ignore_error(true);
}
@ -211,16 +242,6 @@ impl Command for Do {
example: r#"do --ignore-errors { thisisnotarealcommand }"#,
result: None,
},
Example {
description: "Run the closure and ignore shell errors",
example: r#"do --ignore-shell-errors { thisisnotarealcommand }"#,
result: None,
},
Example {
description: "Run the closure and ignore external program errors",
example: r#"do --ignore-program-errors { nu --commands 'exit 1' }; echo "I'll still run""#,
result: None,
},
Example {
description: "Abort the pipeline if a program returns a non-zero exit code",
example: r#"do --capture-errors { nu --commands 'exit 1' } | myscarycommand"#,

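The deprecated flags keep working until 0.102.0 but now emit a warning; `--ignore-errors (-i)` covers both cases, as in these short usage sketches:

```nu
do --ignore-errors { thisisnotarealcommand }
do -i { nu --commands 'exit 1' }; print "I'll still run"
```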
View File

@ -35,6 +35,7 @@ impl Command for Ignore {
mut input: PipelineData,
) -> Result<PipelineData, ShellError> {
if let PipelineData::ByteStream(stream, _) = &mut input {
#[cfg(feature = "os")]
if let ByteStreamSource::Child(child) = stream.source_mut() {
child.ignore_error(true);
}

View File

@ -107,7 +107,7 @@ fn run_catch(
if let Some(catch) = catch {
stack.set_last_error(&error);
let error = error.into_value(span);
let error = error.into_value(&StateWorkingSet::new(engine_state), span);
let block = engine_state.get_block(catch.block_id);
// Put the error value in the positional closure var
if let Some(var) = block.signature.get_positional(0) {

View File

@ -98,15 +98,21 @@ This command is a parser keyword. For details, check:
engine_state.get_span_contents(import_pattern.head.span),
);
let maybe_file_path = find_in_dirs_env(
let maybe_file_path_or_dir = find_in_dirs_env(
&module_arg_str,
engine_state,
caller_stack,
get_dirs_var_from_call(caller_stack, call),
)?;
let maybe_parent = maybe_file_path
.as_ref()
.and_then(|path| path.parent().map(|p| p.to_path_buf()));
// module_arg_str may be a directory; in that case,
// find_in_dirs_env returns that directory.
let maybe_parent = maybe_file_path_or_dir.as_ref().and_then(|path| {
if path.is_dir() {
Some(path.to_path_buf())
} else {
path.parent().map(|p| p.to_path_buf())
}
});
let mut callee_stack = caller_stack
.gather_captures(engine_state, &block.captures)
@ -118,9 +124,15 @@ This command is a parser keyword. For details, check:
callee_stack.add_env_var("FILE_PWD".to_string(), file_pwd);
}
if let Some(file_path) = maybe_file_path {
let file_path = Value::string(file_path.to_string_lossy(), call.head);
callee_stack.add_env_var("CURRENT_FILE".to_string(), file_path);
if let Some(path) = maybe_file_path_or_dir {
let module_file_path = if path.is_dir() {
// the existence of `mod.nu` is verified at parse time,
// so it's safe to use it here.
Value::string(path.join("mod.nu").to_string_lossy(), call.head)
} else {
Value::string(path.to_string_lossy(), call.head)
};
callee_stack.add_env_var("CURRENT_FILE".to_string(), module_file_path);
}
let eval_block = get_eval_block(engine_state);

View File
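For a directory module, the upshot is that `$env.CURRENT_FILE` now points at its `mod.nu` while the module's code (such as its `export-env` block) is evaluated. A sketch assuming a hypothetical module directory `my_module/` containing a `mod.nu`:

```nu
use my_module/
# while my_module is being loaded:
#   $env.FILE_PWD     -> .../my_module
#   $env.CURRENT_FILE -> .../my_module/mod.nu
```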

@ -116,24 +116,30 @@ pub fn version(engine_state: &EngineState, span: Span) -> Result<PipelineData, S
Value::string(features_enabled().join(", "), span),
);
// Get a list of plugin names and versions if present
let installed_plugins = engine_state
.plugins()
.iter()
.map(|x| {
let name = x.identity().name();
if let Some(version) = x.metadata().and_then(|m| m.version) {
format!("{name} {version}")
} else {
name.into()
}
})
.collect::<Vec<_>>();
#[cfg(not(feature = "plugin"))]
let _ = engine_state;
record.push(
"installed_plugins",
Value::string(installed_plugins.join(", "), span),
);
#[cfg(feature = "plugin")]
{
// Get a list of plugin names and versions if present
let installed_plugins = engine_state
.plugins()
.iter()
.map(|x| {
let name = x.identity().name();
if let Some(version) = x.metadata().and_then(|m| m.version) {
format!("{name} {version}")
} else {
name.into()
}
})
.collect::<Vec<_>>();
record.push(
"installed_plugins",
Value::string(installed_plugins.join(", "), span),
);
}
Ok(Value::record(record, span).into_pipeline_data())
}

View File

@ -1,3 +1,4 @@
#![cfg_attr(not(feature = "os"), allow(unused))]
#![doc = include_str!("../README.md")]
mod core_commands;
mod default_context;

View File

@ -5,7 +5,7 @@ edition = "2021"
license = "MIT"
name = "nu-cmd-plugin"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-plugin"
version = "0.99.1"
version = "0.101.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -13,10 +13,10 @@ version = "0.99.1"
workspace = true
[dependencies]
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-path = { path = "../nu-path", version = "0.99.1" }
nu-protocol = { path = "../nu-protocol", version = "0.99.1", features = ["plugin"] }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.101.0" }
nu-path = { path = "../nu-path", version = "0.101.0" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", features = ["plugin"] }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.101.0" }
itertools = { workspace = true }

View File

@ -119,7 +119,7 @@ apparent the next time `nu` is next launched with that plugin registry file.
let metadata = interface.get_metadata()?;
let commands = interface.get_signature()?;
modify_plugin_file(engine_state, stack, call.head, custom_path, |contents| {
modify_plugin_file(engine_state, stack, call.head, &custom_path, |contents| {
// Update the file with the received metadata and signatures
let item = PluginRegistryItem::new(plugin.identity(), metadata, commands);
contents.upsert_plugin(item);

View File

@ -1,5 +1,8 @@
use itertools::Itertools;
use itertools::{EitherOrBoth, Itertools};
use nu_engine::command_prelude::*;
use nu_protocol::{IntoValue, PluginRegistryItemData};
use crate::util::read_plugin_file;
#[derive(Clone)]
pub struct PluginList;
@ -17,7 +20,7 @@ impl Command for PluginList {
[
("name".into(), Type::String),
("version".into(), Type::String),
("is_running".into(), Type::Bool),
("status".into(), Type::String),
("pid".into(), Type::Int),
("filename".into(), Type::String),
("shell".into(), Type::String),
@ -26,11 +29,54 @@ impl Command for PluginList {
.into(),
),
)
.named(
"plugin-config",
SyntaxShape::Filepath,
"Use a plugin registry file other than the one set in `$nu.plugin-path`",
None,
)
.switch(
"engine",
"Show info for plugins that are loaded into the engine only.",
Some('e'),
)
.switch(
"registry",
"Show info for plugins from the registry file only.",
Some('r'),
)
.category(Category::Plugin)
}
fn description(&self) -> &str {
"List installed plugins."
"List loaded and installed plugins."
}
fn extra_description(&self) -> &str {
r#"
The `status` column will contain one of the following values:
- `added`: The plugin is present in the plugin registry file, but not in
the engine.
- `loaded`: The plugin is present both in the plugin registry file and in
the engine, but is not running.
- `running`: The plugin is currently running, and the `pid` column should
contain its process ID.
- `modified`: The plugin state present in the plugin registry file is different
from the state in the engine.
- `removed`: The plugin is still loaded in the engine, but is not present in
the plugin registry file.
- `invalid`: The data in the plugin registry file couldn't be deserialized,
and the plugin most likely needs to be added again.
`running` takes priority over any other status. Unless `--registry` is used
or the plugin has not been loaded yet, the values of `version`, `filename`,
`shell`, and `commands` reflect the values in the engine and not the ones in
the plugin registry file.
See also: `plugin use`
"#
.trim()
}
fn search_terms(&self) -> Vec<&str> {
@ -45,7 +91,7 @@ impl Command for PluginList {
result: Some(Value::test_list(vec![Value::test_record(record! {
"name" => Value::test_string("inc"),
"version" => Value::test_string(env!("CARGO_PKG_VERSION")),
"is_running" => Value::test_bool(true),
"status" => Value::test_string("running"),
"pid" => Value::test_int(106480),
"filename" => if cfg!(windows) {
Value::test_string(r"C:\nu\plugins\nu_plugin_inc.exe")
@ -67,58 +113,189 @@ impl Command for PluginList {
fn run(
&self,
engine_state: &EngineState,
_stack: &mut Stack,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let head = call.head;
let custom_path = call.get_flag(engine_state, stack, "plugin-config")?;
let engine_mode = call.has_flag(engine_state, stack, "engine")?;
let registry_mode = call.has_flag(engine_state, stack, "registry")?;
// Group plugin decls by plugin identity
let decls = engine_state.plugin_decls().into_group_map_by(|decl| {
decl.plugin_identity()
.expect("plugin decl should have identity")
});
let plugins_info = match (engine_mode, registry_mode) {
// --engine and --registry together is equivalent to the default.
(false, false) | (true, true) => {
if engine_state.plugin_path.is_some() || custom_path.is_some() {
let plugins_in_engine = get_plugins_in_engine(engine_state);
let plugins_in_registry =
get_plugins_in_registry(engine_state, stack, call.head, &custom_path)?;
merge_plugin_info(plugins_in_engine, plugins_in_registry)
} else {
// Don't produce error when running nu --no-config-file
get_plugins_in_engine(engine_state)
}
}
(true, false) => get_plugins_in_engine(engine_state),
(false, true) => get_plugins_in_registry(engine_state, stack, call.head, &custom_path)?,
};
// Build plugins list
let list = engine_state.plugins().iter().map(|plugin| {
// Find commands that belong to the plugin
let commands = decls.get(plugin.identity())
.into_iter()
.flat_map(|decls| {
decls.iter().map(|decl| Value::string(decl.name(), head))
})
.collect();
let pid = plugin
.pid()
.map(|p| Value::int(p as i64, head))
.unwrap_or(Value::nothing(head));
let shell = plugin
.identity()
.shell()
.map(|s| Value::string(s.to_string_lossy(), head))
.unwrap_or(Value::nothing(head));
let metadata = plugin.metadata();
let version = metadata
.and_then(|m| m.version)
.map(|s| Value::string(s, head))
.unwrap_or(Value::nothing(head));
let record = record! {
"name" => Value::string(plugin.identity().name(), head),
"version" => version,
"is_running" => Value::bool(plugin.is_running(), head),
"pid" => pid,
"filename" => Value::string(plugin.identity().filename().to_string_lossy(), head),
"shell" => shell,
"commands" => Value::list(commands, head),
};
Value::record(record, head)
}).collect();
Ok(Value::list(list, head).into_pipeline_data())
Ok(plugins_info.into_value(call.head).into_pipeline_data())
}
}
#[derive(Debug, Clone, IntoValue, PartialOrd, Ord, PartialEq, Eq)]
struct PluginInfo {
name: String,
version: Option<String>,
status: PluginStatus,
pid: Option<u32>,
filename: String,
shell: Option<String>,
commands: Vec<String>,
}
#[derive(Debug, Clone, Copy, IntoValue, PartialOrd, Ord, PartialEq, Eq)]
#[nu_value(rename_all = "snake_case")]
enum PluginStatus {
Added,
Loaded,
Running,
Modified,
Removed,
Invalid,
}
fn get_plugins_in_engine(engine_state: &EngineState) -> Vec<PluginInfo> {
// Group plugin decls by plugin identity
let decls = engine_state.plugin_decls().into_group_map_by(|decl| {
decl.plugin_identity()
.expect("plugin decl should have identity")
});
// Build plugins list
engine_state
.plugins()
.iter()
.map(|plugin| {
// Find commands that belong to the plugin
let commands = decls
.get(plugin.identity())
.into_iter()
.flat_map(|decls| decls.iter().map(|decl| decl.name().to_owned()))
.sorted()
.collect();
PluginInfo {
name: plugin.identity().name().into(),
version: plugin.metadata().and_then(|m| m.version),
status: if plugin.pid().is_some() {
PluginStatus::Running
} else {
PluginStatus::Loaded
},
pid: plugin.pid(),
filename: plugin.identity().filename().to_string_lossy().into_owned(),
shell: plugin
.identity()
.shell()
.map(|path| path.to_string_lossy().into_owned()),
commands,
}
})
.sorted()
.collect()
}
fn get_plugins_in_registry(
engine_state: &EngineState,
stack: &mut Stack,
span: Span,
custom_path: &Option<Spanned<String>>,
) -> Result<Vec<PluginInfo>, ShellError> {
let plugin_file_contents = read_plugin_file(engine_state, stack, span, custom_path)?;
let plugins_info = plugin_file_contents
.plugins
.into_iter()
.map(|plugin| {
let mut info = PluginInfo {
name: plugin.name,
version: None,
status: PluginStatus::Added,
pid: None,
filename: plugin.filename.to_string_lossy().into_owned(),
shell: plugin.shell.map(|path| path.to_string_lossy().into_owned()),
commands: vec![],
};
if let PluginRegistryItemData::Valid { metadata, commands } = plugin.data {
info.version = metadata.version;
info.commands = commands
.into_iter()
.map(|command| command.sig.name)
.sorted()
.collect();
} else {
info.status = PluginStatus::Invalid;
}
info
})
.sorted()
.collect();
Ok(plugins_info)
}
/// If no options are provided, the command loads from both the plugin list in the engine and what's
/// in the registry file. We need to reconcile the two to set the proper states and make sure that
/// new plugins that were added to the plugin registry file show up.
fn merge_plugin_info(
from_engine: Vec<PluginInfo>,
from_registry: Vec<PluginInfo>,
) -> Vec<PluginInfo> {
from_engine
.into_iter()
.merge_join_by(from_registry, |info_a, info_b| {
info_a.name.cmp(&info_b.name)
})
.map(|either_or_both| match either_or_both {
// Exists in the engine, but not in the registry file
EitherOrBoth::Left(info) => PluginInfo {
status: match info.status {
PluginStatus::Running => info.status,
// The plugin is not in the registry file, so it should be marked as `removed`
_ => PluginStatus::Removed,
},
..info
},
// Exists in the registry file, but not in the engine
EitherOrBoth::Right(info) => info,
// Exists in both
EitherOrBoth::Both(info_engine, info_registry) => PluginInfo {
status: match (info_engine.status, info_registry.status) {
// Above all, `running` should be displayed if the plugin is running
(PluginStatus::Running, _) => PluginStatus::Running,
// `invalid` takes precedence over other states because the user probably wants
// to fix it
(_, PluginStatus::Invalid) => PluginStatus::Invalid,
// Display `modified` if the state in the registry is different somehow
_ if info_engine.is_modified(&info_registry) => PluginStatus::Modified,
// Otherwise, `loaded` (it's not running)
_ => PluginStatus::Loaded,
},
..info_engine
},
})
.sorted()
.collect()
}
impl PluginInfo {
/// True if the plugin info shows some kind of change (other than status/pid) relative to the
/// other
fn is_modified(&self, other: &PluginInfo) -> bool {
self.name != other.name
|| self.filename != other.filename
|| self.shell != other.shell
|| self.commands != other.commands
}
}
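
A few usage sketches built from the flags and the new `status` column introduced above:

```nu
plugin list                        # merged view: engine state plus registry file
plugin list --engine               # only plugins currently loaded into the engine
plugin list --registry             # only plugins recorded in the registry file
plugin list | where status == "running"
```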

View File

@ -87,7 +87,7 @@ fixed with `plugin add`.
let filename = canonicalize_possible_filename_arg(engine_state, stack, &name.item);
modify_plugin_file(engine_state, stack, call.head, custom_path, |contents| {
modify_plugin_file(engine_state, stack, call.head, &custom_path, |contents| {
if let Some(index) = contents
.plugins
.iter()

View File

@ -6,18 +6,17 @@ use std::{
path::PathBuf,
};
pub(crate) fn modify_plugin_file(
fn get_plugin_registry_file_path(
engine_state: &EngineState,
stack: &mut Stack,
span: Span,
custom_path: Option<Spanned<String>>,
operate: impl FnOnce(&mut PluginRegistryFile) -> Result<(), ShellError>,
) -> Result<(), ShellError> {
custom_path: &Option<Spanned<String>>,
) -> Result<PathBuf, ShellError> {
#[allow(deprecated)]
let cwd = current_dir(engine_state, stack)?;
let plugin_registry_file_path = if let Some(ref custom_path) = custom_path {
nu_path::expand_path_with(&custom_path.item, cwd, true)
if let Some(ref custom_path) = custom_path {
Ok(nu_path::expand_path_with(&custom_path.item, cwd, true))
} else {
engine_state
.plugin_path
@ -28,8 +27,53 @@ pub(crate) fn modify_plugin_file(
span: Some(span),
help: Some("you may be running `nu` with --no-config-file".into()),
inner: vec![],
})?
};
})
}
}
pub(crate) fn read_plugin_file(
engine_state: &EngineState,
stack: &mut Stack,
span: Span,
custom_path: &Option<Spanned<String>>,
) -> Result<PluginRegistryFile, ShellError> {
let plugin_registry_file_path =
get_plugin_registry_file_path(engine_state, stack, span, custom_path)?;
let file_span = custom_path.as_ref().map(|p| p.span).unwrap_or(span);
// Try to read the plugin file if it exists
if fs::metadata(&plugin_registry_file_path).is_ok_and(|m| m.len() > 0) {
PluginRegistryFile::read_from(
File::open(&plugin_registry_file_path).map_err(|err| ShellError::IOErrorSpanned {
msg: format!(
"failed to read `{}`: {}",
plugin_registry_file_path.display(),
err
),
span: file_span,
})?,
Some(file_span),
)
} else if let Some(path) = custom_path {
Err(ShellError::FileNotFound {
file: path.item.clone(),
span: path.span,
})
} else {
Ok(PluginRegistryFile::default())
}
}
pub(crate) fn modify_plugin_file(
engine_state: &EngineState,
stack: &mut Stack,
span: Span,
custom_path: &Option<Spanned<String>>,
operate: impl FnOnce(&mut PluginRegistryFile) -> Result<(), ShellError>,
) -> Result<(), ShellError> {
let plugin_registry_file_path =
get_plugin_registry_file_path(engine_state, stack, span, custom_path)?;
let file_span = custom_path.as_ref().map(|p| p.span).unwrap_or(span);
@ -91,18 +135,24 @@ pub(crate) fn get_plugin_dirs(
engine_state: &EngineState,
stack: &Stack,
) -> impl Iterator<Item = String> {
// Get the NU_PLUGIN_DIRS constant or env var
// Get the NU_PLUGIN_DIRS from the constant and/or env var
let working_set = StateWorkingSet::new(engine_state);
let value = working_set
let dirs_from_const = working_set
.find_variable(b"$NU_PLUGIN_DIRS")
.and_then(|var_id| working_set.get_constant(var_id).ok())
.or_else(|| stack.get_env_var(engine_state, "NU_PLUGIN_DIRS"))
.cloned(); // TODO: avoid this clone
// Get all of the strings in the list, if possible
value
.cloned() // TODO: avoid this clone
.into_iter()
.flat_map(|value| value.into_list().ok())
.flatten()
.flat_map(|list_item| list_item.coerce_into_string().ok())
.flat_map(|list_item| list_item.coerce_into_string().ok());
let dirs_from_env = stack
.get_env_var(engine_state, "NU_PLUGIN_DIRS")
.cloned() // TODO: avoid this clone
.into_iter()
.flat_map(|value| value.into_list().ok())
.flatten()
.flat_map(|list_item| list_item.coerce_into_string().ok());
dirs_from_const.chain(dirs_from_env)
}
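
Both sources are now honored, constant first and then the environment variable; a sketch with hypothetical directories:

```nu
const NU_PLUGIN_DIRS = ['~/.config/nushell/plugins']    # hypothetical path
$env.NU_PLUGIN_DIRS = ['/opt/nu-plugins']                # hypothetical path
```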

View File

@ -5,7 +5,7 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-color-confi
edition = "2021"
license = "MIT"
name = "nu-color-config"
version = "0.99.1"
version = "0.101.0"
[lib]
bench = false
@ -14,12 +14,12 @@ bench = false
workspace = true
[dependencies]
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-json = { path = "../nu-json", version = "0.99.1" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", default-features = false }
nu-engine = { path = "../nu-engine", version = "0.101.0", default-features = false }
nu-json = { path = "../nu-json", version = "0.101.0" }
nu-ansi-term = { workspace = true }
serde = { workspace = true, features = ["derive"] }
[dev-dependencies]
nu-test-support = { path = "../nu-test-support", version = "0.99.1" }
nu-test-support = { path = "../nu-test-support", version = "0.101.0" }

View File

@ -5,7 +5,6 @@ use nu_protocol::{Config, Value};
// The default colors for shapes, used when there is no config for them.
pub fn default_shape_color(shape: &str) -> Style {
match shape {
"shape_and" => Style::new().fg(Color::Purple).bold(),
"shape_binary" => Style::new().fg(Color::Purple).bold(),
"shape_block" => Style::new().fg(Color::Blue).bold(),
"shape_bool" => Style::new().fg(Color::LightCyan),
@ -30,7 +29,6 @@ pub fn default_shape_color(shape: &str) -> Style {
"shape_match_pattern" => Style::new().fg(Color::Green),
"shape_nothing" => Style::new().fg(Color::LightCyan),
"shape_operator" => Style::new().fg(Color::Yellow),
"shape_or" => Style::new().fg(Color::Purple).bold(),
"shape_pipe" => Style::new().fg(Color::Purple).bold(),
"shape_range" => Style::new().fg(Color::Yellow).bold(),
"shape_raw_string" => Style::new().fg(Color::LightMagenta).bold(),

View File

@ -223,7 +223,7 @@ fn test_computable_style_closure_basic() {
];
let actual_repl = nu!(cwd: dirs.test(), nu_repl_code(&inp));
assert_eq!(actual_repl.err, "");
assert_eq!(actual_repl.out, "[bell.obj, book.obj, candle.obj]");
assert_eq!(actual_repl.out, r#"["bell.obj", "book.obj", "candle.obj"]"#);
});
}

View File

@ -5,7 +5,7 @@ edition = "2021"
license = "MIT"
name = "nu-command"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-command"
version = "0.99.1"
version = "0.101.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -16,21 +16,21 @@ bench = false
workspace = true
[dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.99.1" }
nu-color-config = { path = "../nu-color-config", version = "0.99.1" }
nu-engine = { path = "../nu-engine", version = "0.99.1" }
nu-glob = { path = "../nu-glob", version = "0.99.1" }
nu-json = { path = "../nu-json", version = "0.99.1" }
nu-parser = { path = "../nu-parser", version = "0.99.1" }
nu-path = { path = "../nu-path", version = "0.99.1" }
nu-pretty-hex = { path = "../nu-pretty-hex", version = "0.99.1" }
nu-protocol = { path = "../nu-protocol", version = "0.99.1" }
nu-system = { path = "../nu-system", version = "0.99.1" }
nu-table = { path = "../nu-table", version = "0.99.1" }
nu-term-grid = { path = "../nu-term-grid", version = "0.99.1" }
nu-utils = { path = "../nu-utils", version = "0.99.1" }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.101.0" }
nu-color-config = { path = "../nu-color-config", version = "0.101.0" }
nu-engine = { path = "../nu-engine", version = "0.101.0", default-features = false }
nu-glob = { path = "../nu-glob", version = "0.101.0" }
nu-json = { path = "../nu-json", version = "0.101.0" }
nu-parser = { path = "../nu-parser", version = "0.101.0" }
nu-path = { path = "../nu-path", version = "0.101.0" }
nu-pretty-hex = { path = "../nu-pretty-hex", version = "0.101.0" }
nu-protocol = { path = "../nu-protocol", version = "0.101.0", default-features = false }
nu-system = { path = "../nu-system", version = "0.101.0" }
nu-table = { path = "../nu-table", version = "0.101.0" }
nu-term-grid = { path = "../nu-term-grid", version = "0.101.0" }
nu-utils = { path = "../nu-utils", version = "0.101.0", default-features = false }
nu-ansi-term = { workspace = true }
nuon = { path = "../nuon", version = "0.99.1" }
nuon = { path = "../nuon", version = "0.101.0" }
alphanumeric-sort = { workspace = true }
base64 = { workspace = true }
@ -43,7 +43,7 @@ chardetng = { workspace = true }
chrono = { workspace = true, features = ["std", "unstable-locales", "clock"], default-features = false }
chrono-humanize = { workspace = true }
chrono-tz = { workspace = true }
crossterm = { workspace = true }
crossterm = { workspace = true, optional = true }
csv = { workspace = true }
dialoguer = { workspace = true, default-features = false, features = ["fuzzy-select"] }
digest = { workspace = true, default-features = false }
@ -61,24 +61,26 @@ lscolors = { workspace = true, default-features = false, features = ["nu-ansi-te
md5 = { workspace = true }
mime = { workspace = true }
mime_guess = { workspace = true }
multipart-rs = { workspace = true }
native-tls = { workspace = true }
notify-debouncer-full = { workspace = true, default-features = false }
multipart-rs = { workspace = true, optional = true }
native-tls = { workspace = true, optional = true }
notify-debouncer-full = { workspace = true, default-features = false, optional = true }
num-format = { workspace = true }
num-traits = { workspace = true }
once_cell = { workspace = true }
open = { workspace = true }
os_pipe = { workspace = true }
oem_cp = { workspace = true }
open = { workspace = true, optional = true }
os_pipe = { workspace = true, optional = true }
pathdiff = { workspace = true }
percent-encoding = { workspace = true }
print-positions = { workspace = true }
quick-xml = { workspace = true }
rand = { workspace = true }
rand = { workspace = true, optional = true }
getrandom = { workspace = true, optional = true }
rayon = { workspace = true }
regex = { workspace = true }
roxmltree = { workspace = true }
rusqlite = { workspace = true, features = ["bundled", "backup", "chrono"], optional = true }
rmp = { workspace = true }
scopeguard = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true, features = ["preserve_order"] }
serde_urlencoded = { workspace = true }
@ -86,29 +88,29 @@ serde_yaml = { workspace = true }
sha2 = { workspace = true }
sysinfo = { workspace = true }
tabled = { workspace = true, features = ["ansi"], default-features = false }
terminal_size = { workspace = true }
titlecase = { workspace = true }
toml = { workspace = true, features = ["preserve_order"] }
unicode-segmentation = { workspace = true }
ureq = { workspace = true, default-features = false, features = ["charset", "gzip", "json", "native-tls"] }
ureq = { workspace = true, default-features = false, features = ["charset", "gzip", "json"] }
url = { workspace = true }
uu_cp = { workspace = true }
uu_mkdir = { workspace = true }
uu_mktemp = { workspace = true }
uu_mv = { workspace = true }
uu_uname = { workspace = true }
uu_whoami = { workspace = true }
uuid = { workspace = true, features = ["v4"] }
uu_cp = { workspace = true, optional = true }
uu_mkdir = { workspace = true, optional = true }
uu_mktemp = { workspace = true, optional = true }
uu_mv = { workspace = true, optional = true }
uu_touch = { workspace = true, optional = true }
uu_uname = { workspace = true, optional = true }
uu_whoami = { workspace = true, optional = true }
uuid = { workspace = true, features = ["v4"], optional = true }
v_htmlescape = { workspace = true }
wax = { workspace = true }
which = { workspace = true }
which = { workspace = true, optional = true }
unicode-width = { workspace = true }
data-encoding = { version = "2.6.0", features = ["alloc"] }
[target.'cfg(windows)'.dependencies]
winreg = { workspace = true }
[target.'cfg(not(windows))'.dependencies]
[target.'cfg(all(not(windows), not(target_arch = "wasm32")))'.dependencies]
uucore = { workspace = true, features = ["mode"] }
[target.'cfg(unix)'.dependencies]
@ -134,13 +136,59 @@ features = [
workspace = true
[features]
plugin = ["nu-parser/plugin"]
default = ["os"]
os = [
# include other features
"js",
"network",
"nu-protocol/os",
"nu-utils/os",
# os-dependant dependencies
"crossterm",
"notify-debouncer-full",
"open",
"os_pipe",
"uu_cp",
"uu_mkdir",
"uu_mktemp",
"uu_mv",
"uu_touch",
"uu_uname",
"uu_whoami",
"which",
]
# The dependencies listed below need 'getrandom'.
# They work with JS (usually with wasm-bindgen) or regular OS support.
# Hence they are also put under the 'os' feature to avoid repetition.
js = [
"getrandom",
"getrandom/js",
"rand",
"uuid",
]
# These dependencies require networking capabilities, especially the http
# interface requires openssl which is not easy to embed into wasm,
# using rustls could solve this issue.
network = [
"multipart-rs",
"native-tls",
"ureq/native-tls",
"uuid",
]
plugin = [
"nu-parser/plugin",
"os",
]
sqlite = ["rusqlite"]
trash-support = ["trash"]
[dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.99.1" }
nu-test-support = { path = "../nu-test-support", version = "0.99.1" }
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.101.0" }
nu-test-support = { path = "../nu-test-support", version = "0.101.0" }
dirs = { workspace = true }
mockito = { workspace = true, default-features = false }
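A hedged illustration of how optional dependencies and feature flags like the ones declared above are typically consumed downstream; the feature name mirrors the manifest, while the function bodies are purely illustrative:

```rust
// Illustrative only: a code path is compiled in only when the matching Cargo
// feature (declared in the manifest above) is enabled.
#[cfg(feature = "network")]
fn register_network_commands() {
    println!("http/url/port commands are available");
}

#[cfg(not(feature = "network"))]
fn register_network_commands() {
    println!("built without the `network` feature");
}

fn main() {
    register_network_commands();
}
```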

View File

@ -1,4 +1,4 @@
use nu_engine::{command_prelude::*, get_eval_expression};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct BytesBuild;
@ -49,8 +49,7 @@ impl Command for BytesBuild {
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let mut output = vec![];
let eval_expression = get_eval_expression(engine_state);
for val in call.rest_iter_flattened(engine_state, stack, eval_expression, 0)? {
for val in call.rest::<Value>(engine_state, stack, 0)? {
let val_span = val.span();
match val {
Value::Binary { mut val, .. } => output.append(&mut val),

View File

@ -1,5 +1,5 @@
use chrono::{DateTime, FixedOffset};
use nu_protocol::{ShellError, Span, Value};
use nu_protocol::{Filesize, ShellError, Span, Value};
use std::hash::{Hash, Hasher};
/// A subset of [`Value`], which is hashable.
@ -30,7 +30,7 @@ pub enum HashableValue {
span: Span,
},
Filesize {
val: i64,
val: Filesize,
span: Span,
},
Duration {
@ -198,7 +198,10 @@ mod test {
(Value::int(1, span), HashableValue::Int { val: 1, span }),
(
Value::filesize(1, span),
HashableValue::Filesize { val: 1, span },
HashableValue::Filesize {
val: 1.into(),
span,
},
),
(
Value::duration(1, span),

View File

@ -167,7 +167,7 @@ fn fill(
fn action(input: &Value, args: &Arguments, span: Span) -> Value {
match input {
Value::Int { val, .. } => fill_int(*val, args, span),
Value::Filesize { val, .. } => fill_int(*val, args, span),
Value::Filesize { val, .. } => fill_int(val.get(), args, span),
Value::Float { val, .. } => fill_float(*val, args, span),
Value::String { val, .. } => fill_string(val, args, span),
// Propagate errors by explicitly matching them before the final case.
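Several hunks in this range (and more below) switch from a raw `i64` to the new `Filesize` type, constructing it with `.into()` and reading it back with `.get()`. A rough sketch of the assumed newtype shape, not the actual nu-protocol definition:

```rust
// Assumed shape of the Filesize newtype used in the hunks; the real definition
// lives in nu-protocol and may differ in detail.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Filesize(i64);

impl Filesize {
    fn get(&self) -> i64 {
        self.0
    }
}

impl From<i64> for Filesize {
    fn from(val: i64) -> Self {
        Filesize(val)
    }
}

fn main() {
    let size: Filesize = 1.into(); // mirrors `val: 1.into()` in the test hunk
    assert_eq!(size.get(), 1);     // mirrors the `val.get()` call sites
}
```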

View File

@ -1,7 +1,7 @@
use nu_cmd_base::input_handler::{operate, CmdArgument};
use nu_engine::command_prelude::*;
pub struct Arguments {
struct Arguments {
cell_paths: Option<Vec<CellPath>>,
compact: bool,
}
@ -142,12 +142,12 @@ fn into_binary(
}
}
pub fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
fn action(input: &Value, _args: &Arguments, span: Span) -> Value {
let value = match input {
Value::Binary { .. } => input.clone(),
Value::Int { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span),
Value::Float { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span),
Value::Filesize { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span),
Value::Filesize { val, .. } => Value::binary(val.get().to_ne_bytes().to_vec(), span),
Value::String { val, .. } => Value::binary(val.as_bytes().to_vec(), span),
Value::Bool { val, .. } => Value::binary(i64::from(*val).to_ne_bytes().to_vec(), span),
Value::Duration { val, .. } => Value::binary(val.to_ne_bytes().to_vec(), span),

View File

@ -1,5 +1,5 @@
use crate::{generate_strftime_list, parse_date_from_string};
use chrono::{DateTime, FixedOffset, Local, NaiveDateTime, NaiveTime, TimeZone, Utc};
use chrono::{DateTime, FixedOffset, Local, NaiveDateTime, TimeZone, Utc};
use human_date_parser::{from_human_time, ParseResult};
use nu_cmd_base::input_handler::{operate, CmdArgument};
use nu_engine::command_prelude::*;
@ -185,11 +185,13 @@ impl Command for SubCommand {
example: "'16.11.1984 8:00 am' | into datetime --format '%d.%m.%Y %H:%M %P'",
#[allow(clippy::inconsistent_digit_grouping)]
result: Some(Value::date(
DateTime::from_naive_utc_and_offset(
NaiveDateTime::parse_from_str("16.11.1984 8:00 am", "%d.%m.%Y %H:%M %P")
.expect("date calculation should not fail in test"),
*Local::now().offset(),
),
Local
.from_local_datetime(
&NaiveDateTime::parse_from_str("16.11.1984 8:00 am", "%d.%m.%Y %H:%M %P")
.expect("date calculation should not fail in test"),
)
.unwrap()
.with_timezone(Local::now().offset()),
Span::test_data(),
)),
},
@ -275,12 +277,13 @@ fn action(input: &Value, args: &Arguments, head: Span) -> Value {
if let Ok(date) = from_human_time(&input_val) {
match date {
ParseResult::Date(date) => {
let time = NaiveTime::from_hms_opt(0, 0, 0).expect("valid time");
let time = Local::now().time();
let combined = date.and_time(time);
let dt_fixed = DateTime::from_naive_utc_and_offset(
combined,
*Local::now().offset(),
);
let local_offset = *Local::now().offset();
let dt_fixed =
TimeZone::from_local_datetime(&local_offset, &combined)
.single()
.unwrap_or_default();
return Value::date(dt_fixed, span);
}
ParseResult::DateTime(date) => {
@ -289,10 +292,11 @@ fn action(input: &Value, args: &Arguments, head: Span) -> Value {
ParseResult::Time(time) => {
let date = Local::now().date_naive();
let combined = date.and_time(time);
let dt_fixed = DateTime::from_naive_utc_and_offset(
combined,
*Local::now().offset(),
);
let local_offset = *Local::now().offset();
let dt_fixed =
TimeZone::from_local_datetime(&local_offset, &combined)
.single()
.unwrap_or_default();
return Value::date(dt_fixed, span);
}
}
@ -386,13 +390,15 @@ fn action(input: &Value, args: &Arguments, head: Span) -> Value {
Ok(d) => Value::date ( d, head ),
Err(reason) => {
match NaiveDateTime::parse_from_str(val, &dt.0) {
Ok(d) => Value::date (
DateTime::from_naive_utc_and_offset(
d,
*Local::now().offset(),
),
head,
),
Ok(d) => {
let local_offset = *Local::now().offset();
let dt_fixed =
TimeZone::from_local_datetime(&local_offset, &d)
.single()
.unwrap_or_default();
Value::date (dt_fixed,head)
}
Err(_) => {
Value::error (
ShellError::CantConvert { to_type: format!("could not parse as datetime using format '{}'", dt.0), from_type: reason.to_string(), span: head, help: Some("you can use `into datetime` without a format string to enable flexible parsing".to_string()) },
@ -503,7 +509,14 @@ mod tests {
}
#[test]
#[ignore]
fn takes_a_date_format_without_timezone() {
// Ignoring this test for now because we changed the human-date-parser to use
// the users timezone instead of UTC. We may continue to tweak this behavior.
// Another hacky solution is to set the timezone to UTC in the test, which works
// on MacOS and Linux but hasn't been tested on Windows. Plus it kind of defeats
// the purpose of a "without_timezone" test.
// std::env::set_var("TZ", "UTC");
let date_str = Value::test_string("16.11.1984 8:00 am");
let fmt_options = Some(DatetimeFormat("%d.%m.%Y %H:%M %P".to_string()));
let args = Arguments {
@ -513,12 +526,16 @@ mod tests {
};
let actual = action(&date_str, &args, Span::test_data());
let expected = Value::date(
DateTime::from_naive_utc_and_offset(
NaiveDateTime::parse_from_str("16.11.1984 8:00 am", "%d.%m.%Y %H:%M %P").unwrap(),
*Local::now().offset(),
),
Local
.from_local_datetime(
&NaiveDateTime::parse_from_str("16.11.1984 8:00 am", "%d.%m.%Y %H:%M %P")
.unwrap(),
)
.unwrap()
.with_timezone(Local::now().offset()),
Span::test_data(),
);
assert_eq!(actual, expected)
}
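The datetime changes above replace `DateTime::from_naive_utc_and_offset` (which treats the naive value as UTC) with `from_local_datetime(...).single()` (which interprets it as local wall-clock time). A small chrono-only sketch of the adopted pattern:

```rust
use chrono::{Local, NaiveDateTime, TimeZone};

fn main() {
    let naive = NaiveDateTime::parse_from_str("16.11.1984 8:00 am", "%d.%m.%Y %H:%M %P")
        .expect("test datetime should parse");
    // Interpret the naive value as local wall-clock time; `.single()` is None for
    // ambiguous or nonexistent local times (e.g. around DST transitions).
    let local = Local.from_local_datetime(&naive).single();
    println!("{local:?}");
}
```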

View File

@ -116,7 +116,7 @@ impl Command for SubCommand {
}
}
pub fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
fn action(input: &Value, _args: &CellPathOnlyArgs, span: Span) -> Value {
let value_span = input.span();
match input {
Value::Filesize { .. } => input.clone(),

View File

@ -253,13 +253,13 @@ fn action(input: &Value, args: &Arguments, span: Span) -> Value {
convert_int(input, span, radix)
}
}
Value::Filesize { val, .. } => Value::int(*val, span),
Value::Filesize { val, .. } => Value::int(val.get(), span),
Value::Float { val, .. } => Value::int(
{
if radix == 10 {
*val as i64
} else {
match convert_int(&Value::int(*val as i64, span), span, radix).as_i64() {
match convert_int(&Value::int(*val as i64, span), span, radix).as_int() {
Ok(v) => v,
_ => {
return Value::error(

View File

@ -99,6 +99,11 @@ impl Command for SubCommand {
"timezone" => Value::test_string("+02:00"),
})),
},
Example {
description: "convert date components to table columns",
example: "2020-04-12T22:10:57+02:00 | into record | transpose | transpose -r",
result: None,
},
]
}
}

View File

@ -1,9 +1,9 @@
use crate::parse_date_from_string;
use nu_engine::command_prelude::*;
use nu_protocol::PipelineIterator;
use once_cell::sync::Lazy;
use regex::{Regex, RegexBuilder};
use std::collections::HashSet;
use std::sync::LazyLock;
#[derive(Clone)]
pub struct IntoValue;
@ -18,7 +18,7 @@ impl Command for IntoValue {
.input_output_types(vec![(Type::table(), Type::table())])
.named(
"columns",
SyntaxShape::Table(vec![]),
SyntaxShape::List(Box::new(SyntaxShape::Any)),
"list of columns to update",
Some('c'),
)
@ -271,8 +271,9 @@ const DATETIME_DMY_PATTERN: &str = r#"(?x)
$
"#;
static DATETIME_DMY_RE: Lazy<Regex> =
Lazy::new(|| Regex::new(DATETIME_DMY_PATTERN).expect("datetime_dmy_pattern should be valid"));
static DATETIME_DMY_RE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(DATETIME_DMY_PATTERN).expect("datetime_dmy_pattern should be valid")
});
const DATETIME_YMD_PATTERN: &str = r#"(?x)
^
['"]? # optional quotes
@ -297,8 +298,9 @@ const DATETIME_YMD_PATTERN: &str = r#"(?x)
['"]? # optional quotes
$
"#;
static DATETIME_YMD_RE: Lazy<Regex> =
Lazy::new(|| Regex::new(DATETIME_YMD_PATTERN).expect("datetime_ymd_pattern should be valid"));
static DATETIME_YMD_RE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(DATETIME_YMD_PATTERN).expect("datetime_ymd_pattern should be valid")
});
//2023-03-24 16:44:17.865147299 -05:00
const DATETIME_YMDZ_PATTERN: &str = r#"(?x)
^
@ -331,23 +333,24 @@ const DATETIME_YMDZ_PATTERN: &str = r#"(?x)
['"]? # optional quotes
$
"#;
static DATETIME_YMDZ_RE: Lazy<Regex> =
Lazy::new(|| Regex::new(DATETIME_YMDZ_PATTERN).expect("datetime_ymdz_pattern should be valid"));
static DATETIME_YMDZ_RE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(DATETIME_YMDZ_PATTERN).expect("datetime_ymdz_pattern should be valid")
});
static FLOAT_RE: Lazy<Regex> = Lazy::new(|| {
static FLOAT_RE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"^\s*[-+]?((\d*\.\d+)([eE][-+]?\d+)?|inf|NaN|(\d+)[eE][-+]?\d+|\d+\.)$")
.expect("float pattern should be valid")
});
static INTEGER_RE: Lazy<Regex> =
Lazy::new(|| Regex::new(r"^\s*-?(\d+)$").expect("integer pattern should be valid"));
static INTEGER_RE: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"^\s*-?(\d+)$").expect("integer pattern should be valid"));
static INTEGER_WITH_DELIMS_RE: Lazy<Regex> = Lazy::new(|| {
static INTEGER_WITH_DELIMS_RE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"^\s*-?(\d{1,3}([,_]\d{3})+)$")
.expect("integer with delimiters pattern should be valid")
});
static BOOLEAN_RE: Lazy<Regex> = Lazy::new(|| {
static BOOLEAN_RE: LazyLock<Regex> = LazyLock::new(|| {
RegexBuilder::new(r"^\s*(true)$|^(false)$")
.case_insensitive(true)
.build()
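The hunk above swaps `once_cell::sync::Lazy` for the standard library's `std::sync::LazyLock` (stable since Rust 1.80). The pattern in isolation, reusing one of the regexes from the diff:

```rust
use regex::Regex;
use std::sync::LazyLock;

// Same pattern as the diff, without the once_cell dependency: the regex is
// compiled once, lazily, on first access.
static INTEGER_RE: LazyLock<Regex> =
    LazyLock::new(|| Regex::new(r"^\s*-?(\d+)$").expect("integer pattern should be valid"));

fn main() {
    assert!(INTEGER_RE.is_match(" -42"));
    assert!(!INTEGER_RE.is_match("abc"));
}
```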

View File

@ -359,7 +359,6 @@ fn nu_value_to_sqlite_type(val: &Value) -> Result<&'static str, ShellError> {
| Type::Custom(_)
| Type::Error
| Type::List(_)
| Type::ListStream
| Type::Range
| Type::Record(_)
| Type::Signature

View File

@ -421,7 +421,7 @@ pub fn value_to_sql(value: Value) -> Result<Box<dyn rusqlite::ToSql>, ShellError
Value::Bool { val, .. } => Box::new(val),
Value::Int { val, .. } => Box::new(val),
Value::Float { val, .. } => Box::new(val),
Value::Filesize { val, .. } => Box::new(val),
Value::Filesize { val, .. } => Box::new(val.get()),
Value::Duration { val, .. } => Box::new(val),
Value::Date { val, .. } => Box::new(val),
Value::String { val, .. } => Box::new(val),

View File

@ -1,6 +1,7 @@
use crate::date::utils::parse_date_from_string;
use chrono::{DateTime, Datelike, FixedOffset, Local, Timelike};
use nu_engine::command_prelude::*;
use nu_protocol::{report_parse_warning, ParseWarning};
#[derive(Clone)]
pub struct SubCommand;
@ -17,7 +18,7 @@ impl Command for SubCommand {
(Type::String, Type::record()),
])
.allow_variants_without_examples(true) // https://github.com/nushell/nushell/issues/7032
.category(Category::Date)
.category(Category::Deprecated)
}
fn description(&self) -> &str {
@ -35,6 +36,17 @@ impl Command for SubCommand {
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let head = call.head;
report_parse_warning(
&StateWorkingSet::new(engine_state),
&ParseWarning::DeprecatedWarning {
old_command: "date to-record".into(),
new_suggestion: "see `into record` command examples".into(),
span: head,
url: "`help into record`".into(),
},
);
let head = call.head;
// This doesn't match explicit nulls
if matches!(input, PipelineData::Empty) {

View File

@ -1,6 +1,7 @@
use crate::date::utils::parse_date_from_string;
use chrono::{DateTime, Datelike, FixedOffset, Local, Timelike};
use nu_engine::command_prelude::*;
use nu_protocol::{report_parse_warning, ParseWarning};
#[derive(Clone)]
pub struct SubCommand;
@ -17,7 +18,7 @@ impl Command for SubCommand {
(Type::String, Type::table()),
])
.allow_variants_without_examples(true) // https://github.com/nushell/nushell/issues/7032
.category(Category::Date)
.category(Category::Deprecated)
}
fn description(&self) -> &str {
@ -36,6 +37,16 @@ impl Command for SubCommand {
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let head = call.head;
report_parse_warning(
&StateWorkingSet::new(engine_state),
&ParseWarning::DeprecatedWarning {
old_command: "date to-table".into(),
new_suggestion: "see `into record` command examples".into(),
span: head,
url: "`help into record`".into(),
},
);
// This doesn't match explicit nulls
if matches!(input, PipelineData::Empty) {
return Err(ShellError::PipelineEmpty { dst_span: head });

View File

@ -1,6 +1,7 @@
use nu_engine::command_prelude::*;
use nu_parser::parse;
use nu_protocol::engine::StateWorkingSet;
use nu_parser::{flatten_block, parse};
use nu_protocol::{engine::StateWorkingSet, record};
use serde_json::{json, Value as JsonValue};
#[derive(Clone)]
pub struct Ast;
@ -16,109 +17,23 @@ impl Command for Ast {
fn signature(&self) -> Signature {
Signature::build("ast")
.input_output_types(vec![(Type::String, Type::record())])
.input_output_types(vec![
(Type::Nothing, Type::table()),
(Type::Nothing, Type::record()),
(Type::Nothing, Type::String),
])
.required(
"pipeline",
SyntaxShape::String,
"The pipeline to print the ast for.",
)
.switch("json", "serialize to json", Some('j'))
.switch("minify", "minify the nuon or json output", Some('m'))
.switch("json", "Serialize to json", Some('j'))
.switch("minify", "Minify the nuon or json output", Some('m'))
.switch("flatten", "An easier to read version of the ast", Some('f'))
.allow_variants_without_examples(true)
.category(Category::Debug)
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let pipeline: Spanned<String> = call.req(engine_state, stack, 0)?;
let to_json = call.has_flag(engine_state, stack, "json")?;
let minify = call.has_flag(engine_state, stack, "minify")?;
let mut working_set = StateWorkingSet::new(engine_state);
let block_output = parse(&mut working_set, None, pipeline.item.as_bytes(), false);
let error_output = working_set.parse_errors.first();
let block_span = match &block_output.span {
Some(span) => span,
None => &pipeline.span,
};
if to_json {
// Get the block as json
let serde_block_str = if minify {
serde_json::to_string(&*block_output)
} else {
serde_json::to_string_pretty(&*block_output)
};
let block_json = match serde_block_str {
Ok(json) => json,
Err(e) => Err(ShellError::CantConvert {
to_type: "string".to_string(),
from_type: "block".to_string(),
span: *block_span,
help: Some(format!(
"Error: {e}\nCan't convert {block_output:?} to string"
)),
})?,
};
// Get the error as json
let serde_error_str = if minify {
serde_json::to_string(&error_output)
} else {
serde_json::to_string_pretty(&error_output)
};
let error_json = match serde_error_str {
Ok(json) => json,
Err(e) => Err(ShellError::CantConvert {
to_type: "string".to_string(),
from_type: "error".to_string(),
span: *block_span,
help: Some(format!(
"Error: {e}\nCan't convert {error_output:?} to string"
)),
})?,
};
// Create a new output record, merging the block and error
let output_record = Value::record(
record! {
"block" => Value::string(block_json, *block_span),
"error" => Value::string(error_json, Span::test_data()),
},
pipeline.span,
);
Ok(output_record.into_pipeline_data())
} else {
let block_value = Value::string(
if minify {
format!("{block_output:?}")
} else {
format!("{block_output:#?}")
},
pipeline.span,
);
let error_value = Value::string(
if minify {
format!("{error_output:?}")
} else {
format!("{error_output:#?}")
},
pipeline.span,
);
let output_record = Value::record(
record! {
"block" => block_value,
"error" => error_value
},
pipeline.span,
);
Ok(output_record.into_pipeline_data())
}
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
@ -147,8 +62,247 @@ impl Command for Ast {
example: "ast 'for x in 1..10 { echo $x ' --json --minify",
result: None,
},
Example {
description: "Print the ast of a string flattened",
example: r#"ast "'hello'" --flatten"#,
result: Some(Value::test_list(vec![Value::test_record(record! {
"content" => Value::test_string("'hello'"),
"shape" => Value::test_string("shape_string"),
"span" => Value::test_record(record! {
"start" => Value::test_int(0),
"end" => Value::test_int(7),}),
})])),
},
Example {
description: "Print the ast of a string flattened, as json, minified",
example: r#"ast "'hello'" --flatten --json --minify"#,
result: Some(Value::test_string(
r#"[{"content":"'hello'","shape":"shape_string","span":{"start":0,"end":7}}]"#,
)),
},
Example {
description: "Print the ast of a pipeline flattened",
example: r#"ast 'ls | sort-by type name -i' --flatten"#,
result: Some(Value::test_list(vec![
Value::test_record(record! {
"content" => Value::test_string("ls"),
"shape" => Value::test_string("shape_external"),
"span" => Value::test_record(record! {
"start" => Value::test_int(0),
"end" => Value::test_int(2),}),
}),
Value::test_record(record! {
"content" => Value::test_string("|"),
"shape" => Value::test_string("shape_pipe"),
"span" => Value::test_record(record! {
"start" => Value::test_int(3),
"end" => Value::test_int(4),}),
}),
Value::test_record(record! {
"content" => Value::test_string("sort-by"),
"shape" => Value::test_string("shape_internalcall"),
"span" => Value::test_record(record! {
"start" => Value::test_int(5),
"end" => Value::test_int(12),}),
}),
Value::test_record(record! {
"content" => Value::test_string("type"),
"shape" => Value::test_string("shape_string"),
"span" => Value::test_record(record! {
"start" => Value::test_int(13),
"end" => Value::test_int(17),}),
}),
Value::test_record(record! {
"content" => Value::test_string("name"),
"shape" => Value::test_string("shape_string"),
"span" => Value::test_record(record! {
"start" => Value::test_int(18),
"end" => Value::test_int(22),}),
}),
Value::test_record(record! {
"content" => Value::test_string("-i"),
"shape" => Value::test_string("shape_flag"),
"span" => Value::test_record(record! {
"start" => Value::test_int(23),
"end" => Value::test_int(25),}),
}),
])),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let pipeline: Spanned<String> = call.req(engine_state, stack, 0)?;
let to_json = call.has_flag(engine_state, stack, "json")?;
let minify = call.has_flag(engine_state, stack, "minify")?;
let flatten = call.has_flag(engine_state, stack, "flatten")?;
let mut working_set = StateWorkingSet::new(engine_state);
let offset = working_set.next_span_start();
let parsed_block = parse(&mut working_set, None, pipeline.item.as_bytes(), false);
if flatten {
let flat = flatten_block(&working_set, &parsed_block);
if to_json {
let mut json_val: JsonValue = json!([]);
for (span, shape) in flat {
let content =
String::from_utf8_lossy(working_set.get_span_contents(span)).to_string();
let json = json!(
{
"content": content,
"shape": shape.to_string(),
"span": {
"start": span.start.checked_sub(offset),
"end": span.end.checked_sub(offset),
},
}
);
json_merge(&mut json_val, &json);
}
let json_string = if minify {
if let Ok(json_str) = serde_json::to_string(&json_val) {
json_str
} else {
"{}".to_string()
}
} else if let Ok(json_str) = serde_json::to_string_pretty(&json_val) {
json_str
} else {
"{}".to_string()
};
Ok(Value::string(json_string, pipeline.span).into_pipeline_data())
} else {
// let mut rec: Record = Record::new();
let mut rec = vec![];
for (span, shape) in flat {
let content =
String::from_utf8_lossy(working_set.get_span_contents(span)).to_string();
let each_rec = record! {
"content" => Value::test_string(content),
"shape" => Value::test_string(shape.to_string()),
"span" => Value::test_record(record!{
"start" => Value::test_int(match span.start.checked_sub(offset) {
Some(start) => start as i64,
None => 0
}),
"end" => Value::test_int(match span.end.checked_sub(offset) {
Some(end) => end as i64,
None => 0
}),
}),
};
rec.push(Value::test_record(each_rec));
}
Ok(Value::list(rec, pipeline.span).into_pipeline_data())
}
} else {
let error_output = working_set.parse_errors.first();
let block_span = match &parsed_block.span {
Some(span) => span,
None => &pipeline.span,
};
if to_json {
// Get the block as json
let serde_block_str = if minify {
serde_json::to_string(&*parsed_block)
} else {
serde_json::to_string_pretty(&*parsed_block)
};
let block_json = match serde_block_str {
Ok(json) => json,
Err(e) => Err(ShellError::CantConvert {
to_type: "string".to_string(),
from_type: "block".to_string(),
span: *block_span,
help: Some(format!(
"Error: {e}\nCan't convert {parsed_block:?} to string"
)),
})?,
};
// Get the error as json
let serde_error_str = if minify {
serde_json::to_string(&error_output)
} else {
serde_json::to_string_pretty(&error_output)
};
let error_json = match serde_error_str {
Ok(json) => json,
Err(e) => Err(ShellError::CantConvert {
to_type: "string".to_string(),
from_type: "error".to_string(),
span: *block_span,
help: Some(format!(
"Error: {e}\nCan't convert {error_output:?} to string"
)),
})?,
};
// Create a new output record, merging the block and error
let output_record = Value::record(
record! {
"block" => Value::string(block_json, *block_span),
"error" => Value::string(error_json, Span::test_data()),
},
pipeline.span,
);
Ok(output_record.into_pipeline_data())
} else {
let block_value = Value::string(
if minify {
format!("{parsed_block:?}")
} else {
format!("{parsed_block:#?}")
},
pipeline.span,
);
let error_value = Value::string(
if minify {
format!("{error_output:?}")
} else {
format!("{error_output:#?}")
},
pipeline.span,
);
let output_record = Value::record(
record! {
"block" => block_value,
"error" => error_value
},
pipeline.span,
);
Ok(output_record.into_pipeline_data())
}
}
}
}
fn json_merge(a: &mut JsonValue, b: &JsonValue) {
match (a, b) {
(JsonValue::Object(ref mut a), JsonValue::Object(b)) => {
for (k, v) in b {
json_merge(a.entry(k).or_insert(JsonValue::Null), v);
}
}
(JsonValue::Array(ref mut a), JsonValue::Array(b)) => {
a.extend(b.clone());
}
(JsonValue::Array(ref mut a), JsonValue::Object(b)) => {
a.extend([JsonValue::Object(b.clone())]);
}
(a, b) => {
*a = b.clone();
}
}
}
#[cfg(test)]
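A self-contained usage sketch of the `json_merge` helper above, showing how `ast --flatten --json` accumulates one object per flattened token into an array (the helper is copied verbatim so the snippet compiles on its own):

```rust
use serde_json::{json, Value as JsonValue};

// Copied from the hunk above so the example is self-contained.
fn json_merge(a: &mut JsonValue, b: &JsonValue) {
    match (a, b) {
        (JsonValue::Object(ref mut a), JsonValue::Object(b)) => {
            for (k, v) in b {
                json_merge(a.entry(k).or_insert(JsonValue::Null), v);
            }
        }
        (JsonValue::Array(ref mut a), JsonValue::Array(b)) => {
            a.extend(b.clone());
        }
        (JsonValue::Array(ref mut a), JsonValue::Object(b)) => {
            a.extend([JsonValue::Object(b.clone())]);
        }
        (a, b) => {
            *a = b.clone();
        }
    }
}

fn main() {
    // `ast --flatten --json` starts from an empty array and merges in one object
    // per flattened token; the (Array, Object) arm appends each object.
    let mut acc = json!([]);
    json_merge(&mut acc, &json!({"content": "ls", "shape": "shape_external"}));
    json_merge(&mut acc, &json!({"content": "|", "shape": "shape_pipe"}));
    assert_eq!(acc.as_array().map(|items| items.len()), Some(2));
}
```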

View File

@ -101,7 +101,7 @@ fn all_columns(span: Span) -> Value {
let environment = {
let mut env_rec = Record::new();
for val in p.environ() {
if let Some((key, value)) = val.split_once('=') {
if let Some((key, value)) = val.to_string_lossy().split_once('=') {
let is_env_var_a_list = {
{
#[cfg(target_family = "windows")]
@ -146,8 +146,8 @@ fn all_columns(span: Span) -> Value {
"root" => root,
"cwd" => cwd,
"exe_path" => exe_path,
"command" => Value::string(p.cmd().join(" "), span),
"name" => Value::string(p.name(), span),
"command" => Value::string(p.cmd().join(std::ffi::OsStr::new(" ")).to_string_lossy(), span),
"name" => Value::string(p.name().to_string_lossy(), span),
"environment" => environment,
},
span,
@ -177,4 +177,9 @@ fn get_thread_id() -> u64 {
{
nix::sys::pthread::pthread_self() as u64
}
#[cfg(target_arch = "wasm32")]
{
// wasm doesn't have any threads accessible, so we return 0 as a fallback
0
}
}

View File

@ -1,6 +1,6 @@
use super::inspect_table;
use nu_engine::command_prelude::*;
use terminal_size::{terminal_size, Height, Width};
use nu_utils::terminal_size;
#[derive(Clone)]
pub struct Inspect;
@ -38,12 +38,9 @@ impl Command for Inspect {
let original_input = input_val.clone();
let description = input_val.get_type().to_string();
let (cols, _rows) = match terminal_size() {
Some((w, h)) => (Width(w.0), Height(h.0)),
None => (Width(0), Height(0)),
};
let (cols, _rows) = terminal_size().unwrap_or((0, 0));
let table = inspect_table::build_table(input_val, description, cols.0 as usize);
let table = inspect_table::build_table(input_val, description, cols as usize);
// Note that this is printed to stderr. The reason for this is so it doesn't disrupt the regular nushell
// tabular output. If we printed to stdout, nushell would get confused with two outputs.

View File

@ -10,6 +10,7 @@ mod metadata_set;
mod profile;
mod timeit;
mod view;
mod view_blocks;
mod view_files;
mod view_ir;
mod view_source;
@ -27,6 +28,7 @@ pub use metadata_set::MetadataSet;
pub use profile::DebugProfile;
pub use timeit::TimeIt;
pub use view::View;
pub use view_blocks::ViewBlocks;
pub use view_files::ViewFiles;
pub use view_ir::ViewIr;
pub use view_source::ViewSource;

View File

@ -30,8 +30,6 @@ impl Command for DebugProfile {
"Collect pipeline element output values",
Some('v'),
)
.switch("expr", "Collect expression types", Some('x'))
.switch("instructions", "Collect IR instructions", Some('i'))
.switch("lines", "Collect line numbers", Some('l'))
.named(
"max-depth",
@ -48,37 +46,52 @@ impl Command for DebugProfile {
}
fn extra_description(&self) -> &str {
r#"The profiler profiles every evaluated pipeline element inside a closure, stepping into all
r#"The profiler profiles every evaluated instruction inside a closure, stepping into all
commands calls and other blocks/closures.
The output can be heavily customized. By default, the following columns are included:
- depth : Depth of the pipeline element. Each entered block adds one level of depth. How many
- depth : Depth of the instruction. Each entered block adds one level of depth. How many
blocks deep to step into is controlled with the --max-depth option.
- id : ID of the pipeline element
- parent_id : ID of the parent element
- source : Source code of the pipeline element. If the element has multiple lines, only the
first line is used and `...` is appended to the end. Full source code can be shown
with the --expand-source flag.
- duration_ms : How long it took to run the pipeline element in milliseconds.
- (optional) span : Span of the element. Can be viewed via the `view span` command. Enabled with
the --spans flag.
- (optional) expr : The type of expression of the pipeline element. Enabled with the --expr flag.
- (optional) output : The output value of the pipeline element. Enabled with the --values flag.
- id : ID of the instruction
- parent_id : ID of the instruction that created the parent scope
- source : Source code that generated the instruction. If the source code has multiple lines,
only the first line is used and `...` is appended to the end. Full source code can
be shown with the --expand-source flag.
- pc : The index of the instruction within the block.
- instruction : The pretty printed instruction being evaluated.
- duration_ms : How long it took to run the instruction in milliseconds.
- (optional) span : Span associated with the instruction. Can be viewed via the `view span`
command. Enabled with the --spans flag.
- (optional) output : The output value of the instruction. Enabled with the --values flag.
To illustrate the depth and IDs, consider `debug profile { if true { echo 'spam' } }`. There are
three pipeline elements:
To illustrate the depth and IDs, consider `debug profile { do { if true { echo 'spam' } } }`. A unique ID is generated each time an instruction is executed, and there are two levels of depth:
depth id parent_id
0 0 0 debug profile { do { if true { 'spam' } } }
1 1 0 if true { 'spam' }
2 2 1 'spam'
```
depth id parent_id source pc instruction
0 0 0 debug profile { do { if true { 'spam' } } } 0 <start>
1 1 0 { if true { 'spam' } } 0 load-literal %1, closure(2164)
1 2 0 { if true { 'spam' } } 1 push-positional %1
1 3 0 { do { if true { 'spam' } } } 2 redirect-out caller
1 4 0 { do { if true { 'spam' } } } 3 redirect-err caller
1 5 0 do 4 call decl 7 "do", %0
2 6 5 true 0 load-literal %1, bool(true)
2 7 5 if 1 not %1
2 8 5 if 2 branch-if %1, 5
2 9 5 'spam' 3 load-literal %0, string("spam")
2 10 5 if 4 jump 6
2 11 5 { if true { 'spam' } } 6 return %0
1 12 0 { do { if true { 'spam' } } } 5 return %0
```
Each block entered increments depth by 1 and each block left decrements it by one. This way you can
control the profiling granularity. Passing --max-depth=1 to the above would stop at
`if true { 'spam' }`. The id is used to identify each element. The parent_id tells you that 'spam'
was spawned from `if true { 'spam' }` which was spawned from the root `debug profile { ... }`.
control the profiling granularity. Passing --max-depth=1 to the above would stop inside the `do`
at `if true { 'spam' }`. The id is used to identify each element. The parent_id tells you that the
instructions inside the block are being executed because of `do` (5), which in turn was spawned from
the root `debug profile { ... }`.
Note: In some cases, the ordering of piepeline elements might not be intuitive. For example,
For a better understanding of how instructions map to source code, see the `view ir` command.
Note: In some cases, the ordering of pipeline elements might not be intuitive. For example,
`[ a bb cc ] | each { $in | str length }` involves some implicit collects and lazy evaluation
confusing the id/parent_id hierarchy. The --expr flag is helpful for investigating these issues."#
}
@ -94,8 +107,6 @@ confusing the id/parent_id hierarchy. The --expr flag is helpful for investigati
let collect_spans = call.has_flag(engine_state, stack, "spans")?;
let collect_expanded_source = call.has_flag(engine_state, stack, "expanded-source")?;
let collect_values = call.has_flag(engine_state, stack, "values")?;
let collect_exprs = call.has_flag(engine_state, stack, "expr")?;
let collect_instructions = call.has_flag(engine_state, stack, "instructions")?;
let collect_lines = call.has_flag(engine_state, stack, "lines")?;
let max_depth = call
.get_flag(engine_state, stack, "max-depth")?
@ -108,8 +119,8 @@ confusing the id/parent_id hierarchy. The --expr flag is helpful for investigati
collect_source: true,
collect_expanded_source,
collect_values,
collect_exprs,
collect_instructions,
collect_exprs: false,
collect_instructions: true,
collect_lines,
},
call.span(),

View File

@ -1,4 +1,5 @@
use nu_engine::{command_prelude::*, get_eval_block, get_eval_expression_with_input};
use nu_engine::{command_prelude::*, ClosureEvalOnce};
use nu_protocol::engine::Closure;
use std::time::Instant;
#[derive(Clone)]
@ -10,16 +11,18 @@ impl Command for TimeIt {
}
fn description(&self) -> &str {
"Time the running time of a block."
"Time how long it takes a closure to run."
}
fn extra_description(&self) -> &str {
"Any pipeline input given to this command is passed to the closure. Note that streaming inputs may affect timing results, and it is recommended to add a `collect` command before this if the input is a stream.
This command will bubble up any errors encountered when running the closure. The return pipeline of the closure is collected into a value and then discarded."
}
fn signature(&self) -> nu_protocol::Signature {
Signature::build("timeit")
.required(
"command",
SyntaxShape::OneOf(vec![SyntaxShape::Block, SyntaxShape::Expression]),
"The command or block to run.",
)
.required("command", SyntaxShape::Closure(None), "The closure to run.")
.input_output_types(vec![
(Type::Any, Type::Duration),
(Type::Nothing, Type::Duration),
@ -46,51 +49,38 @@ impl Command for TimeIt {
// reset outdest, so the command can write to stdout and stderr.
let stack = &mut stack.push_redirection(None, None);
let command_to_run = call.positional_nth(stack, 0);
let closure: Closure = call.req(engine_state, stack, 0)?;
let closure = ClosureEvalOnce::new_preserve_out_dest(engine_state, stack, closure);
// Get the start time after all other computation has been done.
let start_time = Instant::now();
closure.run_with_input(input)?.into_value(call.head)?;
let time = start_time.elapsed();
if let Some(command_to_run) = command_to_run {
if let Some(block_id) = command_to_run.as_block() {
let eval_block = get_eval_block(engine_state);
let block = engine_state.get_block(block_id);
eval_block(engine_state, stack, block, input)?
} else {
let eval_expression_with_input = get_eval_expression_with_input(engine_state);
let expression = &command_to_run.clone();
eval_expression_with_input(engine_state, stack, expression, input)?
}
} else {
PipelineData::empty()
}
.into_value(call.head)?;
let end_time = Instant::now();
let output = Value::duration(
end_time.saturating_duration_since(start_time).as_nanos() as i64,
call.head,
);
let output = Value::duration(time.as_nanos() as i64, call.head);
Ok(output.into_pipeline_data())
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Times a command within a closure",
description: "Time a closure containing one command",
example: "timeit { sleep 500ms }",
result: None,
},
Example {
description: "Times a command using an existing input",
example: "http get https://www.nushell.sh/book/ | timeit { split chars }",
description: "Time a closure with an input value",
example: "'A really long string' | timeit { split chars }",
result: None,
},
Example {
description: "Times a command invocation",
example: "timeit ls -la",
description: "Time a closure with an input stream",
example: "open some_file.txt | collect | timeit { split chars }",
result: None,
},
Example {
description: "Time a closure containing a pipeline",
example: "timeit { open some_file.txt | split chars }",
result: None,
},
]
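The rewritten `timeit` boils down to starting an `Instant` right before the closure runs and converting the elapsed time to nanoseconds. A stripped-down sketch of that timing pattern, with a plain Rust closure standing in for the Nushell closure:

```rust
use std::time::Instant;

// Illustrative only: the real command evaluates a Nushell closure via
// ClosureEvalOnce and collects its output before reading the elapsed time.
fn time_it<T>(work: impl FnOnce() -> T) -> (T, i64) {
    let start = Instant::now();
    let result = work();
    // Nushell durations are i64 nanoseconds, hence the cast.
    let nanos = start.elapsed().as_nanos() as i64;
    (result, nanos)
}

fn main() {
    let (sum, nanos) = time_it(|| (1..=1_000u64).sum::<u64>());
    println!("sum = {sum}, took {nanos} ns");
}
```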

View File

@ -0,0 +1,71 @@
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ViewBlocks;
impl Command for ViewBlocks {
fn name(&self) -> &str {
"view blocks"
}
fn description(&self) -> &str {
"View the blocks registered in nushell's EngineState memory."
}
fn extra_description(&self) -> &str {
"These are blocks parsed and loaded at runtime as well as any blocks that accumulate in the repl."
}
fn signature(&self) -> nu_protocol::Signature {
Signature::build("view blocks")
.input_output_types(vec![(
Type::Nothing,
Type::Table(
[
("block_id".into(), Type::Int),
("content".into(), Type::String),
("start".into(), Type::Int),
("end".into(), Type::Int),
]
.into(),
),
)])
.category(Category::Debug)
}
fn run(
&self,
engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let mut records = vec![];
for block_id in 0..engine_state.num_blocks() {
let block = engine_state.get_block(nu_protocol::BlockId::new(block_id));
if let Some(span) = block.span {
let contents_bytes = engine_state.get_span_contents(span);
let contents_string = String::from_utf8_lossy(contents_bytes);
let cur_rec = record! {
"block_id" => Value::int(block_id as i64, span),
"content" => Value::string(contents_string.trim().to_string(), span),
"start" => Value::int(span.start as i64, span),
"end" => Value::int(span.end as i64, span),
};
records.push(Value::record(cur_rec, span));
}
}
Ok(Value::list(records, call.head).into_pipeline_data())
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "View the blocks registered in Nushell's EngineState memory",
example: r#"view blocks"#,
result: None,
}]
}
}

View File

@ -33,6 +33,34 @@ impl Command for ViewSource {
let arg_span = arg.span();
let source = match arg {
Value::Int { val, .. } => {
if let Some(block) =
engine_state.try_get_block(nu_protocol::BlockId::new(val as usize))
{
if let Some(span) = block.span {
let contents = engine_state.get_span_contents(span);
Ok(Value::string(String::from_utf8_lossy(contents), call.head)
.into_pipeline_data())
} else {
Err(ShellError::GenericError {
error: "Cannot view int value".to_string(),
msg: "the block does not have a viewable span".to_string(),
span: Some(arg_span),
help: None,
inner: vec![],
})
}
} else {
Err(ShellError::GenericError {
error: format!("Block Id {} does not exist", arg.coerce_into_string()?),
msg: "this number does not correspond to a block".to_string(),
span: Some(arg_span),
help: None,
inner: vec![],
})
}
}
Value::String { val, .. } => {
if let Some(decl_id) = engine_state.find_decl(val.as_bytes(), &[]) {
// arg is a command
@ -130,7 +158,7 @@ impl Command for ViewSource {
Ok(Value::string(final_contents, call.head).into_pipeline_data())
} else {
Err(ShellError::GenericError {
error: "Cannot view value".to_string(),
error: "Cannot view string value".to_string(),
msg: "the command does not have a viewable block span".to_string(),
span: Some(arg_span),
help: None,
@ -139,7 +167,7 @@ impl Command for ViewSource {
}
} else {
Err(ShellError::GenericError {
error: "Cannot view value".to_string(),
error: "Cannot view string decl value".to_string(),
msg: "the command does not have a viewable block".to_string(),
span: Some(arg_span),
help: None,
@ -155,7 +183,7 @@ impl Command for ViewSource {
.into_pipeline_data())
} else {
Err(ShellError::GenericError {
error: "Cannot view value".to_string(),
error: "Cannot view string module value".to_string(),
msg: "the module does not have a viewable block".to_string(),
span: Some(arg_span),
help: None,
@ -164,7 +192,7 @@ impl Command for ViewSource {
}
} else {
Err(ShellError::GenericError {
error: "Cannot view value".to_string(),
error: "Cannot view string value".to_string(),
msg: "this name does not correspond to a viewable value".to_string(),
span: Some(arg_span),
help: None,

View File

@ -27,6 +27,10 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
}
// Filters
#[cfg(feature = "rand")]
bind_command! {
Shuffle
}
bind_command! {
All,
Any,
@ -57,6 +61,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
SplitBy,
Take,
Merge,
MergeDeep,
Move,
TakeWhile,
TakeUntil,
@ -64,6 +69,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
Length,
Lines,
ParEach,
ChunkBy,
Prepend,
Range,
Reduce,
@ -71,7 +77,6 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
Rename,
Reverse,
Select,
Shuffle,
Skip,
SkipUntil,
SkipWhile,
@ -102,6 +107,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
bind_command! {
Path,
PathBasename,
PathSelf,
PathDirname,
PathExists,
PathExpand,
@ -113,6 +119,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
};
// System
#[cfg(feature = "os")]
bind_command! {
Complete,
External,
@ -154,23 +161,27 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
MetadataSet,
TimeIt,
View,
ViewBlocks,
ViewFiles,
ViewIr,
ViewSource,
ViewSpan,
};
#[cfg(windows)]
#[cfg(all(feature = "os", windows))]
bind_command! { RegistryQuery }
#[cfg(any(
target_os = "android",
target_os = "linux",
target_os = "freebsd",
target_os = "netbsd",
target_os = "openbsd",
target_os = "macos",
target_os = "windows"
#[cfg(all(
feature = "os",
any(
target_os = "android",
target_os = "linux",
target_os = "freebsd",
target_os = "netbsd",
target_os = "openbsd",
target_os = "macos",
target_os = "windows"
)
))]
bind_command! { Ps };
@ -218,6 +229,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
};
// FileSystem
#[cfg(feature = "os")]
bind_command! {
Cd,
Ls,
@ -230,11 +242,13 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
Rm,
Save,
Touch,
UTouch,
Glob,
Watch,
};
// Platform
#[cfg(feature = "os")]
bind_command! {
Ansi,
AnsiLink,
@ -247,11 +261,13 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
IsTerminal,
Kill,
Sleep,
Term,
TermSize,
TermQuery,
Whoami,
};
#[cfg(unix)]
#[cfg(all(unix, feature = "os"))]
bind_command! { ULimit };
// Date
@ -335,6 +351,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
WithEnv,
ConfigNu,
ConfigEnv,
ConfigFlatten,
ConfigMeta,
ConfigReset,
};
@ -376,6 +393,7 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
}
// Network
#[cfg(feature = "network")]
bind_command! {
Http,
HttpDelete,
@ -385,16 +403,20 @@ pub fn add_shell_command_context(mut engine_state: EngineState) -> EngineState {
HttpPost,
HttpPut,
HttpOptions,
Port,
}
bind_command! {
Url,
UrlBuildQuery,
UrlSplitQuery,
UrlDecode,
UrlEncode,
UrlJoin,
UrlParse,
Port,
}
// Random
#[cfg(feature = "rand")]
bind_command! {
Random,
RandomBool,

View File

@ -1,4 +1,6 @@
use nu_engine::{command_prelude::*, get_full_help};
use nu_cmd_base::util::get_editor;
use nu_engine::{command_prelude::*, env_to_strings, get_full_help};
use nu_system::ForegroundChild;
#[derive(Clone)]
pub struct ConfigMeta;
@ -36,3 +38,79 @@ impl Command for ConfigMeta {
vec!["options", "setup"]
}
}
#[cfg(not(feature = "os"))]
pub(super) fn start_editor(
_: &'static str,
_: &EngineState,
_: &mut Stack,
call: &Call,
) -> Result<PipelineData, ShellError> {
Err(ShellError::DisabledOsSupport {
msg: "Running external commands is not available without OS support.".to_string(),
span: Some(call.head),
})
}
#[cfg(feature = "os")]
pub(super) fn start_editor(
config_path: &'static str,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<PipelineData, ShellError> {
// Find the editor executable.
let (editor_name, editor_args) = get_editor(engine_state, stack, call.head)?;
let paths = nu_engine::env::path_str(engine_state, stack, call.head)?;
let cwd = engine_state.cwd(Some(stack))?;
let editor_executable =
crate::which(&editor_name, &paths, cwd.as_ref()).ok_or(ShellError::ExternalCommand {
label: format!("`{editor_name}` not found"),
help: "Failed to find the editor executable".into(),
span: call.head,
})?;
let Some(config_path) = engine_state.get_config_path(config_path) else {
return Err(ShellError::GenericError {
error: format!("Could not find $nu.{config_path}"),
msg: format!("Could not find $nu.{config_path}"),
span: None,
help: None,
inner: vec![],
});
};
let config_path = config_path.to_string_lossy().to_string();
// Create the command.
let mut command = std::process::Command::new(editor_executable);
// Configure PWD.
command.current_dir(cwd);
// Configure environment variables.
let envs = env_to_strings(engine_state, stack)?;
command.env_clear();
command.envs(envs);
// Configure args.
command.arg(config_path);
command.args(editor_args);
// Spawn the child process. On Unix, also put the child process to
// foreground if we're in an interactive session.
#[cfg(windows)]
let child = ForegroundChild::spawn(command)?;
#[cfg(unix)]
let child = ForegroundChild::spawn(
command,
engine_state.is_interactive,
&engine_state.pipeline_externals_state,
)?;
// Wrap the output into a `PipelineData::ByteStream`.
let child = nu_protocol::process::ChildProcess::new(child, None, false, call.head)?;
Ok(PipelineData::ByteStream(
ByteStream::child(child, call.head),
None,
))
}

View File

@ -1,7 +1,4 @@
use nu_cmd_base::util::get_editor;
use nu_engine::{command_prelude::*, env_to_strings};
use nu_protocol::{process::ChildProcess, ByteStream};
use nu_system::ForegroundChild;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ConfigEnv;
@ -15,7 +12,16 @@ impl Command for ConfigEnv {
Signature::build(self.name())
.category(Category::Env)
.input_output_types(vec![(Type::Nothing, Type::Any)])
.switch("default", "Print default `env.nu` file instead.", Some('d'))
.switch(
"default",
"Print the internal default `env.nu` file instead.",
Some('d'),
)
.switch(
"doc",
"Print a commented `env.nu` with documentation instead.",
Some('s'),
)
// TODO: Signature narrower than what run actually supports theoretically
}
@ -26,18 +32,18 @@ impl Command for ConfigEnv {
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "allow user to open and update nu env",
description: "open user's env.nu in the default editor",
example: "config env",
result: None,
},
Example {
description: "allow user to print default `env.nu` file",
example: "config env --default,",
description: "pretty-print a commented `env.nu` that explains common settings",
example: "config env --doc | nu-highlight,",
result: None,
},
Example {
description: "allow saving the default `env.nu` locally",
example: "config env --default | save -f ~/.config/nushell/default_env.nu",
description: "pretty-print the internal `env.nu` file which is loaded before the user's environment",
example: "config env --default | nu-highlight,",
result: None,
},
]
@ -50,66 +56,28 @@ impl Command for ConfigEnv {
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let default_flag = call.has_flag(engine_state, stack, "default")?;
let doc_flag = call.has_flag(engine_state, stack, "doc")?;
if default_flag && doc_flag {
return Err(ShellError::IncompatibleParameters {
left_message: "can't use `--default` at the same time".into(),
left_span: call.get_flag_span(stack, "default").expect("has flag"),
right_message: "because of `--doc`".into(),
right_span: call.get_flag_span(stack, "doc").expect("has flag"),
});
}
// `--default` flag handling
if call.has_flag(engine_state, stack, "default")? {
let head = call.head;
return Ok(Value::string(nu_utils::get_default_env(), head).into_pipeline_data());
}
// Find the editor executable.
let (editor_name, editor_args) = get_editor(engine_state, stack, call.head)?;
let paths = nu_engine::env::path_str(engine_state, stack, call.head)?;
let cwd = engine_state.cwd(Some(stack))?;
let editor_executable = crate::which(&editor_name, &paths, cwd.as_ref()).ok_or(
ShellError::ExternalCommand {
label: format!("`{editor_name}` not found"),
help: "Failed to find the editor executable".into(),
span: call.head,
},
)?;
// `--doc` flag handling
if doc_flag {
let head = call.head;
return Ok(Value::string(nu_utils::get_doc_env(), head).into_pipeline_data());
}
let Some(env_path) = engine_state.get_config_path("env-path") else {
return Err(ShellError::GenericError {
error: "Could not find $nu.env-path".into(),
msg: "Could not find $nu.env-path".into(),
span: None,
help: None,
inner: vec![],
});
};
let env_path = env_path.to_string_lossy().to_string();
// Create the command.
let mut command = std::process::Command::new(editor_executable);
// Configure PWD.
command.current_dir(cwd);
// Configure environment variables.
let envs = env_to_strings(engine_state, stack)?;
command.env_clear();
command.envs(envs);
// Configure args.
command.arg(env_path);
command.args(editor_args);
// Spawn the child process. On Unix, also put the child process to
// foreground if we're in an interactive session.
#[cfg(windows)]
let child = ForegroundChild::spawn(command)?;
#[cfg(unix)]
let child = ForegroundChild::spawn(
command,
engine_state.is_interactive,
&engine_state.pipeline_externals_state,
)?;
// Wrap the output into a `PipelineData::ByteStream`.
let child = ChildProcess::new(child, None, false, call.head)?;
Ok(PipelineData::ByteStream(
ByteStream::child(child, call.head),
None,
))
super::config_::start_editor("env-path", engine_state, stack, call)
}
}

View File

@ -0,0 +1,195 @@
use nu_engine::command_prelude::*;
use nu_utils::JsonFlattener; // Ensure this import is present

#[derive(Clone)]
pub struct ConfigFlatten;
impl Command for ConfigFlatten {
fn name(&self) -> &str {
"config flatten"
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.category(Category::Debug)
.input_output_types(vec![(Type::Nothing, Type::record())])
}
fn description(&self) -> &str {
"Show the current configuration in a flattened form."
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Show the current configuration in a flattened form",
example: "config flatten",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
// Get the Config instance from the EngineState
let config = engine_state.get_config();
// Serialize the Config instance to JSON
let serialized_config =
serde_json::to_value(&**config).map_err(|err| ShellError::GenericError {
error: format!("Failed to serialize config to json: {err}"),
msg: "".into(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
// Create a JsonFlattener instance with appropriate arguments
let flattener = JsonFlattener {
separator: ".",
alt_array_flattening: false,
preserve_arrays: true,
};
// Flatten the JSON value
let flattened_config_str = flattener.flatten(&serialized_config).to_string();
let flattened_values =
convert_string_to_value(&flattened_config_str, engine_state, call.head)?;
Ok(flattened_values.into_pipeline_data())
}
}
// From here below is taken from `from json`. Would be nice to have a nu-utils-value crate that could be shared
fn convert_string_to_value(
string_input: &str,
engine_state: &EngineState,
span: Span,
) -> Result<Value, ShellError> {
match nu_json::from_str(string_input) {
Ok(value) => Ok(convert_nujson_to_value(None, value, engine_state, span)),
Err(x) => match x {
nu_json::Error::Syntax(_, row, col) => {
let label = x.to_string();
let label_span = convert_row_column_to_span(row, col, string_input);
Err(ShellError::GenericError {
error: "Error while parsing JSON text".into(),
msg: "error parsing JSON text".into(),
span: Some(span),
help: None,
inner: vec![ShellError::OutsideSpannedLabeledError {
src: string_input.into(),
error: "Error while parsing JSON text".into(),
msg: label,
span: label_span,
}],
})
}
x => Err(ShellError::CantConvert {
to_type: format!("structured json data ({x})"),
from_type: "string".into(),
span,
help: None,
}),
},
}
}
fn convert_nujson_to_value(
key: Option<String>,
value: nu_json::Value,
engine_state: &EngineState,
span: Span,
) -> Value {
match value {
nu_json::Value::Array(array) => Value::list(
array
.into_iter()
.map(|x| convert_nujson_to_value(key.clone(), x, engine_state, span))
.collect(),
span,
),
nu_json::Value::Bool(b) => Value::bool(b, span),
nu_json::Value::F64(f) => Value::float(f, span),
nu_json::Value::I64(i) => {
if let Some(closure_str) = expand_closure(key.clone(), i, engine_state) {
Value::string(closure_str, span)
} else {
Value::int(i, span)
}
}
nu_json::Value::Null => Value::nothing(span),
nu_json::Value::Object(k) => Value::record(
k.into_iter()
.map(|(k, v)| {
let mut key = k.clone();
// Keep .Closure.val and .block_id as part of the key during conversion to value
let value = convert_nujson_to_value(Some(key.clone()), v, engine_state, span);
// Replace .Closure.val and .block_id from the key after the conversion
if key.contains(".Closure.val") || key.contains(".block_id") {
key = key.replace(".Closure.val", "").replace(".block_id", "");
}
(key, value)
})
.collect(),
span,
),
nu_json::Value::U64(u) => {
if u > i64::MAX as u64 {
Value::error(
ShellError::CantConvert {
to_type: "i64 sized integer".into(),
from_type: "value larger than i64".into(),
span,
help: None,
},
span,
)
} else if let Some(closure_str) = expand_closure(key.clone(), u as i64, engine_state) {
Value::string(closure_str, span)
} else {
Value::int(u as i64, span)
}
}
nu_json::Value::String(s) => Value::string(s, span),
}
}
// If the block_id is a real block id, then it should expand into the closure contents, otherwise return None
fn expand_closure(
key: Option<String>,
block_id: i64,
engine_state: &EngineState,
) -> Option<String> {
match key {
Some(key) if key.contains(".Closure.val") || key.contains(".block_id") => engine_state
.try_get_block(nu_protocol::BlockId::new(block_id as usize))
.and_then(|block| block.span)
.map(|span| {
let contents = engine_state.get_span_contents(span);
String::from_utf8_lossy(contents).to_string()
}),
_ => None,
}
}
// Converts row+column to a Span, assuming bytes (1-based rows)
fn convert_row_column_to_span(row: usize, col: usize, contents: &str) -> Span {
let mut cur_row = 1;
let mut cur_col = 1;
for (offset, curr_byte) in contents.bytes().enumerate() {
if curr_byte == b'\n' {
cur_row += 1;
cur_col = 1;
}
if cur_row >= row && cur_col >= col {
return Span::new(offset, offset);
} else {
cur_col += 1;
}
}
Span::new(contents.len(), contents.len())
}
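
The new `config flatten` file above serializes the whole config to JSON and then flattens it with `nu_utils::JsonFlattener`. A minimal sketch of that flattening step in isolation, assuming the same `nu_utils` and `serde_json` dependencies the file uses; the nested sample value and the key names in the comments are illustrative, not captured command output:

```rust
// Sketch: flatten a small nested serde_json value the way `config flatten`
// flattens the serialized config above. Sample data is made up.
use nu_utils::JsonFlattener;
use serde_json::json;

fn main() {
    let nested = json!({
        "history": { "max_size": 100_000, "file_format": "plaintext" },
        "shell_integration": { "osc2": true }
    });
    let flattener = JsonFlattener {
        separator: ".",
        alt_array_flattening: false,
        preserve_arrays: true,
    };
    // Keys come out joined by the separator, e.g. "history.max_size" and
    // "shell_integration.osc2"; how arrays are rendered depends on the
    // preserve_arrays setting.
    let flat = flattener.flatten(&nested);
    println!("{}", serde_json::to_string_pretty(&flat).unwrap_or_default());
}
```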


@ -1,7 +1,4 @@
use nu_cmd_base::util::get_editor;
use nu_engine::{command_prelude::*, env_to_strings};
use nu_protocol::{process::ChildProcess, ByteStream};
use nu_system::ForegroundChild;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ConfigNu;
@ -17,10 +14,14 @@ impl Command for ConfigNu {
.input_output_types(vec![(Type::Nothing, Type::Any)])
.switch(
"default",
"Print default `config.nu` file instead.",
"Print the internal default `config.nu` file instead.",
Some('d'),
)
// TODO: Signature narrower than what run actually supports theoretically
.switch(
"doc",
"Print a commented `config.nu` with documentation instead.",
Some('s'),
)
}
fn description(&self) -> &str {
@ -30,18 +31,19 @@ impl Command for ConfigNu {
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "allow user to open and update nu config",
description: "open user's config.nu in the default editor",
example: "config nu",
result: None,
},
Example {
description: "allow user to print default `config.nu` file",
example: "config nu --default,",
description: "pretty-print a commented `config.nu` that explains common settings",
example: "config nu --doc | nu-highlight",
result: None,
},
Example {
description: "allow saving the default `config.nu` locally",
example: "config nu --default | save -f ~/.config/nushell/default_config.nu",
description:
"pretty-print the internal `config.nu` file which is loaded before user's config",
example: "config nu --default | nu-highlight",
result: None,
},
]
@ -54,66 +56,29 @@ impl Command for ConfigNu {
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let default_flag = call.has_flag(engine_state, stack, "default")?;
let doc_flag = call.has_flag(engine_state, stack, "doc")?;
if default_flag && doc_flag {
return Err(ShellError::IncompatibleParameters {
left_message: "can't use `--default` at the same time".into(),
left_span: call.get_flag_span(stack, "default").expect("has flag"),
right_message: "because of `--doc`".into(),
right_span: call.get_flag_span(stack, "doc").expect("has flag"),
});
}
// `--default` flag handling
if call.has_flag(engine_state, stack, "default")? {
if default_flag {
let head = call.head;
return Ok(Value::string(nu_utils::get_default_config(), head).into_pipeline_data());
}
// Find the editor executable.
let (editor_name, editor_args) = get_editor(engine_state, stack, call.head)?;
let paths = nu_engine::env::path_str(engine_state, stack, call.head)?;
let cwd = engine_state.cwd(Some(stack))?;
let editor_executable = crate::which(&editor_name, &paths, cwd.as_ref()).ok_or(
ShellError::ExternalCommand {
label: format!("`{editor_name}` not found"),
help: "Failed to find the editor executable".into(),
span: call.head,
},
)?;
// `--doc` flag handling
if doc_flag {
let head = call.head;
return Ok(Value::string(nu_utils::get_doc_config(), head).into_pipeline_data());
}
let Some(config_path) = engine_state.get_config_path("config-path") else {
return Err(ShellError::GenericError {
error: "Could not find $nu.config-path".into(),
msg: "Could not find $nu.config-path".into(),
span: None,
help: None,
inner: vec![],
});
};
let config_path = config_path.to_string_lossy().to_string();
// Create the command.
let mut command = std::process::Command::new(editor_executable);
// Configure PWD.
command.current_dir(cwd);
// Configure environment variables.
let envs = env_to_strings(engine_state, stack)?;
command.env_clear();
command.envs(envs);
// Configure args.
command.arg(config_path);
command.args(editor_args);
// Spawn the child process. On Unix, also put the child process to
// foreground if we're in an interactive session.
#[cfg(windows)]
let child = ForegroundChild::spawn(command)?;
#[cfg(unix)]
let child = ForegroundChild::spawn(
command,
engine_state.is_interactive,
&engine_state.pipeline_externals_state,
)?;
// Wrap the output into a `PipelineData::ByteStream`.
let child = ChildProcess::new(child, None, false, call.head)?;
Ok(PipelineData::ByteStream(
ByteStream::child(child, call.head),
None,
))
super::config_::start_editor("config-path", engine_state, stack, call)
}
}


@ -1,8 +1,11 @@
mod config_;
mod config_env;
mod config_flatten;
mod config_nu;
mod config_reset;
pub use config_::ConfigMeta;
pub use config_env::ConfigEnv;
pub use config_flatten::ConfigFlatten;
pub use config_nu::ConfigNu;
pub use config_reset::ConfigReset;


@ -5,6 +5,7 @@ mod source_env;
mod with_env;
pub use config::ConfigEnv;
pub use config::ConfigFlatten;
pub use config::ConfigMeta;
pub use config::ConfigNu;
pub use config::ConfigReset;


@ -103,3 +103,9 @@ fn is_root_impl() -> bool {
elevated
}
#[cfg(target_arch = "wasm32")]
fn is_root_impl() -> bool {
// in wasm we don't have a user system, so technically we are never root
false
}


@ -1,4 +1,3 @@
use super::util::get_rest_for_glob_pattern;
use crate::{DirBuilder, DirInfo, FileInfo};
#[allow(deprecated)]
use nu_engine::{command_prelude::*, current_dir};
@ -13,8 +12,8 @@ pub struct Du;
#[derive(Deserialize, Clone, Debug)]
pub struct DuArgs {
path: Option<Spanned<NuGlob>>,
all: bool,
deref: bool,
long: bool,
exclude: Option<Spanned<NuGlob>>,
#[serde(rename = "max-depth")]
max_depth: Option<Spanned<i64>>,
@ -40,16 +39,16 @@ impl Command for Du {
SyntaxShape::OneOf(vec![SyntaxShape::GlobPattern, SyntaxShape::String]),
"Starting directory.",
)
.switch(
"all",
"Output file sizes as well as directory sizes",
Some('a'),
)
.switch(
"deref",
"Dereference symlinks to their targets for size",
Some('r'),
)
.switch(
"long",
"Get underlying directories and files for each entry",
Some('l'),
)
.named(
"exclude",
SyntaxShape::GlobPattern,
@ -95,13 +94,13 @@ impl Command for Du {
});
}
}
let all = call.has_flag(engine_state, stack, "all")?;
let deref = call.has_flag(engine_state, stack, "deref")?;
let long = call.has_flag(engine_state, stack, "long")?;
let exclude = call.get_flag(engine_state, stack, "exclude")?;
#[allow(deprecated)]
let current_dir = current_dir(engine_state, stack)?;
let paths = get_rest_for_glob_pattern(engine_state, stack, call, 0)?;
let paths = call.rest::<Spanned<NuGlob>>(engine_state, stack, 0)?;
let paths = if !call.has_positional_args(stack, 0) {
None
} else {
@ -112,8 +111,8 @@ impl Command for Du {
None => {
let args = DuArgs {
path: None,
all,
deref,
long,
exclude,
max_depth,
min_size,
@ -128,8 +127,8 @@ impl Command for Du {
for p in paths {
let args = DuArgs {
path: Some(p),
all,
deref,
long,
exclude: exclude.clone(),
max_depth,
min_size,
@ -175,7 +174,6 @@ fn du_for_one_pattern(
})
})?;
let include_files = args.all;
let mut paths = match args.path {
Some(p) => nu_engine::glob_from(&p, current_dir, span, None),
// The * pattern should never fail.
@ -189,17 +187,10 @@ fn du_for_one_pattern(
None,
),
}
.map(|f| f.1)?
.filter(move |p| {
if include_files {
true
} else {
matches!(p, Ok(f) if f.is_dir())
}
});
.map(|f| f.1)?;
let all = args.all;
let deref = args.deref;
let long = args.long;
let max_depth = args.max_depth.map(|f| f.item as u64);
let min_size = args.min_size.map(|f| f.item as u64);
@ -208,7 +199,7 @@ fn du_for_one_pattern(
min: min_size,
deref,
exclude,
all,
long,
};
let mut output: Vec<Value> = vec![];
@ -217,7 +208,7 @@ fn du_for_one_pattern(
Ok(a) => {
if a.is_dir() {
output.push(DirInfo::new(a, &params, max_depth, span, signals)?.into());
} else if let Ok(v) = FileInfo::new(a, deref, span) {
} else if let Ok(v) = FileInfo::new(a, deref, span, params.long) {
output.push(v.into());
}
}
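
The most mechanical change in this `du` hunk (repeated in the `ls`, `open`, and `rm` hunks further down) is replacing the local `get_rest_for_glob_pattern` helper with the generic `Call::rest`. A stripped-down sketch of that pattern, assuming the prelude types imported in the files above; `collect_glob_args` is a hypothetical name, not code from the diff:

```rust
// Sketch: read all trailing positional arguments as spanned globs, the way the
// updated commands do, keeping "no positionals given" distinct from an empty list.
use nu_engine::command_prelude::*;
use nu_protocol::NuGlob;

fn collect_glob_args(
    engine_state: &EngineState,
    stack: &mut Stack,
    call: &Call,
) -> Result<Option<Vec<Spanned<NuGlob>>>, ShellError> {
    // Deserialize every positional from index 0 onward.
    let paths = call.rest::<Spanned<NuGlob>>(engine_state, stack, 0)?;
    if !call.has_positional_args(stack, 0) {
        Ok(None) // callers fall back to a default pattern such as "*"
    } else {
        Ok(Some(paths))
    }
}
```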


@ -1,5 +1,5 @@
use nu_engine::command_prelude::*;
use nu_protocol::Signals;
use nu_protocol::{ListStream, Signals};
use wax::{Glob as WaxGlob, WalkBehavior, WalkEntry};
#[derive(Clone)]
@ -223,6 +223,7 @@ impl Command for Glob {
..Default::default()
},
)
.into_owned()
.not(np)
.map_err(|err| ShellError::GenericError {
error: "error with glob's not pattern".into(),
@ -249,6 +250,7 @@ impl Command for Glob {
..Default::default()
},
)
.into_owned()
.flatten();
glob_to_value(
engine_state.signals(),
@ -258,11 +260,9 @@ impl Command for Glob {
no_symlinks,
span,
)
}?;
};
Ok(result
.into_iter()
.into_pipeline_data(span, engine_state.signals().clone()))
Ok(result.into_pipeline_data(span, engine_state.signals().clone()))
}
}
@ -281,29 +281,33 @@ fn convert_patterns(columns: &[Value]) -> Result<Vec<String>, ShellError> {
Ok(res)
}
fn glob_to_value<'a>(
fn glob_to_value(
signals: &Signals,
glob_results: impl Iterator<Item = WalkEntry<'a>>,
glob_results: impl Iterator<Item = WalkEntry<'static>> + Send + 'static,
no_dirs: bool,
no_files: bool,
no_symlinks: bool,
span: Span,
) -> Result<Vec<Value>, ShellError> {
let mut result: Vec<Value> = Vec::new();
for entry in glob_results {
signals.check(span)?;
) -> ListStream {
let map_signals = signals.clone();
let result = glob_results.filter_map(move |entry| {
if let Err(err) = map_signals.check(span) {
return Some(Value::error(err, span));
};
let file_type = entry.file_type();
if !(no_dirs && file_type.is_dir()
|| no_files && file_type.is_file()
|| no_symlinks && file_type.is_symlink())
{
result.push(Value::string(
Some(Value::string(
entry.into_path().to_string_lossy().to_string(),
span,
));
))
} else {
None
}
}
});
Ok(result)
ListStream::new(result, span, signals.clone())
}
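
The notable shift in this `glob` hunk is that `glob_to_value` no longer builds a `Vec<Value>` but returns a `ListStream`, with the interrupt check folded into the iterator so an interrupt surfaces as an error value mid-stream. A small sketch of that shape using only the `nu_protocol` types imported above; `stream_values` and its string input are placeholders for illustration:

```rust
// Sketch: wrap any owned iterator in a ListStream with a per-item signal check,
// mirroring the streaming form glob_to_value takes after this diff.
use nu_protocol::{ListStream, Signals, Span, Value};

fn stream_values(
    values: impl Iterator<Item = String> + Send + 'static,
    signals: &Signals,
    span: Span,
) -> ListStream {
    let per_item_signals = signals.clone();
    let iter = values.filter_map(move |s| {
        // An interrupt becomes an error Value instead of aborting the whole stream.
        if let Err(err) = per_item_signals.check(span) {
            return Some(Value::error(err, span));
        }
        Some(Value::string(s, span))
    });
    ListStream::new(iter, span, signals.clone())
}
```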


@ -1,4 +1,3 @@
use super::util::get_rest_for_glob_pattern;
use crate::{DirBuilder, DirInfo};
use chrono::{DateTime, Local, LocalResult, TimeZone, Utc};
use nu_engine::glob_from;
@ -9,13 +8,12 @@ use nu_path::{expand_path_with, expand_to_real_path};
use nu_protocol::{DataSource, NuGlob, PipelineMetadata, Signals};
use pathdiff::diff_paths;
use rayon::prelude::*;
#[cfg(unix)]
use std::os::unix::fs::PermissionsExt;
use std::{
cmp::Ordering,
path::PathBuf,
sync::mpsc,
sync::{Arc, Mutex},
sync::{mpsc, Arc, Mutex},
time::{SystemTime, UNIX_EPOCH},
};
@ -114,7 +112,7 @@ impl Command for Ls {
call_span,
};
let pattern_arg = get_rest_for_glob_pattern(engine_state, stack, call, 0)?;
let pattern_arg = call.rest::<Spanned<NuGlob>>(engine_state, stack, 0)?;
let input_pattern_arg = if !call.has_positional_args(stack, 0) {
None
} else {
@ -287,28 +285,10 @@ fn ls_for_one_pattern(
nu_path::expand_path_with(pat.item.as_ref(), &cwd, pat.item.is_expand());
// Avoid checking and pushing "*" to the path when directory (do not show contents) flag is true
if !directory && tmp_expanded.is_dir() {
if permission_denied(&tmp_expanded) {
#[cfg(unix)]
let error_msg = format!(
"The permissions of {:o} do not allow access for this user",
tmp_expanded
.metadata()
.expect("this shouldn't be called since we already know there is a dir")
.permissions()
.mode()
& 0o0777
);
#[cfg(not(unix))]
let error_msg = String::from("Permission denied");
return Err(ShellError::GenericError {
error: "Permission denied".into(),
msg: error_msg,
span: Some(p_tag),
help: None,
inner: vec![],
});
}
if is_empty_dir(&tmp_expanded) {
if read_dir(&tmp_expanded, p_tag, use_threads)?
.next()
.is_none()
{
return Ok(Value::test_nothing().into_pipeline_data());
}
just_read_dir = !(pat.item.is_expand() && pat.item.as_ref().contains(GLOB_CHARS));
@ -327,7 +307,7 @@ fn ls_for_one_pattern(
// Avoid pushing "*" to the default path when directory (do not show contents) flag is true
if directory {
(NuGlob::Expand(".".to_string()), false)
} else if is_empty_dir(&cwd) {
} else if read_dir(&cwd, p_tag, use_threads)?.next().is_none() {
return Ok(Value::test_nothing().into_pipeline_data());
} else {
(NuGlob::Expand("*".to_string()), false)
@ -339,7 +319,7 @@ fn ls_for_one_pattern(
let path = pattern_arg.into_spanned(p_tag);
let (prefix, paths) = if just_read_dir {
let expanded = nu_path::expand_path_with(path.item.as_ref(), &cwd, path.item.is_expand());
let paths = read_dir(&expanded)?;
let paths = read_dir(&expanded, p_tag, use_threads)?;
// just need to read the directory, so prefix is path itself.
(Some(expanded), paths)
} else {
@ -492,20 +472,6 @@ fn ls_for_one_pattern(
.into_pipeline_data(call_span, signals.clone()))
}
fn permission_denied(dir: impl AsRef<Path>) -> bool {
match dir.as_ref().read_dir() {
Err(e) => matches!(e.kind(), std::io::ErrorKind::PermissionDenied),
Ok(_) => false,
}
}
fn is_empty_dir(dir: impl AsRef<Path>) -> bool {
match dir.as_ref().read_dir() {
Err(_) => true,
Ok(mut s) => s.next().is_none(),
}
}
fn is_hidden_dir(dir: impl AsRef<Path>) -> bool {
#[cfg(windows)]
{
@ -979,10 +945,37 @@ mod windows_helper {
#[allow(clippy::type_complexity)]
fn read_dir(
f: &Path,
span: Span,
use_threads: bool,
) -> Result<Box<dyn Iterator<Item = Result<PathBuf, ShellError>> + Send>, ShellError> {
let iter = f.read_dir()?.map(|d| {
d.map(|r| r.path())
.map_err(|e| ShellError::IOError { msg: e.to_string() })
});
Ok(Box::new(iter))
let items = f
.read_dir()
.map_err(|error| {
if error.kind() == std::io::ErrorKind::PermissionDenied {
return ShellError::GenericError {
error: "Permission denied".into(),
msg: "The permissions may not allow access for this user".into(),
span: Some(span),
help: None,
inner: vec![],
};
}
error.into()
})?
.map(|d| {
d.map(|r| r.path())
.map_err(|e| ShellError::IOError { msg: e.to_string() })
});
if !use_threads {
let mut collected = items.collect::<Vec<_>>();
collected.sort_by(|a, b| match (a, b) {
(Ok(a), Ok(b)) => a.cmp(b),
(Ok(_), Err(_)) => Ordering::Greater,
(Err(_), Ok(_)) => Ordering::Less,
(Err(_), Err(_)) => Ordering::Equal,
});
return Ok(Box::new(collected.into_iter()));
}
Ok(Box::new(items))
}
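
With the non-threaded path now collecting before returning, the error-before-success ordering in `ls` comes entirely from the comparator passed to `sort_by`. A standalone sketch of just that comparator on made-up data:

```rust
// Sketch: sort Result entries so errors come first, then paths in order,
// matching the comparator used by the new read_dir above. Data is invented.
use std::cmp::Ordering;
use std::path::PathBuf;

fn main() {
    let mut entries: Vec<Result<PathBuf, String>> = vec![
        Ok(PathBuf::from("b.txt")),
        Err("permission denied".into()),
        Ok(PathBuf::from("a.txt")),
    ];
    entries.sort_by(|a, b| match (a, b) {
        (Ok(a), Ok(b)) => a.cmp(b),
        (Ok(_), Err(_)) => Ordering::Greater,
        (Err(_), Ok(_)) => Ordering::Less,
        (Err(_), Err(_)) => Ordering::Equal,
    });
    // Prints the Err entry first, then a.txt, then b.txt.
    for entry in entries {
        println!("{entry:?}");
    }
}
```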


@ -12,6 +12,7 @@ mod ucp;
mod umkdir;
mod umv;
mod util;
mod utouch;
mod watch;
pub use self::open::Open;
@ -27,4 +28,5 @@ pub use touch::Touch;
pub use ucp::UCp;
pub use umkdir::UMkdir;
pub use umv::UMv;
pub use utouch::UTouch;
pub use watch::Watch;


@ -1,7 +1,6 @@
use super::util::get_rest_for_glob_pattern;
#[allow(deprecated)]
use nu_engine::{command_prelude::*, current_dir, get_eval_block};
use nu_protocol::{ast, ByteStream, DataSource, NuGlob, PipelineMetadata};
use nu_protocol::{ast, DataSource, NuGlob, PipelineMetadata};
use std::path::Path;
#[cfg(feature = "sqlite")]
@ -53,7 +52,7 @@ impl Command for Open {
let call_span = call.head;
#[allow(deprecated)]
let cwd = current_dir(engine_state, stack)?;
let mut paths = get_rest_for_glob_pattern(engine_state, stack, call, 0)?;
let mut paths = call.rest::<Spanned<NuGlob>>(engine_state, stack, 0)?;
let eval_block = get_eval_block(engine_state);
if paths.is_empty() && !call.has_positional_args(stack, 0) {
@ -146,6 +145,9 @@ impl Command for Open {
}
};
// Assigning content type should only happen in raw mode. Otherwise, the content
// will potentially be in one of the built-in nushell `from xxx` formats and therefore
// cease to be in the original content-type.... or so I'm told. :)
let content_type = if raw {
path.extension()
.map(|ext| ext.to_string_lossy().to_string())
@ -283,6 +285,9 @@ fn detect_content_type(extension: &str) -> Option<String> {
match extension {
// Per RFC-9512, application/yaml should be used
"yaml" | "yml" => Some("application/yaml".to_string()),
"nu" => Some("application/x-nuscript".to_string()),
"json" | "jsonl" | "ndjson" => Some("application/json".to_string()),
"nuon" => Some("application/x-nuon".to_string()),
_ => mime_guess::from_ext(extension)
.first()
.map(|mime| mime.to_string()),
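
`detect_content_type` is small enough to lift out whole; below is the post-diff shape, with the Nushell-specific extensions special-cased ahead of the `mime_guess` fallback. The `main` and its sample extensions are added only for illustration:

```rust
// Sketch: extension-to-MIME lookup as it reads after this diff, plus a tiny demo.
fn detect_content_type(extension: &str) -> Option<String> {
    match extension {
        // Per RFC-9512, application/yaml should be used
        "yaml" | "yml" => Some("application/yaml".to_string()),
        "nu" => Some("application/x-nuscript".to_string()),
        "json" | "jsonl" | "ndjson" => Some("application/json".to_string()),
        "nuon" => Some("application/x-nuon".to_string()),
        // Anything else defers to mime_guess's extension table.
        _ => mime_guess::from_ext(extension)
            .first()
            .map(|mime| mime.to_string()),
    }
}

fn main() {
    println!("{:?}", detect_content_type("nuon")); // Some("application/x-nuon")
    println!("{:?}", detect_content_type("png")); // Some("image/png") via mime_guess
}
```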


@ -1,4 +1,4 @@
use super::util::{get_rest_for_glob_pattern, try_interaction};
use super::util::try_interaction;
#[allow(deprecated)]
use nu_engine::{command_prelude::*, env::current_dir};
use nu_glob::MatchOptions;
@ -118,7 +118,7 @@ fn rm(
let interactive = call.has_flag(engine_state, stack, "interactive")?;
let interactive_once = call.has_flag(engine_state, stack, "interactive-once")? && !interactive;
let mut paths = get_rest_for_glob_pattern(engine_state, stack, call, 0)?;
let mut paths = call.rest::<Spanned<NuGlob>>(engine_state, stack, 0)?;
if paths.is_empty() {
return Err(ShellError::MissingParameter {

Some files were not shown because too many files have changed in this diff.