Compare commits

...

231 Commits

Author SHA1 Message Date
f93c6680bd Bump to 0.95.0 (#13221)
2024-06-25 21:29:47 +03:00
43e349a274 Mitigate the poor interaction between ndots expansion and non-path strings (#13218)
# Description

@hustcer reported that slashes were disappearing from external args
since #13089:

```
$> ossutil ls oss://abc/b/c
Error: invalid cloud url: "oss:/abc/b/c", please make sure the url starts with: "oss://"

$> ossutil ls 'oss://abc/b/c'
Error: oss: service returned error: StatusCode=403, ErrorCode=UserDisable, ErrorMessage="UserDisable", RequestId=66791EDEFE87B73537120838, Ec=0003-00000801, Bucket=abc, Object=
```

I narrowed this down to the ndots handling, since that does path parsing
and path reconstruction in every case. I decided to change that so that
it only activates if the string contains at least `...`, since that
would be the minimum trigger for ndots, and also to not activate it if
the string contains `://`, since it's probably undesirable for a URL.

Kind of a hack, but I'm not really sure how else to decide whether
someone wants ndots or not.

# User-Facing Changes
- bare strings not containing ndots are not modified
- bare strings containing `://` are not modified
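
A rough sketch of the heuristic in practice:

```nushell
^echo .../foo        # contains `...`, so ndots expansion applies (becomes ../../foo)
^echo ./file.txt     # no `...`, passed through unmodified
^echo oss://abc/b/c  # contains `://`, treated as a URL and left untouched
```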

# Tests + Formatting
Added tests to prevent regression.
2024-06-24 16:39:01 -07:00
4509944988 Fix usage parsing for commands defined in CRLF (windows) files (#13212)
Fixes nushell/nushell#13207

# Description
This fixes the parsing of command usage when that command comes from a
file with CRLF line endings.

See nushell/nushell#13207 for more details.

# User-Facing Changes
Users on Windows will get correct autocompletion for `std` commands.
2024-06-23 18:43:05 -05:00
9b7f899410 Allow missing fields in derived FromValue::from_value calls (#13206)
# Description
In #13031 I added the derive macros for `FromValue` and `IntoValue`. In
that implementation, in particular for structs with named fields, it was
not possible to omit fields while loading them from a value, when the
field is an `Option`. This PR adds extra handling for this behavior, so
if a field is an `Option` and that field is missing in the `Value`, then
the field becomes `None`. This behavior is also tested in
`nu_protocol::value::test_derive::missing_options`.

# User-Facing Changes
When using structs for options or similar, users can now simply omit
fields in the record, and the derived `from_value` method will
understand this, provided the struct has an `Option` type for that field.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
A showcase for this feature would be great. I tried to use the current
derive macro in a plugin of mine for a config, but without this addition
it was annoying to use. So, when this is done, I would add an example
of such plugin configs that may be loaded via `FromValue`.
2024-06-22 13:31:09 -07:00
b6bdadbc6f Suppress column index for default cal output (#13188)
# Description

* As discussed in the comments in #11954, this suppresses the index
column on `cal` output. It does that by running `table -i false` on the
results by default.
* Added new `--as-table/-t` flag to revert to the old behavior and
output the calendar as structured data
* Updated existing tests to use `--as-table`
* Added new tests against the string output
* Updated `length` test which also used `cal`
* Added new example for `--as-table`, with result

# User-Facing Changes

## Breaking change

The *default* `cal` output has changed from a `list` to a `string`. To
obtain structured data from `cal`, use the new `--as-table/-t` flag.
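
For illustration, usage now looks roughly like this:

```nushell
cal             # renders the calendar as a string, with no index column
cal --as-table  # opts back in to the structured table output
```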

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`


2024-06-22 07:41:29 -05:00
dcb6ab6370 Fixed generate command signature (#13200)
# Description

Removes `list<any>` as an input type for the `generate` command. This
command does not accept pipeline input (and cannot, logically). This can
be seen by the use of `_input` in the command's `run()`.

Also, due to #13199, in order to pass `toolkit check pr`, one of the
examples was changed to remove the `result`. This is probably a better
demonstration of the ability of the command to infinitely generate a
list anyway, and an infinite list can't be represented in a `result`.
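
For context, a minimal sketch of the command's intended use, assuming the 0.95-era argument order (initial state first, then the closure):

```nushell
# lazily produce an infinite counter and take only the first five values
generate 1 {|n| {out: $n, next: ($n + 1)} } | first 5
# => [1 2 3 4 5]
```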

# User-Facing Changes

Should only be a change to the help. The input type was never valid and
couldn't have been used.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-06-22 07:37:34 -05:00
db86dd9f26 Polars default infer (#13193)
Addresses performance issues that @maxim-uvarov found with CSV and JSON
lines.

This ensures that the schema inference follows the Polars default of
100 lines. Recent changes caused the default values to be overridden,
which made the entire file be scanned when inferring the schema.
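
In other words (a sketch; the file name is illustrative):

```nushell
# the schema is again inferred from the first 100 lines, not the whole file
polars open data.csv | polars schema
```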
2024-06-22 07:23:42 -05:00
10e84038af nu-explore: Add vertical lines && fix index/transpose issue (#13147)
I believe split lines were implemented originally (I haven't managed to
find it from a quick look); I mean a long time ago, before a lot of
changes were made.

Adding horizontal lines would probably also make sense.

ref #13116
close #13140

Take care

________________

If `explore` is used frequently, or is planned to be, I guess it would
be good to create a test suite for it, so things don't break
occasionally 😅

I did approach it once back then using `expectrl` (literally `expect`),
but there were some issues. Maybe something has changed since. Or some
`clean` mode could be introduced for it, so that outer programs can use
it to control `nu`.

Just thoughts here.
2024-06-21 12:07:57 -07:00
91d44f15c1 Allow plugins to report their own version and store it in the registry (#12883)
# Description

This allows plugins to report their version (and potentially other
metadata in the future). The version is shown in `plugin list` and in
`version`.

The metadata is stored in the registry file, and reflects whatever was
retrieved on `plugin add`, not necessarily the running binary. This can
help you to diagnose if there's some kind of mismatch with what you
expect. We could potentially use this functionality to show a warning or
error if a plugin being run does not have the same version as what was
in the cache file, suggesting `plugin add` be run again, but I haven't
done that at this point.

It is optional, and it requires the plugin author to make some code
changes if they want to provide it, since I can't automatically
determine the version of the calling crate or do anything tricky like
that.

Example:

```
> plugin list | select name version is_running pid
╭───┬────────────────┬─────────┬────────────┬─────╮
│ # │      name      │ version │ is_running │ pid │
├───┼────────────────┼─────────┼────────────┼─────┤
│ 0 │ example        │ 0.93.1  │ false      │     │
│ 1 │ gstat          │ 0.93.1  │ false      │     │
│ 2 │ inc            │ 0.93.1  │ false      │     │
│ 3 │ python_example │ 0.1.0   │ false      │     │
╰───┴────────────────┴─────────┴────────────┴─────╯
```

cc @maxim-uvarov (he asked for it)

# User-Facing Changes

- `plugin list` gets a `version` column
- `version` shows plugin versions when available
- plugin authors *should* add `fn metadata()` to their `impl Plugin`,
but don't have to

# Tests + Formatting

Tested the low level stuff and also the `plugin list` column.

# After Submitting
- [ ] update plugin guide docs
- [ ] update plugin protocol docs (`Metadata` call & response)
- [ ] update plugin template (`fn metadata()` should be easy)
- [ ] release notes
2024-06-21 06:27:09 -05:00
dd8f8861ed Add shape_glob_interpolation to default_config.nu (#13198)
# Description

Just missed this during #13089. Adds `shape_glob_interpolation` to the
config.

This isn't actually going to be visible at all yet, so I debated
whether it's really needed. It's only used to highlight the
quotes themselves, and we don't have any quoted glob interpolations at
the moment.
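
A sketch of setting it manually (the color value here is arbitrary):

```nushell
$env.config.color_config.shape_glob_interpolation = 'cyan_bold'
```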
2024-06-21 06:17:29 -05:00
9845d13347 fix nu-system build on arm64 FreeBSD (#13196)
# Description

Fixes #13194

`ki_stat` is supposed to be a `c_char`, but was defined as `i8`.
Unfortunately, `c_char` is `u8` on Aarch64 (on all platforms), so this
doesn't compile. I fixed it to use `c_char` instead.

I double-checked whether NetBSD is affected; its `libc` code defines it
as `i8` for some reason (erroneously, really), but that doesn't matter
too much. It should be okay there anyway.

Confirmed to be working.
2024-06-21 03:03:10 -07:00
4c82a748c1 Do example (#13190)
# Description

#12056 added support for default and type-checked arguments in `do`
closures.

This PR adds examples for those features.  It also:

* Fixes the TODO (a closure parameter that wasn't being used) that was
preventing a result from being added
* Removes extraneous commas from the descriptions
* Adds an example demonstrating multiple positional closure arguments
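
For reference, a rough sketch of what those closure features look like in practice:

```nushell
do {|x = 100| $x }        # default parameter value => 100
do {|x: int| $x + 1 } 2   # type-checked parameter  => 3
do {|a, b| $a + $b } 1 2  # multiple positional args => 3
```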

# User-Facing Changes

Help examples only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

2024-06-20 18:46:56 -05:00
20834c9d47 Added the ability to turn on performance debugging through an env var for the polars plugin (#13191)
This allows performance debugging to be turned on by setting:

```nushell
$env.POLARS_PLUGIN_PERF = "true"
```

Furthermore, this improves the other plugin debugging by allowing the
debugging env variable to be set at any time, rather than having to be
set before nushell is launched:

```nushell
$env.POLARS_PLUGIN_DEBUG = "true"
```

This PR introduces a `perf` function that will output timing
results. It works very similarly to the perf function available in
`nu_utils::utils::perf`. This version prints everything to stderr so as
not to break the plugin stream, and uses the engine interface to see if
the env variable is configured.

This pull request uses this `perf` function when:
* opening csv files as dataframes
* opening json lines files as dataframes

This will hopefully provide some more fine-grained information on
how long it takes Polars to open different dataframes. The `perf`
function can also be used later for other dataframe use cases.
2024-06-20 16:37:38 -07:00
7d2d573eb8 Added the ability to open json lines dataframes with polars lazy json lines reader. (#13167)
The `--lazy` flag will now use Polars' LazyJsonLinesReader when
opening a JSON lines file with `polars open`.
2024-06-20 10:55:49 -07:00
c09a8a5ec9 add a system level folder for future autoloading (#13180)
# Description

This PR adds a directory to the `$nu` constant that shows where the
system level autoload directory is located at. This folder is modifiable
at compile time with environment variables.
```rust
    // Create a system level directory for nushell scripts, modules, completions, etc
    // that can be changed by setting the NU_VENDOR_AUTOLOAD_DIR env var on any platform
    // before nushell is compiled OR if NU_VENDOR_AUTOLOAD_DIR is not set for non-windows 
    // systems, the PREFIX env var can be set before compile and used as PREFIX/nushell/vendor/autoload

    // pseudo code
    // if env var NU_VENDOR_AUTOLOAD_DIR is set, in any platform, use it
    // if not, if windows, use ALLUSERPROFILE\nushell\vendor\autoload
    // if not, if non-windows, if env var PREFIX is set, use PREFIX/share/nushell/vendor/autoload
    // if not, use the default /usr/share/nushell/vendor/autoload
```

### Windows default
```nushell
❯ $nu.vendor-autoload-dir
C:\ProgramData\nushell\vendor\autoload
```
### Non-Windows default
```nushell
❯ $nu.vendor-autoload-dir
/usr/local/share/nushell/vendor/autoload
```
### Non-Windows with PREFIX set
```nushell
❯ PREFIX=/usr/bob cargo r
❯ $nu.vendor-autoload-dir
/usr/bob/share/nushell/vendor/autoload
```
### Non-Windows with NU_VENDOR_AUTOLOAD_DIR set
```nushell
❯ NU_VENDOR_AUTOLOAD_DIR=/some/other/path/nushell/stuff cargo r
❯ $nu.vendor-autoload-dir
/some/other/path/nushell/stuff
```

> [!IMPORTANT] 
To be clear, this PR does not do the auto-loading; it just sets up the
folder to support that functionality, which can be added in a later PR.
The PR also does not create the folder it defines; it's just setting
the `$nu` constant.
 
2024-06-20 06:30:43 -05:00
bb6cb94e55 Bump actions/checkout from 4.1.6 to 4.1.7 (#13177)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.6
to 4.1.7.
Release notes, sourced from actions/checkout's releases:

**v4.1.7**
- Bump the minor-npm-dependencies group across 1 directory with 4 updates by @dependabot in actions/checkout#1739
- Bump actions/checkout from 3 to 4 by @dependabot in actions/checkout#1697
- Check out other refs/* by commit by @orhantoy in actions/checkout#1774
- Pin actions/checkout's own workflows to a known, good, stable version by @jww3 in actions/checkout#1776

New contributors: @orhantoy made their first contribution in actions/checkout#1774

**Full Changelog**: https://github.com/actions/checkout/compare/v4.1.6...v4.1.7
Commits:
- 692973e Prepare 4.1.7 release (#1775)
- 6ccd57f Pin actions/checkout's own workflows to a known, good, stable version (#1776)
- b17fe1e Handle hidden refs (#1774)
- b80ff79 Bump actions/checkout from 3 to 4 (#1697)
- b1ec302 Bump the minor-npm-dependencies group across 1 directory with 4 updates (#1739)
- See full diff in the [compare view](https://github.com/actions/checkout/compare/v4.1.6...v4.1.7)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-20 16:10:39 +08:00
26bdba2068 Bump interprocess from 2.1.0 to 2.2.0 (#13178)
Bumps [interprocess](https://github.com/kotauskas/interprocess) from
2.1.0 to 2.2.0.
Release notes, sourced from interprocess's releases:

**2.2.0 – Tokio unnamed pipes**
- Tokio-based unnamed pipes, with subpar performance on Windows due to OS API limitations
- Examples for unnamed pipes, both non-async and Tokio
- Impersonation for Windows named pipes
- Improvements to the implementation of Windows pipe flushing on Tokio

**2.1.1**
- Removed async `Incoming` and `futures::Stream` ("`AsyncIterator`") implementations on `local_socket::traits::Listener` implementors – those were actually completely broken, so this change is not breaking in practice and thus does not warrant a bump to 3.0.0
- Fixed `ListenerOptionsExt::mode()` behavior in `umask` fallback mode and improved its documentation
- Moved examples to their own dedicated files with the help of the [`doctest-file`](https://crates.io/crates/doctest-file) crate
Commits:
- 050ae2e Adjust unnamed pipe examples
- 5bcd669 Add named pipe impersonation
- 0735668 Add `peek_msg_len()`
- 316d130 Don't elide flush for from-handle conversion
- 0b1d1ac Crackhead specialization
- 2315ee1 Adjust TODOs
- cba79cf Improve `Debug` of local socket halves
- d80e871 nah
- 9a96e58 Tokio unnamed pipe examples
- 30fa27a Handle conversions for Windows Tokio unnamed pipes
- Additional commits viewable in the [compare view](https://github.com/kotauskas/interprocess/compare/2.1.0...2.2.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-20 16:10:27 +08:00
bdc32345bd Move most of the peculiar argument handling for external calls into the parser (#13089)
# Description

We've had a lot of different issues and PRs related to arg handling with
externals since the rewrite of `run-external` in #12921:

- #12950
- #12955
- #13000
- #13001
- #13021
- #13027
- #13028
- #13073

Many of these are caused by the argument handling of external calls and
`run-external` being very special and involving the parser handing
quoted strings over to `run-external` so that it knows whether to expand
tildes and globs and so on. This is really unusual; it makes
`run-external` harder to use and harder to understand (and is probably
part of the reason why it was rewritten in the first place).

This PR moves a lot more of that work over to the parser, so that by the
time `run-external` gets it, it's dealing with much more normal Nushell
values. In particular:

- Unquoted strings are handled as globs with no expand
- The unescaped-but-quoted handling of strings was removed, and the
parser constructs normal looking strings instead, removing internal
quotes so that `run-external` doesn't have to do it
- Bare word interpolation is now supported and expansion is done in this
case
- Expressions typed as `Glob` containing `Expr::StringInterpolation` now
produce `Value::Glob` instead, with the quoted status from the expr
passed through so we know if it was a bare word
- Bare word interpolation for values typed as `glob` now possible, but
not implemented
- Because expansion is now triggered by `Value::Glob(_, false)` instead
of looking at the expr, externals now support glob types

# User-Facing Changes

- Bare word interpolation works for external command options, and
otherwise embedded in other strings:
  ```nushell
  ^echo --foo=(2 + 2) # prints --foo=4
  ^echo -foo=$"(2 + 2)" # prints -foo=4
  ^echo foo="(2 + 2)" # prints (no interpolation!) foo=(2 + 2)
  ^echo foo,(2 + 2),bar # prints foo,4,bar
  ```

- Bare word interpolation expands for external command head/args:
  ```nushell
  let name = "exa"
  ~/.cargo/bin/($name) # this works, and expands the tilde
  ^$"~/.cargo/bin/($name)" # this doesn't expand the tilde
  ^echo ~/($name)/* # this glob is expanded
  ^echo $"~/($name)/*" # this isn't expanded
  ```

- Ndots are now supported for the head of an external command
(`^.../foo` works)

- Glob values are now supported for head/args of an external command,
and expanded appropriately:
  ```nushell
  ^("~/.cargo/bin/exa" | into glob) # the tilde is expanded
  ^echo ("*.txt" | into glob) # this glob is expanded
  ```

- `run-external` now works more like any other command, without
expecting a special call convention
  for its args:
  ```nushell
  run-external echo "'foo'"
  # before PR: 'foo'
  # after PR:  foo
  run-external echo "*.txt"
  # before PR: (glob is expanded)
  # after PR:  *.txt
  ```

# Tests + Formatting
Lots of tests added and cleaned up. Some tests that weren't active on
Windows changed to use `nu --testbin cococo` so that they can work.
Added a test for Linux only to make sure tilde expansion of commands
works, because changing `HOME` there causes `~` to reliably change.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
- [ ] release notes: make sure to mention the new syntaxes that are
supported
2024-06-19 21:00:03 -07:00
44aa0a2de4 Table help rendering (#13182)
# Description

Mostly fixes #13149 with much of the credit to @fdncred.

This PR runs `table --expand` against `help` example results. This is
essentially the same fix that #13146 was for `std help`.

It also changes the shape of the result for the `table --expand`
example, as it was hardcoded wrong.

~Still needed is a fix for the `table --collapse` example.~ Note that
this is also still a bug in `std help` that I hadn't noticed before.

# User-Facing Changes

Certain tables are now rendered correctly in the help examples for:

* `table`
* `zip`
* `flatten`
* And almost certainly others

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-06-19 20:12:25 -05:00
12991cd36f Change the error style during tests to plain (#13061)
# Description

This fixes issues with trying to run the tests with a terminal that is
small enough to cause errors to wrap around, or in cases where the test
environment might produce strings that are reasonably expected to wrap
around anyway. "Fancy" errors are too fancy for tests to work
predictably 😉

cc @abusch

# User-Facing Changes

- Added `--error-style` option for use with `--commands` (like
`--table-mode`)

# Tests + Formatting

Surprisingly, all of the tests pass, including in small windows! I only
had to make one change to a test for `error make` which was looking for
the box drawing characters miette uses to determine whether the span
label was showing up - but the plain error style output is even better
and easier to match on, so this test is actually more specific now.
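
Roughly, usage looks like:

```nushell
# render errors without the fancy miette boxes, handy for tests
nu --error-style plain --commands 'error make {msg: "oops"}'
```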
2024-06-18 21:37:24 -07:00
103f59be52 Bump git2 from 0.18.3 to 0.19.0 (#13179) 2024-06-19 01:09:25 +00:00
532c0023c2 Bump crate-ci/typos from 1.22.4 to 1.22.7 (#13176) 2024-06-19 00:59:03 +00:00
a894e9c246 update try command's help (#13173)
# Description

This PR updates the `try` command to show that `catch` is a closure and
can be used as such.

### Before

![image](https://github.com/nushell/nushell/assets/343840/dc330b10-cd68-4d70-9ff8-aa1e7cbda5f3)

### After

![image](https://github.com/nushell/nushell/assets/343840/146a7514-6026-4b53-bdf0-603c77c8a259)


2024-06-18 13:48:06 -05:00
c171a2b8b7 Update sys users signature (#13172)
Fixes #13171 

# Description
Corrects the `sys users` signature to match the returned type.

Before this change `sys users | where name == root` would result in a
type error.
 

2024-06-18 10:34:18 -05:00
28ed0fe700 Improves commands that support range input (#13113)
# Description
Fixes: #13105
Fixes: #13077

This PR makes `str substring` and `bytes at` work better with negative
indices.

It also fixes incorrect range semantics in `detect columns -c` in some
cases.

# User-Facing Changes
For `str substring` and `bytes at`, it will no longer be an error if
the start index is larger than the end index; it makes sense to return
an empty string or empty bytes directly.

### Before
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
Error: nu:🐚:type_mismatch

  × Type mismatch.
   ╭─[entry #23:1:10]
 1 │ ("aaa" | str substring 2..-3) == ""
   ·          ──────┬──────
   ·                ╰── End must be greater than or equal to Start
 2 │ true
   ╰────

# bytes at
❯ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
Error: nu:🐚:type_mismatch

  × Type mismatch.
   ╭─[entry #27:1:25]
 1 │ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
   ·                         ────┬───
   ·                             ╰── End must be greater than or equal to Start
   ╰────
```
### After
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
true

# bytes at
❯  ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
true
```
# Tests + Formatting
Added some tests and adjusted existing tests.
2024-06-18 07:19:13 -05:00
ae6489f04b Update README.md (#13157)
Add x-cmd to the support list.

# Description

We like Nushell, so we added x-cmd support for Nushell.
Here is our demo:

https://www.x-cmd.com/mod/nu

Thank you.
2024-06-18 07:16:42 -05:00
97876fb0b4 Fix compilation for nu_protocol::value::from_value on 32-bit targets (#13169)
# Description

Needed to cast `usize::MAX` to `i64` for it to compile properly

cc @cptpiepmatz
2024-06-18 07:16:08 -05:00
b79a2255d2 Add derive macros for FromValue and IntoValue to ease the use of Values in Rust code (#13031)
# Description
After discussing with @sholderbach the cumbersome usage of
`nu_protocol::Value` in Rust, I created a derive macro to simplify it.
I’ve added a new crate called `nu-derive-value`, which includes two
macros, `IntoValue` and `FromValue`. These are re-exported in
`nu-protocol` and should be encouraged to be used via that re-export.

The macros ensure that all types can easily convert from and into
`Value`. For example, as a plugin author, you can define your plugin
configuration using a Rust struct and easily convert it using
`FromValue`. This makes plugin configuration less of a hassle.

I introduced the `IntoValue` trait for a standardized approach to
converting values into `Value` (and a fallible variant `TryIntoValue`).
This trait could potentially replace existing `into_value` methods.
Along with this, I've implemented `FromValue` for several standard types
and refined other implementations to use blanket implementations where
applicable.

I made these design choices with input from @devyn.

There are more improvements possible, but this is a solid start and the
PR is already quite substantial.

# User-Facing Changes

For `nu-protocol` users, these changes simplify the handling of
`Value`s. There are no changes for end-users of nushell itself.

# Tests + Formatting
Documenting the macros itself is not really possible, as they cannot
really reference any other types since they are the root of the
dependency graph. The standard library has the same problem
([std::Debug](https://doc.rust-lang.org/stable/std/fmt/derive.Debug.html)).
However I documented the `FromValue` and `IntoValue` traits completely.

For testing, I made use of `proc-macro2` in the derive macro code. This
would allow testing the generated source code. Instead I just tested
that the derived functionality is correct. This is done in
`nu_protocol::value::test_derive`, as a consumer of `nu-derive-value`
needs to do the testing of the macro usage. I think that these tests
should provide a stable baseline so that users can be sure that the impl
works.

# After Submitting
With these macros available, we can probably use them in some examples
for plugins to showcase the use of them.
2024-06-17 16:05:11 -07:00
3a6d8aac0b Return an empty list when no std help --find results are found (#13160)
# Description

Fixes #13143 by returning an empty list when there are no results found
by `std help --find/-f`

# User-Facing Changes

In addition, a message is printed to stderr.
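
A quick sketch of the new behavior (assuming `std` is imported as a module; the search term is illustrative):

```nushell
use std
std help --find "no-such-term"  # now evaluates to an empty list; a note goes to stderr
```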

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

2024-06-15 12:27:55 -05:00
b1cf0e258d Expand tables in help examples in std (#13146)
# Description

Some command help has example results with nested `table` data which is
displayed as the "non-expanded" form. E.g.:

```nu
╭───┬────────────────╮
│ 0 │ [list 2 items] │
│ 1 │ [list 2 items] │
╰───┴────────────────╯
```

For a good example, see `help zip`.

While we could simply remove the offending Example `result`'s from the
command itself, `std help` is capable of expanding the table properly.
It already formats the output of each example result using `table`, so
simply making it a `table -e` fixes the output.

While I wish we had a way of expanding the tables in the builtin `help`,
that seems to be the same type of problem as in formatting the `cal`
output (see #11954).

I personally think it's better to add this feature to `std help` than to
remove the offending example results, but as long as `std help` is
optional, only a small percentage of users are going to see the
"expected" results.

# User-Facing Changes

Excerpt from `std help zip` before change:

```nu
Zip two lists
> [1 2] | zip [3 4]
╭───┬────────────────╮
│ 0 │ [list 2 items] │
│ 1 │ [list 2 items] │
╰───┴────────────────╯
```

After:

```nu
Zip two lists
> [1 2] | zip [3 4]
╭───┬───────────╮
│ 0 │ ╭───┬───╮ │
│   │ │ 0 │ 1 │ │
│   │ │ 1 │ 3 │ │
│   │ ╰───┴───╯ │
│ 1 │ ╭───┬───╮ │
│   │ │ 0 │ 2 │ │
│   │ │ 1 │ 4 │ │
│   │ ╰───┴───╯ │
╰───┴───────────╯
```

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-06-13 19:56:11 -05:00
af6e1cb5a6 Added search terms to if (#13145)
# Description

In this PR, I continue my tradition of trivial but hopefully helpful
`help` tweaks. As mentioned in #13143, I noticed that `help -f else`
oddly didn't return the `if` statement itself. Perhaps not so oddly,
since who the heck is going to go looking for *"else"* in the help?
Well, I did ...

Added *"else"* and *"conditional"* to the search terms for `if`.

I'll work on the meat of #13143 next; that's more substantive.

# User-Facing Changes

Help only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-06-13 19:55:17 -05:00
323d5457f9 Improve performance of explore - 1 (#13116)
This could be improved further, I guess, but not here.

You can test the speed differences using data from #13088

```nu
open data.db | get profiles | explore
```

address: #13062

________

1. Noticed that search does not work anymore (even on the `main` branch).
2. Not sure about the resolved merge conflicts; it seems fine, but maybe
something was lost.

---------

Co-authored-by: Reilly Wood <reilly.wood@icloud.com>
2024-06-12 18:35:04 -07:00
634361b2d1 Make which-support feature non-optional (#13125)
# Description
Removes the `which-support` cargo feature and makes all of its
feature-gated code enabled by default in all builds. I'm not sure why
this one command is gated behind a feature. It seems to be a relic of
older code where we had features for what seems like every command.
2024-06-12 20:04:12 -05:00
bdbb096526 Fixed help for banner (#13138)
# Description

`help banner` had several issues:

* It used a Markdown link to an Asciinema recording, but Markdown links
aren't rendered as Markdown links by the help system (and can't be,
since (most?) terminals don't support that)
* Minor grammatical issues
* The Asciinema recording is out of date anyway. It still uses `use
stdn.nu banner` which isn't valid syntax any longer.

Since everyone at this point knows exactly what `banner` does 😉, I chose
to simply remove the link to the recording. Also tweaked the text
(initial caps and removed comma).

# User-Facing Changes

Help only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-06-12 14:26:58 -05:00
63c863c81b Bump actions-rust-lang/setup-rust-toolchain from 1.8.0 to 1.9.0 (#13132)
Bumps
[actions-rust-lang/setup-rust-toolchain](https://github.com/actions-rust-lang/setup-rust-toolchain)
from 1.8.0 to 1.9.0.
Release notes, sourced from actions-rust-lang/setup-rust-toolchain's releases:

**v1.9.0**
- Add extra argument `cache-on-failure` and forward it to `Swatinem/rust-cache` (#39 by @samuelhnrq). Sets the default value to true, which will result in more caching than previously. This helps when large dependencies are compiled only for tests that fail.
Commits:
- 1fbea72 Merge pull request #40 from actions-rust-lang/prepare-release
- 46dca5d Add changelog entry
- 1734e14 Switch default of `cache-on-failure` to true
- 74e1b40 Merge pull request #39 from samuelhnrq/main
- d60b90d feat: adds cache-on-failure propagation
- See full diff in the [compare view](https://github.com/actions-rust-lang/setup-rust-toolchain/compare/v1.8.0...v1.9.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-12 09:37:22 +08:00
17daf783b2 Bump crate-ci/typos from 1.22.1 to 1.22.4
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.22.1 to 1.22.4.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.22.1...v1.22.4)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 00:50:01 +00:00
1e430d155e make packaging status 3 columns 2024-06-11 14:36:13 -05:00
0372e8c53c add $nu.data-dir for completions and $nu.cache-dir for other uses (#13122)
# Description

This PR is an attempt to add a standard location for people to put
completions in. I saw this topic come up again recently and IIRC we
decided to create a standard location. I used the dirs-next crate to
dictate where these locations are. I know some people won't like that
but at least this gets the ball rolling in a direction that has a
standard directory.

This is what the default NU_LIB_DIRS looks like now in the
default_env.nu. It should also be like this when starting nushell with
`nu -n`
```nushell
$env.NU_LIB_DIRS = [
    ($nu.default-config-dir | path join 'scripts') # add <nushell-config-dir>/scripts
    ($nu.data-dir | path join 'completions') # default home for nushell completions
]
```

I also added these default folders to the `$nu` variable, so now there
are `$nu.data-dir` and `$nu.cache-dir`.

## Data Dir Default

![image](https://github.com/nushell/nushell/assets/343840/aeeb7cd6-17b4-43e8-bb6f-986a0c7fce23)

While I was in there, I also decided to add a cache dir

## Cache Dir Default

![image](https://github.com/nushell/nushell/assets/343840/87dead66-4911-4f67-bfb2-acb16f386674)

### This is what the default looks like in Ubuntu.

![image](https://github.com/nushell/nushell/assets/343840/bca8eae8-8c18-47e8-b64f-3efe34f0004f)

### This is what it looks like with XDG_CACHE_HOME and XDG_DATA_HOME
overridden
```nushell
XDG_DATA_HOME=/tmp/data_home XDG_CACHE_HOME=/tmp/cache_home cargo r
```

![image](https://github.com/nushell/nushell/assets/343840/fae86d50-9821-41f1-868e-3814eca3730b)

### This is what the defaults look like in Windows (username scrubbed to
protect the innocent)

![image](https://github.com/nushell/nushell/assets/343840/3ebdb5cd-0150-448c-aff5-c57053e4788a)

How my NU_LIB_DIRS is set in the images above
```nushell
$env.NU_LIB_DIRS = [
    ($nu.default-config-dir | path join 'scripts') # add <nushell-config-dir>/scripts
    '/Users/fdncred/src/nu_scripts'
    ($nu.config-path | path dirname)
    ($nu.data-dir | path join 'completions') # default home for nushell completions
]
```

Let the debate begin.

2024-06-11 15:10:31 -04:00
b0d1b4b182 Remove deprecated --not flag on str contains (#13124)
# Description
Removes the `str contains --not` flag that was deprecated in the last
minor release.

# User-Facing Changes
Breaking change since a flag was removed.
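
A possible migration path, for reference:

```nushell
# before (now removed):
#   "foobar" | str contains --not "baz"
# after:
not ("foobar" | str contains "baz")  # => true
```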
2024-06-11 15:00:00 -04:00
a8376fad40 update uutils crates (#13130)
# Description

This PR updates the uutils/coreutils crates to the latest released
version.

2024-06-11 13:44:13 -05:00
c09488f515 Fix multiple issues with def --wrapped help example (#13123)
# Description

I've noticed this several times but kept forgetting to fix it:

The example given for `help def` for the `--wrapped` flag is:

```nu
Define a custom wrapper for an external command
> def --wrapped my-echo [...rest] { echo $rest }; my-echo spam
  ╭───┬──────╮
  │ 0 │ spam │
  ╰───┴──────╯
```

That's ... odd, since (a) it specifically says *"for an external"*
command, and yet uses (and shows the output from) the builtin `echo`.
Also, (b) I believe `--wrapped` is *only* applicable to external
commands. Finally, (c) the `my-echo spam` doesn't even demonstrate a
wrapped argument.

Unless I'm truly missing something, the example just makes no sense.

This updates the example to really demonstrate `def --wrapped` with the
*external* version of `^echo`. It uses the `-e` flag to interpret the
escaped tab character in the string.

```nu
> def --wrapped my-echo [...rest] { ^echo ...$rest }; my-echo -e 'spam\tspam'
spam  spam
```

# User-Facing Changes

Help example only.

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

2024-06-10 19:12:54 -05:00
944d941dec path type error and not found changes (#13007)
# Description
Instead of an empty string, this PR changes `path type` to return null
if the path does not exist. If some other IO error is encountered, then
that error is bubbled up instead of treating it as a "not found" case.

# User-Facing Changes
- `path type` will now return null instead of an empty string, which is
technically a breaking change. In most cases though, I think this
shouldn't affect the behavior of scripts too much.
- `path type` can now error instead of returning an empty string if some
other IO error besides a "not found" error occurs.
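
A quick sketch of the new behavior (the path is illustrative):

```nushell
'/no/such/path' | path type              # => null (previously "")
('/no/such/path' | path type) == null    # => true
```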

Since this PR introduces breaking changes, it should be merged after the
0.94.1 patch.
2024-06-11 05:40:09 +08:00
a55a48529d Fix delta not being merged when evaluating menus (#13120)

# Description

After parsing menu code, the changes weren't merged into the engine
state, which didn't produce any errors (somehow?) until the recent span
ID refactors. With this PR, menus get a new cloned engine state with the
parsed changes correctly merged in.

Hopefully fixes https://github.com/nushell/nushell/issues/13118

2024-06-10 22:33:22 +03:00
af22bb8d52 Remove old sys command behavior (#13114)
# Description

Removes the old, deprecated behavior of the `sys` command. That is, it
will no longer return the full system information record.

# User-Facing Changes

Breaking change: `sys` no longer outputs anything and will instead
display help text.
2024-06-10 06:31:47 -05:00
5b7e8bf1d8 Deprecate --numbered from for (#13112)
# Description

#7777 removed the `--numbered` flag from `each`, `par-each`, `reduce`,
and `each while`. It was suggested at the time that it should be removed
from `for` as well, but for several reasons it wasn't.

This PR deprecates `--numbered` in anticipation of removing it in 0.96.
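A migration sketch (assuming `enumerate`, the same replacement used when `each --numbered` was removed in #7777):

```nushell
# instead of --numbered, pipe through enumerate to get index/item records
for x in ([a b c] | enumerate) {
    print $"($x.index): ($x.item)"
}
```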

Note: Please review carefully, as this is my first "real" Rust/Nushell
code. I was hoping that some prior commit would be useful as a template,
but since this was an argument on a parser keyword, I didn't find much
that applied. So I had to actually find the relevant helpers in the code
and the `nu_protocol` docs and learn how to use them - oh darn ;-) But
please make sure I did it correctly.

# User-Facing Changes

* Use of `--numbered` will result in a deprecation warning.
* Changed help example to demonstrate the new syntax.
* Help shows deprecation notice on the flag
2024-06-10 03:01:22 +00:00
021b8633cb Allow the addition of an index column to be optional (#13097)
Per discussion in the Discord dataframes channel with @maxim-uvarov and pyz.

When converting a dataframe to a Nushell value via `polars into-nu`,
the index column should not be added by default and should only be added
when specifying `--index`.
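A quick sketch of the new default versus opt-in behavior:

```nushell
# no index column by default
[[a]; [1] [2]] | polars into-df | polars into-nu
# opt in explicitly
[[a]; [1] [2]] | polars into-df | polars into-nu --index
```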
2024-06-10 10:45:25 +08:00
650ae537c3 Fix the use of right hand expressions in operations (#13096)
As reported by @maxim-uvarov and pyz in the dataframes discord channel:

```nushell
[[a b]; [1 1] [1 2] [2 1] [2 2] [3 1] [3 2]] | polars into-df | polars with-column ((polars col a) / (polars col b)) --name c


  × Type mismatch.
   ╭─[entry #45:1:102]
 1 │ [[a b]; [1 1] [1 2] [2 1] [2 2] [3 1] [3 2]] | polars into-df | polars with-column ((polars col a) / (polars col b)) --name c
   ·                                                                                                      ───────┬──────
   ·                                                                                                             ╰── Right hand side not a dataframe expression
   ╰────
```

This pull request corrects the type casting on the right hand side and
allows more than just polars literal expressions.
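With the fix, the same pipeline runs as written; for example:

```nushell
# the right-hand side of the division is now accepted as a dataframe expression
[[a b]; [1 1] [1 2]] | polars into-df | polars with-column ((polars col a) / (polars col b)) --name c
```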
2024-06-10 10:44:04 +08:00
dc76183cd5 fix wrong casting with into filesize (#13110)
# Description
Fixes incorrect casting related to
https://github.com/nushell/nushell/pull/12974#discussion_r1618598336

# User-Facing Changes
AS-IS (before fixing)
```
$ "-10000PiB" | into filesize
6.2 EiB                                                         <--- wrongly cast value
$ "10000PiB" | into filesize
-6.2 EiB                                                        <--- wrongly cast value
```

TO-BE (after fixing)
```
$ "-10000PiB" | into filesize
Error: nu:🐚:cant_convert

  × Can't convert to filesize.
   ╭─[entry #6:1:1]
 1 │ "-10000PiB" | into filesize
   · ─────┬─────
   ·      ╰── can't convert string to filesize
   ╰────

$ "10000PiB" | into filesize
Error: nu:🐚:cant_convert

  × Can't convert to filesize.
   ╭─[entry #7:1:1]
 1 │ "10000PiB" | into filesize
   · ─────┬────
   ·      ╰── can't convert string to filesize
   ╰────
```
2024-06-10 10:43:17 +08:00
5ac3ad61c4 Extend functionality of tango benchmark helpers (#13107)

# Description

Refactors the tango helpers in the toolkit and makes them more flexible
(e.g., being able to benchmark any branch against any branch, not just
current and main).

2024-06-09 14:03:52 +03:00
e52d7bc585 Span ID Refactor (Step 2): Use SpanId of expressions in some places (#13102)

# Description

Part of https://github.com/nushell/nushell/issues/12963, step 2.

This PR changes the use of `expression.span` to
`expression.span_id` via a new helper `Expression::span()`. A new
`GetSpan` abstraction is added to get the span from both `EngineState`
and `StateWorkingSet`.

# User-Facing Changes

`format pattern` loses the ability to use variables in the pattern,
e.g., `... | format pattern 'value of {$it.name} is {$it.value}'`. This
is because the command did a custom parse-eval cycle, creating spans
that are not merged into the main engine state. We could clone the
engine state, add Clone trait to StateDelta and merge the cloned delta
to the cloned state, but IMO there is not much value from having this
ability, since we have string interpolation nowadays: `... | $"value of
($in.name) is ($in.value)"`.


2024-06-09 12:15:53 +03:00
48a34ffe6d Fix test failure when running tests with nextest (#13101)

# Description

Test was failing with “did you mean” due to the `NEXTEST` env var being
present when running tests via `cargo nextest run`.

2024-06-08 15:11:49 +03:00
d2121a155e Fixes #13093 - Erroneous example in 'touch' help (#13095)
# Description

Fixes #13093 by:

* Removing the mentioned help example
* Updating the `--accessed` and `--modified` flag descriptions to remove
mention of "timestamp/date"

# User-Facing Changes

Help changes

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-06-07 09:33:48 -05:00
cfe13397ed Fix the colors when completing using a relative path (#12898)
# Description
Fixes a bug where the autocompletion menu has the wrong colors due to
'std::fs::symlink_metadata' not being able to handle relative paths.
Attached below are screenshots before and after applying the commit, in
which the colors are wrong when autocompleting on a path prefixed by a
tilde, whereas the same directory is highlighted correctly when prefixed
by a dot.

BEFORE:

![1715982514020](https://github.com/nushell/nushell/assets/89810988/62051f03-0846-430d-b493-e6f3cd6d0e04)

![1715982873231](https://github.com/nushell/nushell/assets/89810988/28c647ab-3b2a-47ef-9967-5d09927e299d)
AFTER:

![1715982490585](https://github.com/nushell/nushell/assets/89810988/7a370138-50af-42fd-9724-a34cc605bede)

![1715982894748](https://github.com/nushell/nushell/assets/89810988/e884f69f-f757-426e-98c4-bc9f7f6fc561)
2024-06-07 08:07:23 -05:00
d6a9fb0e40 Fix display formatting for command type in help commands (#12996)
# Description
Related to #12832, this PR changes the way `help commands` displays the
command type to be consistent with `scope commands` and `which`.

# User-Facing Changes
Technically a breaking change since the `help commands` output can now
be different.
2024-06-07 08:03:31 -05:00
a246a19387 fix: coredump without any messages (#13034)

# Description

- this PR should close https://github.com/nushell/nushell/issues/12874
- fixes https://github.com/nushell/nushell/issues/12874

This fixes an issue induced by the fix for
https://github.com/nushell/nushell/issues/12369, which introduced a new
error on Unix systems in order to show coredump messages.

# User-Facing Changes

After the earlier fix, the coredump message was missing, so this PR fixes it.




![image](https://github.com/nushell/nushell/assets/60290287/6d8ab756-3031-4212-a5f5-5f71be3857f9)

---------

Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
2024-06-07 08:02:52 -05:00
2d0a60ac67 Use native toml datetime type in to toml (#13018)
# Description

Makes `to toml` use the `toml::value::Datetime` type, so that `to toml`
serializes dates properly.


# User-Facing Changes

`to toml` will now encode dates differently, in a native format instead
of a string. This could, in theory, break some workflows:

```Nushell
# Before:
~> {datetime: 2024-05-31} | to toml | from toml | get datetime | into datetime
Fri, 31 May 2024 00:00:00 +0000 (10 hours ago)
# After:
~> {datetime: 2024-05-31} | to toml | from toml | get datetime | into datetime
Error: nu:🐚:only_supports_this_input_type

  × Input type not supported.
   ╭─[entry #13:1:36]
 1 │ {datetime: 2024-05-31} | to toml | from toml | get datetime | into datetime
   ·                                    ────┬────                  ──────┬──────
   ·                                        │                            ╰── only string and int input data is supported
   ·                                        ╰── input type: date
   ╰────
```

Fix #11751
2024-06-07 07:43:30 -05:00
83cf212e20 run_external.rs: use pathdiff::diff_path to handle relative path (#13056)
# Description
This PR uses `pathdiff::diff_path`, so we don't need to
handle relative paths ourselves.

This was also the behavior before the rewrite of run_external.rs.

It's a follow-up to https://github.com/nushell/nushell/pull/13028.

# User-Facing Changes
None.

# Tests + Formatting
No need to add tests
2024-06-07 10:14:42 +08:00
e3a20e90b0 Bump os_pipe from 1.1.5 to 1.2.0 (#13087)
Bumps [os_pipe](https://github.com/oconnor663/os_pipe.rs) from 1.1.5 to
1.2.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6268f86399"><code>6268f86</code></a>
bump version to 1.2.0</li>
<li><a
href="b392c7e3c9"><code>b392c7e</code></a>
remove unsafe code from dup()</li>
<li><a
href="c7b2277bdb"><code>c7b2277</code></a>
move <code>mod sys</code> to the top of lib.rs</li>
<li><a
href="282b1884f6"><code>282b188</code></a>
update ci.yml</li>
<li><a
href="432da0803a"><code>432da08</code></a>
add rust-version to Cargo.toml</li>
<li><a
href="12868e41e6"><code>12868e4</code></a>
edition 2018 -&gt; 2021</li>
<li><a
href="d9e8d61593"><code>d9e8d61</code></a>
activate IO safety integration by default</li>
<li><a
href="d02b96eddb"><code>d02b96e</code></a>
added visionos</li>
<li>See full diff in <a
href="https://github.com/oconnor663/os_pipe.rs/compare/1.1.5...1.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=os_pipe&package-manager=cargo&previous-version=1.1.5&new-version=1.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-07 08:08:02 +08:00
83628f04ff Bump crate-ci/typos from 1.22.0 to 1.22.1
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.22.0 to 1.22.1.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.22.0...v1.22.1)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-06 23:48:44 +00:00
460df0e99a Group polars crate updates in dependabot (#13084)
We depend on `polars` and some of its subcrates explicitly. This leads
to the unfortunate situation that dependabot opens PRs that do not
compile, independent of whether there are breaking changes we
need to account for. By introducing a [dependabot
group](https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/customizing-dependency-updates#grouping-dependabot-updates-into-one-pull-request)
for `polars` we can at least limit that failure mode.
2024-06-06 18:33:21 -05:00
eef4a89ff4 Move format date to Category::Strings (#13083)
The rest of the `format` commands live there.

Closes https://github.com/nushell/nushell.github.io/issues/1435
2024-06-06 18:32:08 -05:00
073d8850e9 Allow stor insert and stor update to accept pipeline input (#12882)
- this PR should close #11433 

# Description
This PR implements pipeline input support for the stor insert and stor
update commands,
enabling users to directly pass data to these commands without relying
solely on flag parameters.

Previously, it was only possible to specify the record data using flag
parameters,
which could be less intuitive and become cumbersome:

```bash
stor insert --table-name nudb --data-record {bool1: true, int1: 5, float1: 1.1, str1: fdncred, datetime1: 2023-04-17}
stor update --table-name nudb --update-record {str1: nushell datetime1: 2020-04-17}
```

Now it is also possible to pass a record through pipeline input:

```bash
{bool1: true, int1: 5, float1: 1.1, str1: fdncred, datetime1: 2023-04-17} | stor insert --table-name nudb
{str1: nushell datetime1: 2020-04-17} | stor update --table-name nudb
```

Changes made on code:

- Modified stor insert and stor update to accept a record from the
pipeline.
- Added logic to handle data from the pipeline record.
- Implemented an error case to prevent simultaneous data input from both
pipeline and flag.

# User-Facing Changes

Returns an error when both ways of inserting data are used.

The examples for both commands were updated. In each command, when
the -d or -u flags are used at the same time as input is being
passed through the pipeline, an error is returned:


![image](https://github.com/nushell/nushell/assets/120738170/c5b15c1b-716a-4df4-95e8-3bca8f7ae224)

Also returns an error when both of them are missing:


![image](https://github.com/nushell/nushell/assets/120738170/47f538ab-79f1-4fcc-9c62-d7a7d60f86a1)


# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

Co-authored-by: Rodrigo Friães <rodrigo.friaes@tecnico.ulisboa.pt>
2024-06-06 10:30:06 -05:00
f2f4b83886 Overhaul explore config (#13075)
Configuration in `explore` has always been confusing to me. This PR
overhauls (and simplifies, I think) how configuration is done.

# Details

1. Configuration is now strongly typed. In `Explore::run()` we create an
`ExploreConfig` struct from the more general Nu configuration and
arguments to `explore`, then pass that struct to other parts of
`explore` that need configuration. IMO this is a lot easier to reason
about and trace than the previous approach of creating a
`HashMap<String, Value>` and then using that to make various structs
elsewhere.
2. We now inherit more configuration from the config used for regular Nu
tables
1. Border/line styling now uses the `separator` style used for regular
Nu tables, the special `explore.split_line` config point has been
retired.
2. Cell padding in tables is now controlled by `table.padding` instead
of the undocumented `column_padding_left`/`column_padding_right` config
3. The (optional, previously not enabled by default) `selected_row` and
`selected_column` configuration has been removed. We now only highlight
the selected cell. I could re-add this if people really like the feature
but I'm guessing nobody uses it.
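Relating to points 2.1 and 2.2 above, a minimal config sketch (assuming the standard `table.padding` and `color_config.separator` settings):

```nushell
# explore now inherits these from the regular table configuration
$env.config.table.padding = {left: 1, right: 1}
$env.config.color_config.separator = "white"
```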

The interface still looks the same with a default/empty config.nu:


![image](https://github.com/nushell/nushell/assets/26268125/e40161ba-a8ec-407a-932d-5ece6f4dc616)
2024-06-06 08:46:43 -05:00
5d163c1bcc run_external: remove inner quotes once nushell gets = sign (#13073)
# Description
Fixes: #13066

Nushell should remove an argument value's inner quotes once it encounters `=`,
whether or not the argument is a flag, and it also replaces `\"` with `"` before
passing the value to external commands.

# User-Facing Changes
Given the shell script:
```shell
# test.sh
echo $@
```
## Before
```
>  sh test.sh -ldflags="-s -w" github.com
-ldflags="-s -w" github.com
> sh test.sh exp='-s -w' github.com
exp='-s -w' github.com
```
## After
```
>  sh test.sh -ldflags="-s -w" github.com
-ldflags=-s -w github.com
> sh test.sh exp='-s -w' github.com
exp=-s -w github.com
```

# Tests + Formatting
Added some tests
2024-06-06 11:03:34 +08:00
75d5807dcd Fix explore panic on empty lists (#13074)
This fixes up a panic I accidentally introduced when refactoring the
cursor code in `explore`: https://github.com/nushell/nushell/pull/12979

Under certain circumstances (running `:nu []`, opening `:try` with the
hidden `try.reactive` setting enabled), `explore` would panic when
handling an empty list. To fix this for now I've removed the validation
I added to the Cursor constructor in that PR.
2024-06-05 19:49:32 -07:00
f378c72f6f Create CITATION.cff (#12983)
# Description
This PR creates a `CITATION.cff` file. 

# User-Facing Changes
Users will now be able to properly cite `nushell` when used e.g. in
scientific work. The `CITATION.cff` file also enables users to get the
citation with the GitHub integration, see
[documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-citation-files).
More information about the `.cff` standard
[here](https://citation-file-format.github.io/).

# After Submitting
Do you want it documented in the docs somewhere? If so, where exactly?
Happy to make a PR there.

One feature that could well be implemented later on is automatic updates
to the CITATION file with a Github Action (this is outside of my comfort
zone, so I'll leave that for a separate PR should anyone want to
implement it). Ideally, the `CITATION.cff` file points to the latest
release, and could do so with the following fields:
```
commit: 
version: 
date-released: 
```

---------

Co-authored-by: Mikkel Roald-Arbøl <github.ggb9a@simplelogin.co>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
Co-authored-by: NotTheDr01ds <32344964+NotTheDr01ds@users.noreply.github.com>
2024-06-06 08:53:34 +08:00
a6b1d1f6d9 Upgrade to polars 0.40 (#13069)
Upgrading to polars 0.40
2024-06-06 07:26:47 +08:00
96493b26d9 Make string related commands parse-time evaluatable (#13032)

Related meta-issue: #10239.

# Description

This PR will modify some `str`-related commands so that they can be
evaluated at parse time.

See the list below for the commands implemented by this PR.

# User-Facing Changes

Available now:
- `str` subcommands
  - `trim`
  - `contains`
  - `distance`
  - `ends-with`
  - `expand`
  - `index-of`
  - `join`
  - `replace`
  - `reverse`
  - `starts-with`
  - `stats`
  - `substring`
  - `capitalize`
  - `downcase`
  - `upcase`
- `split` subcommands
  - `chars`
  - `column`
  - `list`
  - `row`
  - `words`
- `format` subcommands
  - `date`
  - `duration`
  - `filesize`
- string related commands
  - `parse`
  - `detect columns`
  - `encode` & `decode`
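
As a quick sketch of what the list above enables (assuming a `const` declaration, which is evaluated at parse time):

```nushell
const GREETING = ("hello" | str upcase)
# the subexpression is evaluated during parsing => "HELLO"
```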

# Tests + Formatting

Unresolved questions:
- [ ] Is there an established routine for testing const expressions? I haven't
found one yet.
- [ ] Are const expressions required to behave just like their non-const
versions, as Rust promises?

# After Submitting

Unresolved questions:
- [ ] Do const commands need special marks in the docs?
2024-06-05 22:21:52 +03:00
36ad7f15c4 cd/def --env examples (#13068)
# Description

Per a Discord question
(https://discord.com/channels/601130461678272522/1244293194603167845/1247794228696711198),
this adds examples to the `help` for both:

* `cd`
* `def`

to demonstrate that `def --env` is required when changing directories in
a custom command.
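For instance, a sketch along those lines (hypothetical command name):

```nushell
# without --env, the directory change is discarded when the command returns
def --env go-home [] { cd ~ }
go-home
pwd   # now the home directory
```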

Since the existing examples for `def` were a bit more complex (and had
output) while the `cd` ones were simpler, I used slightly different
examples in each. Either or both could be tweaked if desired.

# User-Facing Changes

Command `help` examples

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

N/A

---------

Co-authored-by: Jakub Žádník <kubouch@gmail.com>
2024-06-05 21:35:31 +03:00
78fdb2f4d1 reduce log tracing in nu-cli (#13067)
This one trace message creates thousands of lines of trace output that are
better suited to enabling on an *as needed* basis rather than everyone
having to wade through them.

For now, I just commented it out, but eventually this line of code should
be removed entirely and added back temporarily when someone needs to see
the output.
2024-06-05 08:54:38 -07:00
a3bc85bb5f Add options for filtering the log output from nu (#13044)
# Description

Add `--log-include` and `--log-exclude` options to filter the log output
from `nu` to specific module prefixes. For example,

```nushell
nu --log-level trace --log-exclude '[nu_parser]'
```

This avoids having to scan through parser spam when trying to debug
something else at `TRACE` level, and should make it feel much more
reasonable to add logging, particularly at `TRACE` level, to various
places in the codebase. It can also be used to debug non-Nushell crates
that support the Rust logging infrastructure, as many do.

You can also include a specific module instead of excluding the parser
log output:

```nushell
nu --log-level trace --log-include '[nu_plugin]'
```

Pinging #13041 for reference, but hesitant to outright say that this
closes that. I think it address that concern though. I've also struggled
with debugging plugin stuff with all of the other output, so this will
really help me there.

# User-Facing Changes

- New `nu` option: `--log-include`
- New `nu` option: `--log-exclude`
2024-06-05 16:42:55 +08:00
a9c2349ada Refactor explore cursor code (#12979)
`explore` has 3 cursor-related structs that are extensively used to
track the currently shown "window" of the data being shown. I was
finding the cursor code quite difficult to follow, so this PR:
- rewrites the base `Cursor` struct from scratch, with some tests
- makes big changes to `WindowCursor`
- renames `XYCursor` to `WindowCursor2D`
- makes some of the cursor functions fallible as a start towards better
error handling
- changes lots of function names to things that I find more intuitive
- adds comments, including ASCII diagrams to explain how the cursors
work

More work could be done (I'd like to review/change more function names
in `WindowCursor` and `WindowCursor2D` and add more tests), but this is
the limit of what I can get done in a weekend. I think this part of the
code is in a better place now.

# Testing performed

I did a lot of manual testing in the record view and binary viewer,
moving around with arrow keys / page up+down / home+end.

This can definitely wait until after the release freeze, this area has
very few automated tests and it'd be good to let the changes bake a bit.
2024-06-04 19:50:11 -07:00
e4104d0792 Span ID Refactor - Step 1 (#12960)
# Description
First part of SpanID refactoring series. This PR adds a `SpanId` type
and a corresponding `span_id` field to `Expression`. Parser creating
expressions will now add them to an array in `StateWorkingSet`,
generates a corresponding ID and saves the ID to the Expression. The IDs
are not used anywhere yet.

For the rough overall plan, see
https://github.com/nushell/nushell/issues/12963.

# User-Facing Changes
Hopefully none. This is only a refactor of Nushell's internals that
shouldn't have visible side effects.

2024-06-05 09:57:14 +08:00
b10325dff1 Allow int values to be converted into floats. (#13025)
Addresses the bug found by @maxim-uvarov when trying to coerce an int
Value to a polars float:

<img width="863" alt="image"
src="https://github.com/nushell/nushell/assets/56345/4d858812-a7b3-4296-98f4-dce0c544b4c6">

Conversion now works correctly:

<img width="891" alt="Screenshot 2024-05-31 at 14 28 51"
src="https://github.com/nushell/nushell/assets/56345/78d9f711-7ad5-4503-abc6-7aba64a2e675">
2024-06-04 18:51:11 -07:00
d4fa014534 Make query xml return nodes in document order (#13047)
# Description

`query xml` used to return results from an XPath query in a random,
non-deterministic order. With this change, results get returned in the
order they appear in the document.
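
A small sketch (assuming the query plugin is installed):

```nushell
'<a><b>first</b><b>second</b></a>' | query xml '//b'
# results now arrive as: first, second (document order)
```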

# User-Facing Changes
`query xml` will now return results in a non-random order.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

2024-06-05 09:47:36 +08:00
c310175a1e Bump hustcer/setup-nu from 3.10 to 3.11 (#13058) 2024-06-05 09:17:27 +08:00
6b2012bdfa Bump crate-ci/typos from 1.21.0 to 1.22.0
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.21.0 to 1.22.0.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.21.0...v1.22.0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 01:01:54 +00:00
28e33587d9 msgpackz: increase default compression level (#13035)
# Description
Increase default compression level for brotli on msgpackz to 3. This has
the best compression time generally. Level 0 and 1 give weird results
and sometimes cause extremely inflated outputs rather than being
compressed. So far this hasn't really been a problem for the plugin
registry file, but has been for other data.

The `$example` is the web-app example from https://json.org/example.html

Benchmarked with:

```nushell
seq 0 11 | each { |level|
  let compressed = ($example | to msgpackz --quality $level)
  let time = (timeit { $example | to msgpackz --quality $level })
  {
    level: $level
    time: $time
    length: ($compressed | bytes length)
    ratio: (($uncompressed_length | into float) / ($compressed | bytes length))
  }
}
```

```
╭────┬───────┬─────────────────┬────────┬───────╮
│  # │ level │      time       │ length │ ratio │
├────┼───────┼─────────────────┼────────┼───────┤
│  0 │     0 │ 4ms 611µs 875ns │   3333 │  0.72 │
│  1 │     1 │ 1ms 334µs 500ns │   3333 │  0.72 │
│  2 │     2 │     190µs 333ns │   1185 │  2.02 │
│  3 │     3 │      184µs 42ns │   1128 │  2.12 │
│  4 │     4 │      245µs 83ns │   1098 │  2.18 │
│  5 │     5 │     265µs 584ns │   1040 │  2.30 │
│  6 │     6 │     270µs 792ns │   1040 │  2.30 │
│  7 │     7 │     444µs 708ns │   1040 │  2.30 │
│  8 │     8 │       1ms 801µs │   1040 │  2.30 │
│  9 │     9 │     843µs 875ns │   1037 │  2.31 │
│ 10 │    10 │ 4ms 128µs 375ns │    984 │  2.43 │
│ 11 │    11 │ 6ms 352µs 834ns │    986 │  2.43 │
╰────┴───────┴─────────────────┴────────┴───────╯
```

cc @maxim-uvarov
2024-06-04 17:19:10 -07:00
ff5cb6f1ff complete the type of --error-label in std assert commands (#12998)
I was looking at the website documentation of `std assert` and I noticed
one thing:
- the `--error-label` argument of `assert` and `assert not` was typed as just a
`record` -> now it has the complete type `record<text: string, span:
record<start: int, end: int>>`
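
A usage sketch with the fully-typed label (hypothetical values):

```nushell
use std assert
assert (1 == 2) --error-label {text: "values should match", span: {start: 0, end: 1}}
```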
2024-06-05 08:02:17 +08:00
65911c125c Try to preserve the ordering of elements in from toml (#13045)
# Description

Enable the `preserve_order` feature of the `toml` crate to preserve the
ordering of elements when converting from/to toml.

Additionally, use `to_string_pretty()` instead of `to_string()` in `to
toml`. This displays arrays on multiple lines instead of one big single
line. I'm not sure if this one is a good idea or not... Happy to remove
this from this PR if it's not.

# User-Facing Changes
The order of elements will be different when using `from toml`. The
formatting of arrays will also be different when using `to toml`. For
example:

- before
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ bar │ 2 │
│ doo │ 3 │
│ foo │ 1 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = ["a", "b", "c", "d"]
```
- after
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ foo │ 1 │
│ bar │ 2 │
│ doo │ 3 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = [
    "a",
    "b",
    "c",
    "d",
]
```

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🔴 `toolkit test`
-  `toolkit test stdlib`

2024-06-05 08:00:39 +08:00
3f0db11ae5 support plus sign for "into filesize" (#12974)
# Description
Fixes https://github.com/nushell/nushell/issues/12968. After applying this
patch, strings that include an explicit plus sign work with the `into
filesize` command.

# User-Facing Changes
AS-IS (before fixing)
```
$ "+8 KiB" | into filesize                                                                                                         
Error: nu:🐚:cant_convert

  × Can't convert to int.
   ╭─[entry #31:1:1]
 1 │ "+8 KiB" | into filesize
   · ────┬───
   ·     ╰── can't convert string to int
   ╰────
```

TO-BE (after fixing)
```
$ "+8KiB" | into filesize                                                                                       
8.0 KiB
```

# Tests + Formatting
Added a test 

2024-06-05 07:43:50 +08:00
7746e84199 Make LS_COLORS functionality faster in explore, especially on Windows (#12984)
Closes #12980. More context there, but basically `explore` was getting
file metadata for every row every time the record view was rendered. The
quick fix for now is to do the `LS_COLORS` colouring with a `&str`
instead of a path and file metadata.
2024-06-05 07:43:12 +08:00
a84fdb1d37 Fixed a couple of incorrect errors messages (#13043)
Fixed a couple of error messages that were incorrectly reported as
parquet errors instead of CSV errors.
2024-06-05 07:40:02 +08:00
ad5a6cdc00 bump version to 0.94.3 (#13055) 2024-06-05 06:52:40 +08:00
be8c1dc006 Fix run_external::expand_glob() to return paths that are PWD-relative but reflect the original intent (#13028)
# Description

Fix #13021

This changes the `expand_glob()` function to use
`nu_engine::glob_from()` so that absolute paths are actually preserved,
rather than being made relative to the provided parent. This preserves
the intent of whoever wrote the original path/glob, and also makes it so
that tilde always produces absolute paths.

I also made `expand_glob()` handle Ctrl-C so that it can be interrupted.

cc @YizhePKU

# Tests + Formatting
No additional tests here... but that might be a good idea.
2024-06-03 10:38:55 +03:00
b50903cf58 Fix external command name parsing with backslashes, and add tests (#13027)
# Description

Fixes #13016 and adds tests for many variations of external call
parsing.

I just realized @kubouch took a crack at this too (#13022) so really
whichever is better, but I think the
tests are a good addition.
2024-06-03 10:28:45 +03:00
6635b74d9d Bump version to 0.94.2 (#13014)
Version bump after 0.94.1 patch release.
2024-06-03 10:28:35 +03:00
f3cf693ec7 Disallow more characters in arguments for internal cmd commands (#13009)
# Description
Makes `run-external` error if arguments to `cmd.exe` internal commands
contain newlines or a percent sign. This is because the percent sign can
expand environment variables, potentially allowing command injection.
Newlines, I think, will truncate the rest of the arguments and should
probably be disallowed to be safe.

# After Submitting
- If the user calls `cmd.exe` directly, then this bypasses our
handling/checking for internal `cmd` commands. Instead, we use the
handling from the Rust std lib which, in this case, does not do special
handling and is potentially unsafe. Then again, it could be the user's
specific intention to run `cmd` with whatever trusted input. The problem
is that since we use the std lib handling, it assumes the exe uses the C
runtime escaping rules and will perform some unwanted escaping. E.g., it
will add backslashes to the quotes in `cmd echo /c '""'`.
- If `cmd` is called indirectly via a `.bat` or `.cmd` file, then we use
the Rust std lib which has separate handling for bat files that should
be safe, but will reject some inputs.
- ~~I'm not sure how we handle `PATHEXT`, that can also cause a file
without an extension to be run as a bat file. If so, I don't know where
the handling, if any, is done for that.~~ It looks like we use the
`which` crate to do the lookup using `PATHEXT`. Then, we pass the exe
path from that to the Rust std lib `Command`, which should be safe
(except for the first `cmd.exe` note).

So, in the future we need to unify and/or fix these different
implementations, including our own special handling for internal `cmd`
commands that this PR tries to fix.
2024-05-30 19:24:48 +00:00
31f3d2f664 Restore path type behavior (#13006)
# Description
Restores `path type` to return an empty string on error like it did pre
0.94.0.
2024-05-30 13:42:22 +00:00
40772fea15 fix do closure with both required, options, and rest args (#13002)
# Description
Fixes: #12985

`val_iter` already handles required and optional positional
arguments, so it should not skip them again while handling rest
arguments.

# User-Facing Changes
Makes `do {|a, ...b| echo $a ...$b} 1 2 3 4` output the following again:
```nushell
╭───┬───╮
│ 0 │ 1 │
│ 1 │ 2 │
│ 2 │ 3 │
│ 3 │ 4 │
╰───┴───╯
```

# Tests + Formatting
Added some test cases
2024-05-30 08:29:46 -05:00
0e1553026e Restore tilde expansion on external command names (#13001)
# Description

Fix a regression introduced by #12921, where tilde expansion was no
longer done on the external command name, breaking things like

```nushell
> ~/.cargo/bin/exa
```

This properly handles quoted strings, so they don't expand:

```nushell
> ^"~/.cargo/bin/exa"
Error: nu:🐚:external_command

  × External command failed
   ╭─[entry #1:1:2]
 1 │ ^"~/.cargo/bin/exa"
   ·  ─────────┬────────
   ·           ╰── Command `~/.cargo/bin/exa` not found
   ╰────
  help: `~/.cargo/bin/exa` is neither a Nushell built-in or a known external command

```

This required a change to the parser, so the command name is also parsed
in the same way the arguments are - i.e. the quotes on the outside
remain in the expression. Hopefully that doesn't break anything else. 🤞

Fixes #13000. Should include in patch release 0.94.1

cc @YizhePKU

# User-Facing Changes
- Tilde expansion now works again for external commands
- The `command` of `run-external` will now have its quotes removed like
the other arguments if it is a literal string
- The parser is changed to include quotes in the command expression of
`ExternalCall` if they were present

# Tests + Formatting
I would like to add a regression test for this, but it's complicated
because we need a well-known binary within the home directory, which
just isn't a thing. We could drop one there, but that's kind of a bad
behavior for a test to do. I also considered changing the home directory
for the test, but that's so platform-specific - potentially could get it
working on specific platforms though. Changing `HOME` env on Linux
definitely works as far as tilde expansion works.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-29 18:48:29 -07:00
6a2996251c fixes a bug in OSC9;9 execution (#12994)
# Description

This fixes a bug in the `OSC 9;9` functionality where the path wasn't
being constructed properly and therefore wasn't getting set right for
things like "Duplicate Tab" in Windows Terminal. Thanks to @Araxeus for
finding it.

Related to https://github.com/nushell/nushell/issues/10166

2024-05-29 18:06:47 -05:00
f3991f2080 Bump version to 0.94.1 (#12988)
Merge this PR before merging any other PRs.
2024-05-28 22:41:23 +00:00
61182deb96 Bump version to 0.94.0 (#12987) 2024-05-28 12:04:09 -07:00
4ab2c3238a Disable reedline patch for 0.94.0 (#12986)
Disable crates.io git patch for reedline for 0.94.0 release.
2024-05-28 18:53:51 +00:00
6012af2412 Fix panic when redirecting nothing (#12970)
# Description
Fixes #12969 where the parser can panic if a redirection is applied to
nothing / an empty command.

# Tests + Formatting
Added a test.
2024-05-27 10:03:06 +08:00
f74dd33ba9 Fix touch --reference using PWD from the environment (#12976)
This PR fixes `touch --reference path` so that it resolves `path` using
PWD from the engine state.
2024-05-26 20:24:00 +03:00
a1fc41db22 Fix path type using PWD from the environment (#12975)
This PR fixes the `path type` command so that it resolves relative paths
using PWD from the engine state.

As a bonus, it also fixes the issue of `path type` returning an empty
string instead of an error when it fails.
2024-05-26 20:23:52 +03:00
f38f88d42c Fixes . expanded incorrectly as external argument (#12950)
This PR fixes a bug where `.` is expanded into an empty string when used
as an argument to external commands. Fixes
https://github.com/nushell/nushell/issues/12948.
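A quick check (sketch):

```nushell
^echo .
# => .   (previously the argument expanded to an empty string)
```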

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-26 07:06:17 +08:00
0c5a67f4e5 make polars plugin use mimalloc (#12967)
# Description
@maxim-uvarov did a ton of research and work with the dply-rs author and
Ritchie from polars and found out that the allocator matters on macOS;
it seems to be what was messing up the performance of the polars plugin.
Ritchie suggested using jemalloc, but I switched it to mimalloc to match
nushell, and it seems to run better.

## Before (default allocator)
Note: using 1..10 vs 1..100 since it takes so long. Also notice how
high the `max` timings are compared to mimalloc below.
```nushell
❯ 1..10 | each {timeit {polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null}} |   | {mean: ($in | math avg), min: ($in | math min), max: ($in | math max), stddev: ($in | into int | into float | math stddev | into int | $'($in)ns' | into duration)}
╭────────┬─────────────────────────╮
│ mean   │ 4sec 999ms 605µs 995ns  │
│ min    │ 983ms 627µs 42ns        │
│ max    │ 13sec 398ms 135µs 791ns │
│ stddev │ 3sec 476ms 479µs 939ns  │
╰────────┴─────────────────────────╯
❯ use std bench
❯ bench { polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null } -n 10
╭───────┬────────────────────────╮
│ mean  │ 6sec 220ms 783µs 983ns │
│ min   │ 1sec 184ms 997µs 708ns │
│ max   │ 18sec 882ms 81µs 708ns │
│ std   │ 5sec 350ms 375µs 697ns │
│ times │ [list 10 items]        │
╰───────┴────────────────────────╯
```

## After (using mimalloc)
```nushell
❯ 1..100 | each {timeit {polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null}} |   | {mean: ($in | math avg), min: ($in | math min), max: ($in | math max), stddev: ($in | into int | into float | math stddev | into int | $'($in)ns' | into duration)}
╭────────┬───────────────────╮
│ mean   │ 103ms 728µs 902ns │
│ min    │ 97ms 107µs 42ns   │
│ max    │ 149ms 430µs 84ns  │
│ stddev │ 5ms 690µs 664ns   │
╰────────┴───────────────────╯
❯ use std bench
❯ bench { polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null } -n 100
╭───────┬───────────────────╮
│ mean  │ 103ms 620µs 195ns │
│ min   │ 97ms 541µs 166ns  │
│ max   │ 130ms 262µs 166ns │
│ std   │ 4ms 948µs 654ns   │
│ times │ [list 100 items]  │
╰───────┴───────────────────╯
```

## After (using jemalloc - just for comparison)
```nushell
❯ 1..100 | each {timeit {polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null}} |   | {mean: ($in | math avg), min: ($in | math min), max: ($in | math max), stddev: ($in | into int | into float | math stddev | into int | $'($in)ns' | into duration)}

╭────────┬───────────────────╮
│ mean   │ 113ms 939µs 777ns │
│ min    │ 108ms 337µs 333ns │
│ max    │ 166ms 467µs 458ns │
│ stddev │ 6ms 175µs 618ns   │
╰────────┴───────────────────╯
❯ use std bench
❯ bench { polars open Data7602DescendingYearOrder.csv | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null } -n 100
╭───────┬───────────────────╮
│ mean  │ 114ms 363µs 530ns │
│ min   │ 108ms 804µs 833ns │
│ max   │ 143ms 521µs 459ns │
│ std   │ 5ms 88µs 56ns     │
│ times │ [list 100 items]  │
╰───────┴───────────────────╯
```

## After (using parquet + mimalloc)
```nushell
❯ 1..100 | each {timeit {polars open data.parquet | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null}} |   | {mean: ($in | math avg), min: ($in | math min), max: ($in | math max), stddev: ($in | into int | into float | math stddev | into int | $'($in)ns' | into duration)}
╭────────┬──────────────────╮
│ mean   │ 34ms 255µs 492ns │
│ min    │ 31ms 787µs 250ns │
│ max    │ 76ms 408µs 416ns │
│ stddev │ 4ms 472µs 916ns  │
╰────────┴──────────────────╯
❯ use std bench
❯ bench { polars open data.parquet | polars group-by year | polars agg (polars col geo_count | polars sum) | polars collect | null } -n 100
╭───────┬──────────────────╮
│ mean  │ 34ms 897µs 562ns │
│ min   │ 31ms 518µs 542ns │
│ max   │ 65ms 943µs 625ns │
│ std   │ 3ms 450µs 741ns  │
│ times │ [list 100 items] │
╰───────┴──────────────────╯
```

2024-05-25 09:10:01 -05:00
95977faf2d Do not propagate glob creation error for external args (#12955)
# Description
Instead of returning an error, this PR changes `expand_glob` in
`run_external.rs` to return the original string arg if glob creation
failed. This makes it so that, e.g.,
```nushell
^echo `[`
^echo `***`
```
no longer fail with a shell error. (This follows from #12921.)
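
A minimal sketch of that fallback, using the stock `glob` crate as a
stand-in for the machinery in `run_external.rs` (the helper name is
hypothetical):

```rust
use glob::glob;

/// Sketch: expand `arg` as a glob, but if the pattern is invalid
/// (e.g. `[` or `***`), return the original string unchanged
/// instead of propagating the error.
fn expand_or_passthrough(arg: &str) -> Vec<String> {
    match glob(arg) {
        Ok(paths) => {
            let matches: Vec<String> = paths
                .filter_map(Result::ok)
                .map(|p| p.display().to_string())
                .collect();
            // No matches: pass the literal argument through.
            if matches.is_empty() {
                vec![arg.to_string()]
            } else {
                matches
            }
        }
        // Glob creation failed: keep the original argument.
        Err(_) => vec![arg.to_string()],
    }
}
```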
2024-05-25 08:59:36 +08:00
c5d716951f Allow byte streams with unknown type to be compatible with binary (#12959)
# Description
Currently, the pipeline `open --raw file | take 100` doesn't work,
since the type of the byte stream is `Unknown`, but `take` expects
`Binary` streams. This PR changes commands that expect
`ByteStreamType::Binary` to also work with `ByteStreamType::Unknown`.
This was done by adding two new methods to `ByteStreamType`:
`is_binary_coercible` and `is_string_coercible`. These return true if
the type is `Unknown` or matches the type in the method name.
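
A pared-down sketch of those two methods (the real `ByteStreamType` lives
in `nu-protocol`):

```rust
#[derive(Clone, Copy, PartialEq)]
enum ByteStreamType {
    Binary,
    String,
    Unknown,
}

impl ByteStreamType {
    /// True if the stream can be treated as binary data.
    fn is_binary_coercible(self) -> bool {
        matches!(self, ByteStreamType::Binary | ByteStreamType::Unknown)
    }

    /// True if the stream can be treated as string data.
    fn is_string_coercible(self) -> bool {
        matches!(self, ByteStreamType::String | ByteStreamType::Unknown)
    }
}
```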
2024-05-24 17:54:38 -07:00
b06f31d3c6 Make from json --objects streaming (#12949)
# Description

Makes the `from json --objects` command produce a stream, and read
lazily from an input stream to produce its output.

Also added a helper, `PipelineData::get_type()`, to make it easier to
construct a wrong type error message when matching on `PipelineData`. I
expect checking `PipelineData` for either a string value or an `Unknown`
or `String` typed `ByteStream` will be very, very common. I would have
liked to have a helper that just returns a readable stream from either,
but that would either be a bespoke enum or a `Box<dyn BufRead>`, which
feels like it wouldn't be so great for performance. So instead, taking
the approach I did here is probably better - having a function that
accepts the `impl BufRead` and matching to use it.
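
As a rough sketch of the lazy reading described above, assuming
`serde_json` for per-line decoding (nushell's real implementation differs
in the details):

```rust
use std::io::BufRead;

/// Hypothetical sketch: yield one parsed JSON value per input line,
/// lazily, instead of collecting the whole input first.
fn json_objects(
    input: impl BufRead,
) -> impl Iterator<Item = serde_json::Result<serde_json::Value>> {
    input
        .lines()
        .filter_map(Result::ok)                 // drop I/O errors for brevity
        .filter(|line| !line.trim().is_empty()) // skip blank lines
        .map(|line| serde_json::from_str(&line))
}
```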

# User-Facing Changes

- `from json --objects` no longer collects its input, and can be used
for large datasets or streams that produce values over time.

# Tests + Formatting
All passing.

# After Submitting
- [ ] release notes

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-24 23:37:50 +00:00
84b7a99adf Revert "Polars lazy refactor (#12669)" (#12962)
This reverts commit 68adc4657f.

# Description

Reverts the lazyframe refactor (#12669) for the next release, since
there are still a few lingering issues. This temporarily solves #12863
and #12828. After the release, the lazyframes can be added back and
cleaned up.
2024-05-24 18:09:26 -05:00
7d11c28eea Revert "Remove std::env::set_current_dir() call from EngineState::merge_env()" (#12954)
Reverts nushell/nushell#12922
2024-05-24 11:09:59 -05:00
bf07806b1b Use cwd in grid (#12947)
# Description
Fixes #12946. The `grid` command does not use the cwd when trying to get
the icon or color for a file/path.
2024-05-23 20:38:47 +00:00
0b5a4c0d95 explore refactoring+clarification (#12940)
Another very boring PR cleaning up and documenting some of `explore`'s
innards. Mostly renaming things that I found confusing or vague when
reading through the code, also adding some comments.
2024-05-23 08:51:39 -05:00
f53aa6fcbf fix std help (#12943)
# Description
Fixes: #12941

~~The issue is caused by some columns (is_builtin, is_plugin, is_custom,
is_keyword) being removed in #10023~~
Edit: I'm wrong

# Tests + Formatting
Added one test for `std help`
2024-05-23 08:51:02 -05:00
2612a167e3 Remove list support in with-env (#12939)
# Description
Following from #12523, this PR removes support for lists of environment
variables in the `with-env` command. Rather, only records will be
supported now.

# After Submitting
Update examples using the list form in the docs and book.
2024-05-23 13:53:55 +08:00
c7097ca937 explore cleanup: remove+move binary viewer config (#12920)
Small change, removing 4 more configuration options from `explore`'s
binary viewer:

1. `show_index`
2. `show_data`
3. `show_ascii`
4. `show_split`

These controlled whether the 3 columns in the binary viewer (index, hex
data, ASCII) and the pipe separator (`|`) in between them are shown. I
don't think we need this level of configurability until the `explore`
command is more mature, and maybe even not then; we can just show them
all.

I think it's very unlikely that anyone is using these configuration
points.

Also, the row offset (e.g. how many rows we have scrolled down) was
being stored in config/settings when it's arguably not config; more like
internal state of the binary viewer. I moved it to a more appropriate
location and renamed it.
2024-05-22 20:06:14 -07:00
58cf0c56f8 add some completion tests (#12908)
# Description
```nushell
❯ ls
╭───┬───────┬──────┬──────┬──────────╮
│ # │ name  │ type │ size │ modified │
├───┼───────┼──────┼──────┼──────────┤
│ 0 │ a.txt │ file │  0 B │ now      │
╰───┴───────┴──────┴──────┴──────────╯

❯ ls a.
NO RECORDS FOUND
```

There was a completion issue in a previous version; I think @amtoine has
reproduced it before. But currently I can't reproduce it on the latest
main. To avoid such a regression, I added some tests for completion.

---------

Co-authored-by: Antoine Stevan <44101798+amtoine@users.noreply.github.com>
2024-05-23 10:47:06 +08:00
6c649809d3 Rewrite run_external.rs (#12921)
This PR is a complete rewrite of `run_external.rs`. The main goal of the
rewrite is improving readability, but it also fixes some bugs related to
argument handling and the PATH variable (fixes
https://github.com/nushell/nushell/issues/6011).

I'll discuss some technical details to make reviewing easier.

## Argument handling

Quoting arguments for external commands is hard. Like, *really* hard.
We've had more than a dozen issues and PRs dedicated to quoting
arguments (see Appendix) but the current implementation is still buggy.

Here's a demonstration of the buggy behavior:

```nu
let foo = "'bar'"
^touch $foo            # This creates a file named `bar`, but it should be `'bar'`
^touch ...[ "'bar'" ]  # Same
```

I'll describe how this PR deals with argument handling.

First, we'll introduce the concept of **bare strings**. Bare strings are
**string literals** that are either **unquoted** or **quoted by
backticks** [^1]. Strings within a list literal are NOT considered bare
strings, even if they are unquoted or quoted by backticks.

When a bare string is used as an argument to an external process, we need
to perform tilde-expansion, glob-expansion, and inner-quotes-removal, in
that order. "Inner-quotes-removal" means transforming from
`--option="value"` into `--option=value`.
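
For illustration, a simplified sketch of the inner-quotes-removal step
(the real rules also track escapes and backticks):

```rust
/// Simplified sketch: drop matched double or single quotes inside a
/// bare string, so `--option="value"` becomes `--option=value`.
fn remove_inner_quotes(arg: &str) -> String {
    let mut out = String::with_capacity(arg.len());
    let mut quote: Option<char> = None;
    for c in arg.chars() {
        match (quote, c) {
            (None, '"' | '\'') => quote = Some(c),  // opening quote: skip it
            (Some(q), c) if c == q => quote = None, // closing quote: skip it
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(remove_inner_quotes(r#"--option="value""#), "--option=value");
}
```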

## `.bat` files and CMD built-ins

On Windows, `.bat` files and `.cmd` files are considered executable, but
they need `CMD.exe` as the interpreter. The Rust standard library
supports running `.bat` files directly and will spawn `CMD.exe` under
the hood (see
[documentation](https://doc.rust-lang.org/std/process/index.html#windows-argument-splitting)).
However, other extensions are not supported [^2].

Nushell also supports a selected number of CMD built-ins. The problem
with CMD is that it uses a different set of quoting rules. Correctly
quoting for CMD requires using
[Command::raw_arg()](https://doc.rust-lang.org/std/os/windows/process/trait.CommandExt.html#tymethod.raw_arg)
and manually quoting CMD special characters, on top of quoting from the
Nushell side. ~~I decided that this is too complex and chose to reject
special characters in CMD built-ins instead [^3]. Hopefully this will
not affect real-world use cases.~~ I've implemented escaping that works
reasonably well.

## `which-support` feature

The `which` crate is now a hard dependency of `nu-command`, making the
`which-support` feature essentially useless. The `which` crate is
already a hard dependency of `nu-cli`, and we should consider removing
the `which-support` feature entirely.

## Appendix

Here's a list of quoting-related issues and PRs in rough chronological
order.

* https://github.com/nushell/nushell/issues/4609
* https://github.com/nushell/nushell/issues/4631
* https://github.com/nushell/nushell/issues/4601
  * https://github.com/nushell/nushell/pull/5846
* https://github.com/nushell/nushell/issues/5978
  * https://github.com/nushell/nushell/pull/6014
* https://github.com/nushell/nushell/issues/6154
  * https://github.com/nushell/nushell/pull/6161
* https://github.com/nushell/nushell/issues/6399
  * https://github.com/nushell/nushell/pull/6420
  * https://github.com/nushell/nushell/pull/6426
* https://github.com/nushell/nushell/issues/6465
* https://github.com/nushell/nushell/issues/6559
  * https://github.com/nushell/nushell/pull/6560

[^1]: The idea that backtick-quoted strings act like bare strings was
introduced by Kubouch and briefly mentioned in [the language
reference](https://www.nushell.sh/lang-guide/chapters/strings_and_text.html#backtick-quotes).

[^2]: The documentation also said "running .bat scripts in this way may
be removed in the future and so should not be relied upon", which is
another reason to move away from this. But again, quoting for CMD is
hard.

[^3]: If anyone wants to try, the best resource I found on the topic is
[this](https://daviddeley.com/autohotkey/parameters/parameters.htm).
2024-05-23 02:05:27 +00:00
64afb52ffa Fix leftover wrong column name (#12937)
# Description

Small fixup for https://github.com/nushell/nushell/pull/12930
2024-05-22 21:24:22 +00:00
ac4125f8ed fix range semantic in detect_columns, str substring, str index-of (#12894)
# Description
Fixes: https://github.com/nushell/nushell/issues/7761

It's still unsure if we want to change the `range semantic` itself, but
it's good to keep range semantic consistent between nushell commands.

# User-Facing Changes
### Before
```nushell
❯ "abc" | str substring 1..=2
b
```
### After
```nushell
❯ "abc" | str substring 1..=2
bc
```

# Tests + Formatting
Adjust tests to fit new behavior
2024-05-22 20:00:58 +03:00
7ede90cba5 Remove std::env::set_current_dir() call from EngineState::merge_env() (#12922)
As discussed in https://github.com/nushell/nushell/pull/12749, we no
longer need to call `std::env::set_current_dir()` to sync `$env.PWD`
with the actual working directory. This PR removes the call from
`EngineState::merge_env()`.
2024-05-22 19:58:27 +03:00
75689ec98a Small improvements to debug profile (#12930)

# Description

1. With the `-l` flag, `debug profile` now collects files and line
numbers of profiled pipeline elements

![profiler_lines](https://github.com/nushell/nushell/assets/25571562/b400a956-d958-4aff-aa4c-7e65da3f78fa)

2. Errors from the profiled closure will be reported instead of being
silently ignored.

![profiler_lines_error](https://github.com/nushell/nushell/assets/25571562/54f7ad7a-06a3-4d56-92c2-c3466917bee8)


# User-Facing Changes

New `--lines(-l)` flag to `debug profile`. The command will also fail if
the profiled closure fails, so technically it is a breaking change.


---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-22 19:56:51 +03:00
7de513a4e0 Implement streaming I/O for CSV and TSV commands (#12918)
# Description

Implements streaming for:

- `from csv`
- `from tsv`
- `to csv`
- `to tsv`

via the new string-typed ByteStream support.

# User-Facing Changes
Commands above. Also:

- `to csv` and `to tsv` now have `--columns <List(String)>`, to provide
the exact columns desired in the output. This is required for them to
have streaming output, because otherwise collecting the entire list is
necessary to determine the output columns. If we introduce
`TableStream`, this may become less necessary.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting
- [ ] release notes

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-22 16:55:24 +00:00
758c5d447a Add support for the ps command on FreeBSD, NetBSD, and OpenBSD (#12892)
# Description

I feel like it's a little sad that BSDs get to enjoy almost everything
other than the `ps` command, and there are some tests that rely on this
command, so I figured it would be fun to patch that and make it work.

The different BSDs have diverged from each other somewhat, but generally
have a similar enough API for reading process information via
`sysctl()`, with some slightly different args.

This supports FreeBSD with the `freebsd` module, and NetBSD and OpenBSD
with the `netbsd` module. OpenBSD is a fork of NetBSD and the interface
has some minor differences but many things are the same.

I had wanted to try to support DragonFlyBSD too, but their Rust version
in the latest release is only 1.72.0, which is too old for me to want to
try to compile rustc up to 1.77.2... but I will revisit this whenever
they do update it. Dragonfly is a fork of FreeBSD, so it's likely to be
more or less the same - I just don't want to enable it without testing
it.

Fixes #6862 (partially, we probably won't be adding `zfs list`)

# User-Facing Changes
`ps` added for FreeBSD, NetBSD, and OpenBSD.

# Tests + Formatting
The CI doesn't run tests for BSDs, so I'm not entirely sure if
everything was already passing before. (Frankly, it's unlikely.) But
nothing appears to be broken.

# After Submitting
- [ ] release notes?
- [ ] DragonflyBSD, whenever they do update Rust to something close
enough for me to try it
2024-05-22 08:13:45 -07:00
d7e75c0b70 Bump shadow-rs from 0.27.1 to 0.28.0 (#12932)
Bumps [shadow-rs](https://github.com/baoyachi/shadow-rs) from 0.27.1 to
0.28.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/baoyachi/shadow-rs/releases">shadow-rs's
releases</a>.</em></p>
<blockquote>
<h2>fix cargo clippy</h2>
<p><a
href="https://redirect.github.com/baoyachi/shadow-rs/issues/160">#160</a></p>
<p>Thx <a href="https://github.com/qartik"><code>@​qartik</code></a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ba9f8b0c2b"><code>ba9f8b0</code></a>
Update Cargo.toml</li>
<li><a
href="d1b724c1e7"><code>d1b724c</code></a>
Merge pull request <a
href="https://redirect.github.com/baoyachi/shadow-rs/issues/160">#160</a>
from qartik/patch-1</li>
<li><a
href="505108d5d6"><code>505108d</code></a>
Allow missing_docs for deprecated CLAP_VERSION constant</li>
<li>See full diff in <a
href="https://github.com/baoyachi/shadow-rs/compare/v0.27.1...v0.28.0">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-22 15:59:33 +08:00
3cf150727c Bump actions/checkout from 4.1.5 to 4.1.6 (#12934)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.5
to 4.1.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v4.1.6</h2>
<h2>What's Changed</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
<li>Update for 4.1.6 release by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1733">actions/checkout#1733</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.1.5...v4.1.6">https://github.com/actions/checkout/compare/v4.1.5...v4.1.6</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a5ac7e51b4"><code>a5ac7e5</code></a>
Update for 4.1.6 release (<a
href="https://redirect.github.com/actions/checkout/issues/1733">#1733</a>)</li>
<li><a
href="24ed1a3528"><code>24ed1a3</code></a>
Check platform for extension (<a
href="https://redirect.github.com/actions/checkout/issues/1732">#1732</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4.1.5...v4.1.6">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-22 11:22:02 +08:00
f83439fdda Add completer for std help (#12929)
# Description

While each of the `help <subcommands>` in `std` had completers, there
wasn't one for the main `help` command.

This adds all internals and custom commands (as with `help commands`) as
possible completions.

# User-Facing Changes

`help ` + <kbd>Tab</kbd> will now suggest completions for both the `help
<subcommands>` as well as all internal and custom commands.

# Tests + Formatting

Note: Cannot add tests for completion functions since they are
module-internal and not visible to test cases, that I can see.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-21 10:31:14 -05:00
1cdc39bc2a Update mimalloc to 0.1.42 (#12919)
# Description

This update fixes mimalloc for NetBSD

# Tests + Formatting

Tests are passing just fine
2024-05-21 07:47:24 +02:00
db37bead64 Remove unused dependencies (#12917)
- **Remove unused `pathdiff` dep in `nu-cli`**
- **Remove unused `serde_json` dep on `nu-protocol`**
- Unnecessary after moving the plugin file to msgpack (still a
dev-dependency)
2024-05-21 01:09:28 +00:00
6e050f5634 explore: consolidate padding config, handle ByteStream, tweak naming+comments (#12915)
Some minor changes to `explore`, continuing on my mission to simplify
the command in preparation for a larger UX overhaul:

1. Consolidate padding configuration. I don't think we need separate
config points for the (optional) index column and regular data columns
in the normal pager; they can share padding configuration. Likewise, in
the binary viewer all 3 columns (index, data, ASCII) had their
left+right padding configured independently.
2. Update `explore` so we use the binary viewer for the new `ByteStream`
type. `cat foo.txt | into binary | explore` was not using the binary
viewer after the `ByteStream` changes.
3. Tweak the naming of a few helper functions, add a comment

I've put the changes in separate commits to make them easier to review.

---------

Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
2024-05-20 22:03:21 +02:00
905e3d0715 Remove dataframes crate and feature (#12889)
# Description
Removes the old `nu-cmd-dataframe` crate in favor of the polars plugin.
As such, this PR also removes the `dataframe` feature, related CI, and
full releases of nushell.
2024-05-20 17:22:08 +00:00
4f69ba172e add math min and math max to bench command (#12913)
# Description

This PR adds min and max to the bench command.
```nushell
❯ use std bench
❯ bench { dply -c 'parquet("./data.parquet") | group_by(year) | summarize(count = n(), sum = sum(geo_count)) | show()' | complete | null } --rounds 100 --verbose
100 / 100
╭───────┬───────────────────╮
│ mean  │ 71ms 358µs 850ns  │
│ min   │ 66ms 457µs 583ns  │
│ max   │ 120ms 338µs 167ns │
│ std   │ 6ms 553µs 949ns   │
│ times │ [list 100 items]  │
╰───────┴───────────────────╯
```

2024-05-20 10:08:03 -05:00
c98960d053 Take owned Read and Write (#12909)
# Description
As @YizhePKU pointed out, the [Rust API
guidelines](https://rust-lang.github.io/api-guidelines/interoperability.html#generic-readerwriter-functions-take-r-read-and-w-write-by-value-c-rw-value)
recommend that generic functions take readers and writers by value and
not by reference. This PR changes `copy_with_interrupt` and a few other
places to take owned `Read` and `Write` instead of mutable references.
2024-05-20 15:10:36 +02:00
c61075e20e Add string/binary type color to ByteStream (#12897)
# Description

This PR allows byte streams to optionally be colored as being
specifically binary or string data, which guarantees that they'll be
converted to `Binary` or `String` appropriately on `into_value()`,
making them compatible with `Type` guarantees. This makes them
significantly more broadly usable for command input and output.

There is still an `Unknown` type for byte streams coming from external
commands, which uses the same behavior as we previously did where it's a
string if it's UTF-8.

A small number of commands were updated to take advantage of this, just
to prove the point. I will be adding more after this merges.

# User-Facing Changes
- New types in `describe`: `string (stream)`, `binary (stream)`
- These commands now return a stream if their input was a stream:
  - `into binary`
  - `into string`
  - `bytes collect`
  - `str join`
  - `first` (binary)
  - `last` (binary)
  - `take` (binary)
  - `skip` (binary)
- Streams that are explicitly binary colored will print as a streaming
hexdump
  - example:
    ```nushell
    1.. | each { into binary } | bytes collect
    ```

# Tests + Formatting
I've added some tests to cover it at a basic level, and it doesn't break
anything existing, but I do think more would be nice. Some of those will
come when I modify more commands to stream.

# After Submitting
There are a few things I'm not quite satisfied with:

- **String trimming behavior.** We automatically trim newlines from
streams from external commands, but I don't think we should do this with
internal commands. If I call a command that happens to turn my string
into a stream, I don't want the newline to suddenly disappear. I changed
this to specifically do it only on `Child` and `File`, but I don't know
if this is quite right, and maybe we should bring back the old flag for
`trim_end_newline`
- **Known binary always resulting in a hexdump.** It would be nice to
have a `print --raw`, so that we can put binary data on stdout
explicitly if we want to. This PR doesn't change how external commands
work though - they still dump straight to stdout.

Otherwise, here's the normal checklist:

- [ ] release notes
- [ ] docs update for plugin protocol changes (added `type` field)

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-20 00:35:32 +00:00
baeba19b22 Make get_full_help take &dyn Command (#12903)
# Description
Changes `get_full_help` to take a `&dyn Command` instead of multiple
arguments (`&Signature`, `&Examples` `is_parser_keyword`). All of these
arguments can be gathered from a `Command`, so there is no need to pass
the pieces to `get_full_help`.

This PR also fixes an issue where the search terms are not shown if
`--help` is used on a command.
2024-05-19 19:56:33 +02:00
474293bf1c Clear environment for child Commands (#12901)
# Description
There is a bug when `hide-env` is used on environment variables that
were present at shell startup. Namely, child processes still inherit the
hidden environment variable. This PR fixes #12900, fixes #11495, and
fixes #7937.
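
The idea behind the fix, as a hedged sketch using only the standard
library (the helper name is hypothetical): start child commands from an
empty environment and pass through only the visible variables.

```rust
use std::collections::HashMap;
use std::process::Command;

/// Hypothetical sketch: instead of inheriting the parent environment
/// (which still contains hidden variables), clear it and re-add only
/// the variables visible in the shell's scope.
fn spawn_with_env(program: &str, visible_env: &HashMap<String, String>) -> Command {
    let mut cmd = Command::new(program);
    cmd.env_clear();
    cmd.envs(visible_env);
    cmd
}
```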

# Tests + Formatting
Added a test.
2024-05-19 15:35:07 +00:00
cc9f41e553 Use CommandType in more places (#12832)
# Description
Kind of a vague title, but this PR does two main things:
1. Rather than overriding functions like `Command::is_parser_keyword`,
this PR instead changes commands to override `Command::command_type`.
The `CommandType` returned by `Command::command_type` is then used to
automatically determine whether `Command::is_parser_keyword` and the
other `is_{type}` functions should return true. These changes allow us
to remove the `CommandType::Other` case and should also guarantee that
only one of the `is_{type}` functions on `Command` will return true (see
the sketch after this list).
2. Uses the new, reworked `Command::command_type` function in the `scope
commands` and `which` commands.
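
A pared-down sketch of the pattern from point 1 (the variants and method
names here are abbreviated; nushell's real trait and enum differ in
detail):

```rust
#[derive(Clone, Copy, PartialEq)]
enum CommandType {
    Builtin,
    Keyword,
    Plugin,
}

trait Command {
    fn command_type(&self) -> CommandType {
        CommandType::Builtin
    }

    // Derived from `command_type`, so the answers can never disagree
    // and exactly one `is_*` predicate holds per command.
    fn is_keyword(&self) -> bool {
        self.command_type() == CommandType::Keyword
    }
    fn is_plugin(&self) -> bool {
        self.command_type() == CommandType::Plugin
    }
}
```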


# User-Facing Changes
- Breaking change for `scope commands`: multiple columns (`is_builtin`,
`is_keyword`, `is_plugin`, etc.) have been merged into the `type`
column.
- Breaking change: the `which` command can now report `plugin` or
`keyword` instead of `built-in` in the `type` column. It may also now
report `external` instead of `custom` in the `type` column for known
`extern`s.
2024-05-18 23:37:31 +00:00
580c60bb82 Preserve metadata in more places (#12848)
# Description
This PR makes some commands and areas of code preserve pipeline
metadata. This is in an attempt to make the issue described in #12599
and #9456 less likely to occur. That is, reading and writing to the same
file in a pipeline will result in an empty file. Since we preserve
metadata in more places now, there will be a higher chance that we
successfully detect this error case and abort the pipeline.
2024-05-17 17:59:32 +00:00
c10aa2cf09 collect: don't require a closure (#12788)
# Description

This changes the `collect` command so that it doesn't require a closure.
Still allowed, optionally.

Before:

```nushell
open foo.json | insert foo bar | collect { save -f foo.json }
```

After:

```nushell
open foo.json | insert foo bar | collect | save -f foo.json
```

The closure argument isn't really necessary, as collect values are also
supported as `PipelineData`.

# User-Facing Changes
- `collect` command changed

# Tests + Formatting
Example changed to reflect.

# After Submitting
- [ ] release notes
- [ ] we may want to deprecate the closure arg?
2024-05-17 18:46:03 +02:00
e3db6ea04a Exclude polars from ensure_plugins_built(), for performance reasons (#12896)
# Description

We have been building `nu_plugin_polars` unnecessarily during `cargo
test`, which is very slow. All of its tests are run within its own
crate, which happens during the plugins CI phase.

This should speed up the CI a bit.
2024-05-17 15:04:59 +00:00
59f7c523fa Fix the way the output of table is printed in print() (#12895)
# Description

Forgot that I fixed this already on my branch, but when printing without
a display output hook, the implicit call to `table` gets its output
mangled with newlines (since #12774). This happens when running `nu -c`
or a script file.

Here's that fix in one PR so it can be merged easily.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-17 07:18:18 -07:00
8adf3406e5 allow define it as a variable inside closure (#12888)
# Description
Fixes: #12690 

The issue appeared after
https://github.com/nushell/nushell/pull/12056 was merged. It raises an
error if the user doesn't supply a required parameter when running a
closure with `do`.
And the parser adds a `$it` parameter when parsing a closure or block
expression.

I believe the previous behavior exists because we allowed such syntax in
a previous version (0.44):
```nushell
let x = { print $it }
```
But it's no longer allowed after 0.60, so I think they can be removed.

# User-Facing Changes
```nushell
let tmp = {
  let it = 42
  print $it
}

do -c $tmp
```
should be possible again.

# Tests + Formatting
Added 1 test
2024-05-17 00:03:13 +00:00
6891267b53 Support ByteStreams in bytes starts-with and bytes ends-with (#12887)
# Description
Restores `bytes starts-with` so that it is able to work with byte
streams once again. For parity/consistency, this PR also adds byte
stream support to `bytes ends-with`.

# User-Facing Changes
- `bytes ends-with` now supports byte streams.

# Tests + Formatting
Re-enabled tests for `bytes starts-with` and added tests for `bytes
ends-with`.
2024-05-17 07:59:08 +08:00
aec41f3df0 Add Span merging functions (#12511)
# Description
This PR adds a few functions to `Span` for merging spans together:
- `Span::append`: merges two spans that are known to be in order.
- `Span::concat`: returns a span that encompasses all the spans in a
slice. The spans must be in order.
- `Span::merge`: merges two spans (no order necessary).
- `Span::merge_many`: merges an iterator of spans into a single span (no
order necessary).

These are meant to replace the free-standing `nu_protocol::span`
function.

The spans in a `LiteCommand` (the `parts`) should always be in order
based on the lite parser and lexer. So, the parser code sees the most
usage of `Span::append` and `Span::concat` where the order is known. In
other code areas, `Span::merge` and `Span::merge_many` are used since
the order between spans is often not known.
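
A minimal sketch of the merging semantics described above (a simplified
`Span`; the real type lives in `nu-protocol`):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Span {
    start: usize,
    end: usize,
}

impl Span {
    /// Merge two spans in either order.
    fn merge(self, other: Span) -> Span {
        Span {
            start: self.start.min(other.start),
            end: self.end.max(other.end),
        }
    }

    /// Merge two spans already known to be in order: just extend the end.
    fn append(self, other: Span) -> Span {
        debug_assert!(self.start <= other.start);
        Span { start: self.start, end: other.end }
    }
}
```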
2024-05-16 22:34:49 +00:00
2a09dccc11 Bytestream touchup (#12886)
# Description
Adds some docs and a small fix to `Chunks`.
2024-05-16 21:15:20 +00:00
1c00a6ca5e sync up with reedline changes (#12881)
# Description

sync up nushell with reedline's latest minor changes. Not quite sure why
itertools downgraded to 0.11.0 when nushell and reedline have it set to
0.12.0.
2024-05-16 22:26:03 +02:00
6fd854ed9f Replace ExternalStream with new ByteStream type (#12774)
# Description
This PR introduces a `ByteStream` type which is a `Read`-able stream of
bytes. Internally, it has an enum over three different byte stream
sources:
```rust
pub enum ByteStreamSource {
    Read(Box<dyn Read + Send + 'static>),
    File(File),
    Child(ChildProcess),
}
```

This is in comparison to the current `RawStream` type, which is an
`Iterator<Item = Vec<u8>>` and has to allocate for each read chunk.

Currently, `PipelineData::ExternalStream` serves a weird dual role where
it is either external command output or a wrapper around `RawStream`.
`ByteStream` makes this distinction more clear (via `ByteStreamSource`)
and replaces `PipelineData::ExternalStream` in this PR:
```rust
pub enum PipelineData {
    Empty,
    Value(Value, Option<PipelineMetadata>),
    ListStream(ListStream, Option<PipelineMetadata>),
    ByteStream(ByteStream, Option<PipelineMetadata>),
}
```

The PR is relatively large, but a decent amount of it is just repetitive
changes.

This PR fixes #7017, fixes #10763, and fixes #12369.

This PR also improves performance when piping external commands. Nushell
should, in most cases, have competitive pipeline throughput compared to,
e.g., bash.
| Command | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| -------------------------------------------------- | -------------:|
------------:| -----------:|
| `throughput \| rg 'x'` | 3059 | 3744 | 3739 |
| `throughput \| nu --testbin relay o> /dev/null` | 3508 | 8087 | 8136 |

# User-Facing Changes
- This is a breaking change for the plugin communication protocol,
because the `ExternalStreamInfo` was replaced with `ByteStreamInfo`.
Plugins now only have to deal with a single input stream, as opposed to
the previous three streams: stdout, stderr, and exit code.
- The output of `describe` has been changed for external/byte streams.
- Temporary breaking change: `bytes starts-with` no longer works with
byte streams. This is to keep the PR smaller, and `bytes ends-with`
already does not work on byte streams.
- If a process core dumped, then instead of having a `Value::Error` in
the `exit_code` column of the output returned from `complete`, it now is
a `Value::Int` with the negation of the signal number.

# After Submitting
- Update docs and book as necessary
- Release notes (e.g., plugin protocol changes)
- Adapt/convert commands to work with byte streams (high priority is
`str length`, `bytes starts-with`, and maybe `bytes ends-with`).
- Refactor the `tee` code, Devyn has already done some work on this.

---------

Co-authored-by: Devyn Cairns <devyn.cairns@gmail.com>
2024-05-16 07:11:18 -07:00
1b8eb23785 allow passing float value to custom command (#12879)
# Description
Fixes: #12691 

In `parse_short_flag`, it only checks special cases for
`SyntaxShape::Int`, `SyntaxShape::Number` to allow a flag to be a
number. This pr adds `SyntaxShape::Float` to allow a flag to be float
number.

# User-Facing Changes
This is possible after this pr:
```nushell
def spam [val: float] { $val }; 
spam -1.4
```

# Tests + Formatting
Added 1 test
2024-05-16 10:50:29 +02:00
e20113a0eb Remove stack debug assert (#12861)
# Description
In order for `Stack::unwrap_unique` to work as intended, we currently
manually track all references to the parent stack and ensure that they
are cleared before calling `Stack::unwrap_unique` in the REPL. We also
only call `Stack::unwrap_unique` after all code from the current REPL
entry has finished executing. Since `Value`s cannot store `Stack`
references, then this should have worked in theory. However, we forgot
to account for threads. `run-external` (and maybe the plugin writers)
can spawn threads that clone the `Stack`, holding on to references of
the parent stack. These threads are not waited/joined upon, and so may
finish after the eval has already returned. This PR removes the
`Stack::unwrap_unique` function and associated debug assert that was
[causing
panics](https://gist.github.com/cablehead/f3d2608a1629e607c2d75290829354f7)
like @cablehead found.

# After Submitting
Make values cheaper to clone as a more robust solution to the
performance issues with cloning the stack.

---------

Co-authored-by: Wind <WindSoilder@outlook.com>
2024-05-15 22:59:10 +00:00
6f3dbc97bb fixed syntax shape requirements for --quantiles option for polars summary (#12878)
Fix for #12730

All of the code expected a list of floats, but the syntax shape expected
a table. Resolved by changing the syntax shape to list of floats.

cc: @maxim-uvarov
2024-05-15 16:55:07 -05:00
06fe7d1e16 Remove usages of Call::positional_nth (#12871)
# Description
Following from #12867, this PR replaces usages of `Call::positional_nth`
with existing spans. This removes several `expect`s from the code.

Also remove unused `positional_nth_mut` and `positional_iter_mut`
2024-05-15 19:59:42 +02:00
b08135d877 Fixed small error in the help-examples for the get command (#12877)
# Description

Another small error in Help, this time for the `get` command example.

# User-Facing Changes

Help only
2024-05-15 19:49:08 +02:00
72b880662b Fixed a nitpick usage-help error - closure v. block (#12876)
# Description

So minor, but had to be fixed sometime. `help each while` used the term
"block" in the "usage", but the argument type is a closure.

# User-Facing Changes

help-only
2024-05-15 18:16:59 +02:00
defed3001d make it clearer what is being loaded with --log-level info (#12875)
# Description

A common question we get is what config files are loaded when and with
what parameters. It's for this reason that I wrote [this
gist](https://gist.github.com/fdncred/b87b784f04984dc31a150baed9ad2447).
Another way to figure this out is to use `nu --log-level info`. This
will show some performance timings but will also show what is being
loaded when. For the most part the `[INFO]` lines show the performance
timings and the `[WARN]` lines show the files.

This PR tries to make things a little bit clearer when using the
`--log-level info` parameter.

2024-05-15 09:44:09 -05:00
0cfbdc909e Fix sys panic (#12846)
# Description
This should fix #10155 where the `sys` command can panic due to date
math in certain cases / on certain systems.

# User-Facing Changes
The `boot_time` column now has a date value instead of a formatted date
string. This is technically a breaking change.
2024-05-15 15:40:04 +08:00
a7807735b1 Add a passing test for interactivity on slow pipelines (#12865)
# Description

This PR adds a single test to assert interactivity on slow pipelines

Currently the timeout is set to 6 seconds, as the test can sometimes
take ~3 secs to run on my local M1 Mac Air, which I don't think is an
indication of a slow pipeline, but rather slow test startup time...
2024-05-15 01:48:27 +00:00
155934f783 make better messages for incomplete string (#12868)
# Description
Fixes: #12795

The issue is caused by `ParseError::UnexpectedEof` pointing at an empty
position, so no detailed message is displayed.
To fix the issue, I adjust the start of the span to `span.end - 1`. This
way, we can make sure that it never points to an empty position.

After lexing an item, I also reordered the unclosed-character checks. Now
unclosed opening delimiters are checked first.
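
The span adjustment amounts to something like this sketch (a hypothetical
helper; the real change lives in the parser's error construction):

```rust
/// Anchor an end-of-input error on the last real character instead of
/// the empty position one past the source, so the error snippet can
/// point at something visible.
fn eof_error_span(start: usize, end: usize) -> (usize, usize) {
    (end.saturating_sub(1).max(start), end)
}
```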

# User-Facing Changes
After this pr, it outputs detailed error message for incomplete string
when running scripts.

## Before
```
❯ nu -c "'ab"
Error: nu::parser::unexpected_eof

  × Unexpected end of code.
   ╭─[source:1:4]
 1 │ 'ab
   ╰────
> ./target/debug/nu -c "r#'ab"
Error: nu::parser::unexpected_eof

  × Unexpected end of code.
   ╭─[source:1:6]
 1 │ r#'ab
   ╰────
```
## After
```
> nu -c "'ab"
Error: nu::parser::unexpected_eof

  × Unexpected end of code.
   ╭─[source:1:3]
 1 │ 'ab
   ·   ┬
   ·   ╰── expected closing '
   ╰────
> ./target/debug/nu -c "r#'ab"
Error: nu::parser::unexpected_eof

  × Unexpected end of code.
   ╭─[source:1:5]
 1 │ r#'ab
   ·     ┬
   ·     ╰── expected closing '#
   ╰────
```


# Tests + Formatting
Added some tests for incomplete string.

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-15 01:14:11 +00:00
9bf4d3ece6 Bump rust-embed from 8.3.0 to 8.4.0 (#12870)
Bumps [rust-embed](https://github.com/pyros2097/rust-embed) from 8.3.0
to 8.4.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyrossh/rust-embed/blob/master/changelog.md">rust-embed's
changelog</a>.</em></p>
<blockquote>
<h2>[8.4.0] - 2024-05-11</h2>
<ul>
<li>Re-export RustEmbed as Embed <a
href="https://redirect.github.com/pyrossh/rust-embed/pull/245/files">#245</a>.
Thanks to <a href="https://github.com/pyrossh">pyrossh</a></li>
<li>Do not build glob matchers repeatedly when include-exclude feature
is enabled <a
href="https://redirect.github.com/pyrossh/rust-embed/pull/244/files">#244</a>.
Thanks to <a href="https://github.com/osiewicz">osiewicz</a></li>
<li>Add <code>metadata_only</code> attribute <a
href="https://redirect.github.com/pyrossh/rust-embed/pull/241/files">#241</a>.
Thanks to <a href="https://github.com/ddfisher">ddfisher</a></li>
<li>Replace <code>expect</code> with a safer alternative that returns
<code>None</code> instead <a
href="https://redirect.github.com/pyrossh/rust-embed/pull/240/files">#240</a>.
Thanks to <a href="https://github.com/costinsin">costinsin</a></li>
<li>Eliminate unnecessary <code>to_path</code> call <a
href="https://redirect.github.com/pyrossh/rust-embed/pull/239/files">#239</a>.
Thanks to <a href="https://github.com/smoelius">smoelius</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/pyros2097/rust-embed/commits">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-15 09:06:09 +08:00
cb64c78a3b Bump interprocess from 2.0.1 to 2.1.0 (#12869)
Bumps [interprocess](https://github.com/kotauskas/interprocess) from
2.0.1 to 2.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kotauskas/interprocess/releases">interprocess's
releases</a>.</em></p>
<blockquote>
<h2>2.1.0 – listeners are now iterators</h2>
<ul>
<li>Fixes <a
href="https://redirect.github.com/kotauskas/interprocess/issues/49">#49</a></li>
<li>Adds <code>Iterator</code> impl on local socket listeners (closes <a
href="https://redirect.github.com/kotauskas/interprocess/issues/64">#64</a>)</li>
<li>Miscellaneous documentation fixes</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b79d363615"><code>b79d363</code></a>
Thank you Windows, very cool</li>
<li><a
href="2ab1418db7"><code>2ab1418</code></a>
Move a bunch of goalposts</li>
<li><a
href="d26ed39bd2"><code>d26ed39</code></a>
Use macro in Windows <code>UnnamedPipe</code> builder</li>
<li><a
href="f9528885e4"><code>f952888</code></a>
I'm not adding a <code>build.rs</code> to silence a warning</li>
<li><a
href="5233ffb7c9"><code>5233ffb</code></a>
Fix <a
href="https://redirect.github.com/kotauskas/interprocess/issues/49">#49</a>
and add test</li>
<li><a
href="85d3e1861a"><code>85d3e18</code></a>
Complete half-done move to <code>os</code> module in tests</li>
<li><a
href="7715dbdd49"><code>7715dbd</code></a>
Listeners are now iterators</li>
<li><a
href="8a47261ddc"><code>8a47261</code></a>
Bump version</li>
<li>See full diff in <a
href="https://github.com/kotauskas/interprocess/compare/2.0.1...2.1.0">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-15 09:05:55 +08:00
c3da44cbb7 Fix char panic (#12867)
# Description
The `char` command can panic due to a failed `expect`: `char --integer
...[77 78 79]`

This PR fixes the panic for the `--integer` flag and also the
`--unicode` flag.

# After Submitting
Check other commands and places where similar bugs can occur due to
usages of `Call::positional_nth` and related methods.
2024-05-14 21:10:06 +00:00
aa46bc97b3 Search terms for compact command (#12864)
# Description

There was a question in Discord today about how to remove empty rows
from a table. The user found the `compact` command on their own, but I
realized that there were no search terms on the command. I've added
'empty' and 'remove', although I subsequently figured out that 'empty'
is found in the "usage" anyway. That said, I don't think it hurts to
have good search terms behind it regardless.

# User-Facing Changes

Just the help

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

2024-05-14 09:21:50 -05:00
2ed77aef1d Fix panic when exploring empty dictionary (#12860)
- fixes #12841 

# Description
Add boundary checks to ensure that the row and column chosen in
RecordView are not over the length of the possible row and columns. If
we are out of bounds, we default to Value::nothing.

# Tests + Formatting
Tests ran and formatting done
2024-05-14 14:13:49 +00:00
cd381b74e0 Fix improperly escaped strings in stor insert (#12820)
- fixes #12764 

Replaced the custom logic with the `values_to_sql` method that is already
used in `crate::database`. This ensures that parameter handling is the
same between `sqlite` and `stor`.
2024-05-13 20:22:39 -05:00
98369985b1 Allow custom value operations to work on eager and lazy dataframes interchangeably. (#12819)
Fixes Bug #12809 

The example that @maxim-uvarov posted now works as expected:

<img width="1223" alt="Screenshot 2024-05-09 at 16 21 01"
src="https://github.com/nushell/nushell/assets/56345/a4df62e3-e432-4c09-8e25-9a6c198741a3">
2024-05-13 18:17:31 -05:00
aaf973bbba Add Stack::stdout_file and Stack::stderr_file to capture stdout/-err of external commands (#12857)
# Description
In this PR I added two new methods to `Stack`, `stdout_file` and
`stderr_file`. These two modify the inner `StackOutDest` and set a
`File` into the `stdout` and `stderr` respectively. Unlike the
`push_redirection` methods, these do not require holding a guard the
whole time, but they do require ownership of the stack.

This is primarily useful for applications that use `nu` as a language but
not `nushell` itself.

This PR replaces my first attempt #12851 to add a way to capture
stdout/-err of external commands. Capturing the stdout without having to
write into a file is possible with crates like
[`os_pipe`](https://docs.rs/os_pipe), an example for this is given in
the doc comment of the `stdout_file` command and can be executed as a
doctest (although it doesn't validate that you actually got any data).

This implementation takes `File` as input to make it easier to implement
on different operating systems without having to worry about
`OwnedHandle` or `OwnedFd`. Also, this doesn't expose any use of
`os_pipe`, so its types don't leak into this API and it doesn't become a
dependency.

As in my previous attempt, @IanManske guided me here.
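
A rough embedding sketch, assuming a signature like
`fn stdout_file(self, file: File) -> Self` as described above (the
capture path and flow here are illustrative only):

```rust
use std::fs::File;
use std::io::Read;
use nu_protocol::engine::Stack;

/// Hypothetical sketch: redirect external command output into a file
/// we own, evaluate some nu code, then read the captured output back.
fn capture_stdout(stack: Stack) -> std::io::Result<(Stack, String)> {
    let path = std::env::temp_dir().join("nu-stdout-capture");
    let stack = stack.stdout_file(File::create(&path)?);
    // ... evaluate nu source with `stack` here (elided) ...
    let mut out = String::new();
    File::open(&path)?.read_to_string(&mut out)?;
    Ok((stack, out))
}
```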

# User-Facing Changes
This change has no effect on `nushell` and therefore no user-facing
changes.

# Tests + Formatting
This only exposes a new way of using already existing code and has
therefore no further testing. The doctest succeeds on my machine at
least (x86 Windows, 64 Bit).

# After Submitting
All the required documentation is already part of this PR.
2024-05-13 18:48:38 +00:00
905ec88091 Update PR template (#12838)
# Description
Updates the command listed in the PR template to test the standard
library, following from #11151.
2024-05-13 08:45:44 -05:00
c4dca5fe03 Merged tests to produce a single binary (#12826)
This PR should close #7147 

# Description
Merged src/tests into /tests to produce a single binary.

![image](https://github.com/nushell/nushell/assets/94604837/84726469-d447-4619-b6d1-2d1415d0f42e)

# User-Facing Changes
No user facing changes

# Tests + Formatting
Moved tests. Toolkit check pr passes.

# After Submitting

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-13 13:37:53 +00:00
c70c43aae9 Add example and search term for 'repeat' to the fill command (#12844)
# Description

It's commonly forgotten or overlooked that a lot of `std repeat`
functionality can be handled with the built-in `fill`. Added `repeat` as
a search term for `fill` to improve discoverability.

Also replaced one of the existing examples with one `fill`ing an empty
string, a la `repeat`. There were 6 examples already, and 3 of them
pretty much were variations on the same theme, so I repurposed one of
those rather than adding a 7th.
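
A sketch of the repurposed example (flag names as in the existing `fill`
help):

```nushell
# Filling an empty string reproduces `std repeat`:
'' | fill --width 5 --character '*'
# => *****
```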

# User-Facing Changes

Changes to `help` only

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`

# After Submitting

I assume the "Commands" doc is auto-generated from the `help`, but I'll
double-check that assumption.
2024-05-12 20:55:07 -05:00
30fc832035 Fix custom converters with save (#12833)
# Description
Fixes #10429 where `save` fails if a custom command is used as the file
format converter.

# Tests + Formatting
Added a test.
2024-05-12 13:19:28 +02:00
075535f869 remove --not flag for 'str contains' (#12837)
# Description
This PR resolves an inconsistency between different `str` subcommands,
notably `str contains`, `str starts-with` and `str ends-with`. Only the
`str contains` command has the `--not` flag, and a decision was made in
PR #12781 to remove the `--not` flag and use the `not` operator
instead.

Before:
`"blob" | str contains --not o`
After:
`not ("blob" | str contains o)` OR `"blob" | str contains o | not $in`

> Note, you can currently do all three, but the first will be broken
after this PR is merged.

# User-Facing Changes
- remove `--not(-n)` flag from `str contains` command
  - This is a breaking change!

# Tests + Formatting
- [x] Added tests
- [x] Ran `cargo fmt --all`
- [x] Ran `cargo clippy --workspace -- -D warnings -D
clippy::unwrap_used`
- [x] Ran `cargo test --workspace`
- [ ] Ran `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"`
    - I was unable to get this working.
```
Error: nu::parser::export_not_found

  × Export not found.
   ╭─[source:1:9]
 1 │ use std testing; testing run-tests --path crates/nu-std
   ·         ───┬───
   ·            ╰── could not find imports
   ╰────
```
^ I still can't figure out how to make this work 😂 

# After Submitting
Requires update of documentation
2024-05-11 23:13:36 +00:00
cab86f49c0 Fix pipe redirection into complete (#12818)
# Description
Fixes #12796 where a combined out and err pipe redirection (`o+e>|`)
into `complete` still provides separate `stdout` and `stderr` columns in
the record. Now, the combined output will be in the `stdout` column.
This PR also fixes a similar error with the `e>|` pipe redirection.
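
A quick illustration (commands illustrative):

```nushell
# Both streams are combined, so everything lands in the `stdout` column:
nu -c 'print out; print -e err' o+e>| complete
```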

# Tests + Formatting
Added two tests.
2024-05-11 15:32:00 +00:00
b9a7faad5a Implement PWD recovery (#12779)
This PR has two parts. The first part is the addition of the
`Stack::set_pwd()` API. It strips trailing slashes from paths for
convenience, but will reject otherwise bad paths, leaving PWD in a good
state. This should reduce the impact of faulty code incorrectly trying
to set PWD.
(https://github.com/nushell/nushell/pull/12760#issuecomment-2095393012)

The second part is implementing a PWD recovery mechanism. PWD can become
bad even when we did nothing wrong. For example, Unix allows you to
remove any directory when another process might still be using it, which
means PWD can just "disappear" right under our noses. This PR makes it possible
to use `cd` to reset PWD into a good state. Here's a demonstration:

```sh
mkdir /tmp/foo
cd /tmp/foo

# delete "/tmp/foo" in a subshell, because Nushell is smart and refuse to delete PWD
nu -c 'cd /; rm -r /tmp/foo'

ls          # Error:   × $env.PWD points to a non-existent directory
            # help: Use `cd` to reset $env.PWD into a good state

cd /
pwd         # prints /
```

Also, auto-cd should be working again.
2024-05-10 11:06:33 -05:00
70c01bbb26 Fix raw strings as external argument (#12817)
# Description
As discovered by @YizhePKU in a
[comment](https://github.com/nushell/nushell/pull/9956#issuecomment-2103123797)
in #9956, raw strings are not parsed properly when they are used as an
argument to an external command. This PR fixes that.

# Tests + Formatting
Added a test.
2024-05-10 07:50:31 +08:00
72d3860d05 Refactor the CLI code a bit (#12782)
# Description
Refactors the code in `nu-cli`, `main.rs`, `run.rs`, and few others.
Namely, I added `EngineState::generate_nu_constant` function to
eliminate some duplicate code. Otherwise, I changed a bunch of areas to
return errors instead of calling `std::process::exit`.

# User-Facing Changes
Should be none.
2024-05-10 07:29:27 +08:00
1b2e680059 Fix syntax highlighting for not (#12815)
# Description
Fixes #12813 where a panic occurs when syntax highlighting `not`. Also
fixes #12814 where syntax highlighting for `not` no longer works.

# User-Facing Changes
Bug fix.
2024-05-10 07:09:44 +08:00
7271ad7909 Pass Stack ref to Completer::fetch (#12783)
# Description
Adds an additional `&Stack` parameter to `Completer::fetch` so that the
completers don't have to store a `Stack` themselves. I also removed
unnecessary `EngineState`s from the completers, since the same
`EngineState` is available in the `working_set.permanent_state` also
passed to `Completer::fetch`.
2024-05-09 13:38:24 +08:00
3b3f48202c Refactor message printing in rm (#12799)
# Description
Changes the iterator in `rm` to be an iterator over
`Result<Option<String>, ShellError>` (an optional message or error)
instead of an iterator over `Value`. Then, the iterator is consumed and
each message is printed. This allows the
`PipelineData::print_not_formatted` method to be removed.
2024-05-09 13:36:47 +08:00
948b299e65 Fix/simplify cwd in benchmarks (#12812)
# Description
The benchmarks currently panic when trying to set the initial CWD. This
is because the code that sets the CWD also tries to get the CWD.
2024-05-08 19:16:57 -07:00
ba6f38510c Shrink Value by boxing Range/Closure (#12784)
# Description
On 64-bit platforms the current size of `Value` is 56 bytes. The
limiting variants were `Closure` and `Range`. Boxing the two reduces the
size of Value to 48 bytes. This is the minimal size possible with our
current 16-byte `Span` and any 24-byte `Vec` container which we use in
several variants. (Note the extra full 8-bytes necessary for the
discriminant or other smaller values due to the 8-byte alignment of
`usize`)

This leads to a size reduction of ~15% for `Value` and should overall
be beneficial as both `Range` and `Closure` are rarely used compared to
the primitive types or even our general container types.

# User-Facing Changes
Less memory used, potential runtime benefits.

(Too late in the evening to run the benchmarks myself right now)
2024-05-09 08:10:58 +08:00
92831d7efc feat: add an echo command to nu_plugin_example (#12754)
# Description

This PR adds a new `echo` command to the `nu_plugin_example` plugin that
simply [streams all of its input to its
output](https://github.com/nushell/nushell/pull/12754/files#diff-de9fcf086b8c373039dadcc2bcb664c6014c0b2af8568eab68c0b6666ac5ccceR47).

```
: "hi" | example echo
hi
```

The motivation for adding it is to have a convenient command to exercise
interactivity on slow pipelines.

I'll follow up on that front with [another
PR](https://github.com/cablehead/nushell/pull/1/files)

# Tests + Formatting

https://github.com/nushell/nushell/pull/12754/files#diff-de9fcf086b8c373039dadcc2bcb664c6014c0b2af8568eab68c0b6666ac5ccceR51-R55
2024-05-08 12:45:44 -07:00
5466da3b52 cleanup osc calls for shell_integration (#12810)
# Description

This PR is a continuation of #12629 and meant to address [Reilly's
stated
issue](https://github.com/nushell/nushell/pull/12629#issuecomment-2099660609).

With this PR, nushell should work more consistently with WezTerm on
Windows. However, that means continued scrolling with typing if osc133
is enabled. If it's possible to run WezTerm inside of vscode, then
having osc633 enabled will also cause the display to scroll with every
character typed. I think the cause of this is that reedline paints the
entire prompt on each character typed. We need to figure out how to fix
that, but that's in reedline.

For my purposes, I keep osc133 and osc633 set to true and don't use
WezTerm on Windows.

Thanks @rgwood for reporting the issue. I found several logic errors.
It's often good to come back to PRs and look at them with fresh eyes. I
think this is pretty close to logically correct now. However, I'm
approaching burn out on ansi escape codes so i could've missed
something.

Kudos to [escape-artist](https://github.com/rgwood/escape-artist) for
helping me debug the ansi escape codes that are actually being sent to
the terminal. It was an invaluable tool.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-05-08 13:34:04 -05:00
3b26c08dab Refactor parse command (#12791)
# Description
- Switches the `excess` in the `ParserStream` and
`ParseStreamerExternal` types from a `Vec` to a `VecDeque`
- Removes unnecessary clones to `stream_helper`
- Other simplifications and loop restructuring
- Merges the `ParseStreamer` and `ParseStreamerExternal` types into a
common `ParseIter`
- `parse` now streams for list values
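
A quick illustration of the last point (data and pattern illustrative):

```nushell
# List input is now parsed lazily instead of being collected up front:
seq 1 1000 | each {|i| $"item ($i)" } | parse "item {n}" | first 3
```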
2024-05-08 06:50:58 -05:00
e462b6cd99 Make the message when running a plugin exe directly clearer (#12806)
# Description

This changes the message that shows up when running a plugin executable
directly rather than as a plugin to direct the user to run `plugin add
--help`, which should have enough information to figure out what's going
on. The message previously just vaguely suggested that the user needs to
run the plugin "from within Nushell", which is not really enough - it
has to be added with `plugin add` to be used as a plugin.

Also fix docs for `plugin add` to mention `plugin use` rather than
`register` (oops)
2024-05-07 20:12:32 -07:00
f851b61cb7 Bump softprops/action-gh-release from 2.0.4 to 2.0.5 (#12803)
Bumps
[softprops/action-gh-release](https://github.com/softprops/action-gh-release)
from 2.0.4 to 2.0.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/softprops/action-gh-release/releases">softprops/action-gh-release's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.5</h2>
<ul>
<li>Factor in file names with spaces when upserting files <a
href="https://redirect.github.com/softprops/action-gh-release/pull/446">#446</a>
via <a
href="https://github.com/MystiPanda"><code>@​MystiPanda</code></a></li>
<li>Improvements to error handling <a
href="https://redirect.github.com/softprops/action-gh-release/pull/449">#449</a>
via <a href="https://github.com/till"><code>@​till</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md">softprops/action-gh-release's
changelog</a>.</em></p>
<blockquote>
<h2>2.0.5</h2>
<ul>
<li>Factor in file names with spaces when upserting files <a
href="https://redirect.github.com/softprops/action-gh-release/pull/446">#446</a>
via <a
href="https://github.com/MystiPanda"><code>@​MystiPanda</code></a></li>
<li>Improvements to error handling <a
href="https://redirect.github.com/softprops/action-gh-release/pull/449">#449</a>
via <a href="https://github.com/till"><code>@​till</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="69320dbe05"><code>69320db</code></a>
update changelog</li>
<li><a
href="9771ccf55f"><code>9771ccf</code></a>
update changelog rebuild dist</li>
<li><a
href="0a76e4214a"><code>0a76e42</code></a>
Fix: error handling (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/449">#449</a>)</li>
<li><a
href="3989e4b325"><code>3989e4b</code></a>
document impl detail</li>
<li><a
href="72e945e627"><code>72e945e</code></a>
update changelog</li>
<li><a
href="40bf9ec7aa"><code>40bf9ec</code></a>
fmt and build</li>
<li><a
href="998623f0c3"><code>998623f</code></a>
fix: support space in file name (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/446">#446</a>)</li>
<li><a
href="0979303f02"><code>0979303</code></a>
Fix failure (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/447">#447</a>)</li>
<li><a
href="9b795e5782"><code>9b795e5</code></a>
Update README.md (<a
href="https://redirect.github.com/softprops/action-gh-release/issues/432">#432</a>)</li>
<li>See full diff in <a
href="https://github.com/softprops/action-gh-release/compare/v2.0.4...v2.0.5">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| softprops/action-gh-release | [< 0.2, > 0.1.13] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=softprops/action-gh-release&package-manager=github_actions&previous-version=2.0.4&new-version=2.0.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-08 10:35:18 +08:00
cad22bb833 Bump actions/checkout from 4.1.4 to 4.1.5 (#12804)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.4
to 4.1.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v4.1.5</h2>
<h2>What's Changed</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
<li>README: Suggest <code>user.email</code> to be
<code>41898282+github-actions[bot]@users.noreply.github.com</code> by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1707">actions/checkout#1707</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.1.4...v4.1.5">https://github.com/actions/checkout/compare/v4.1.4...v4.1.5</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="44c2b7a8a4"><code>44c2b7a</code></a>
README: Suggest <code>user.email</code> to be
`41898282+github-actions[bot]<a
href="https://github.com/users"><code>@​users</code></a>.norepl...</li>
<li><a
href="8459bc0c7e"><code>8459bc0</code></a>
Bump actions/upload-artifact from 2 to 4 (<a
href="https://redirect.github.com/actions/checkout/issues/1695">#1695</a>)</li>
<li><a
href="3f603f6d5e"><code>3f603f6</code></a>
Bump actions/setup-node from 1 to 4 (<a
href="https://redirect.github.com/actions/checkout/issues/1696">#1696</a>)</li>
<li><a
href="fd084cde18"><code>fd084cd</code></a>
Bump github/codeql-action from 2 to 3 (<a
href="https://redirect.github.com/actions/checkout/issues/1694">#1694</a>)</li>
<li><a
href="9c1e94e0ad"><code>9c1e94e</code></a>
Update NPM dependencies (<a
href="https://redirect.github.com/actions/checkout/issues/1703">#1703</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4.1.4...v4.1.5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4.1.4&new-version=4.1.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-08 10:35:08 +08:00
7a86b98f61 Migrate to a new PWD API (part 2) (#12749)
Refer to #12603 for part 1.

We need to be careful when migrating to the new API, because the new API
has slightly different semantics (PWD can contain symlinks). This PR
handles the "obviously safe" part of the migrations. Namely, it handles
two specific use cases:

* Passing PWD into `canonicalize_with()`
* Passing PWD into `EngineState::merge_env()`

The first case is safe because symlinks are canonicalized away. The
second case is safe because `EngineState::merge_env()` only uses PWD to
call `std::env::set_current_dir()`, which shouldn't affect Nushell. The
commit message contains detailed stats on the updated files.

Because these migrations touch a lot of files, I want to keep these PRs
small to avoid merge conflicts.
2024-05-07 18:17:49 +03:00
b9331d1b08 Add sys users command (#12787)
# Description
Add a new `sys users` command which returns a table of the users of the
system. This is the same table that is currently present as
`(sys).host.sessions`. The same table has been removed from the recently
added `sys host` command.

# User-Facing Changes
Adds a new command. (The old `sys` command is left as is.)
2024-05-07 07:52:02 -05:00
c54d223ea0 Fix list spread syntax highlighting (#12793)
# Description
I broke syntax highlighting for list spreads in #12529. This should fix
#12792 😅. I just copied the code for highlighting record
spreads.
2024-05-07 13:41:47 +08:00
eccc558a4e describe refactor (#12770)
# Description

Refactors `describe` a bit. Namely, I added a `Description` enum to get
rid of `compact_primitive_description` and its awkward `Value` pattern
matching.
2024-05-06 23:20:46 +00:00
1038c64f80 Add sys subcommands (#12747)
# Description
Adds subcommands to `sys` corresponding to each column of the record
returned by `sys`. This is to alleviate the fact that `sys` now returns
a regular record, meaning that it must compute every column which might
take a noticeable amount of time. The subcommands, on the other hand,
only need to compute and return a subset of the data which should be
much faster. In fact, it should be as fast as before, since this is how
the lazy record worked (it would compute only each column as necessary).

I chose to add subcommands instead of having an optional cell-path
parameter on `sys`, since the cell-path parameter would:
- increase the code complexity (can access any value at any row or
nested column)
- prevent discovery with tab-completion
- hinder type checking and allow users to pass potentially invalid
columns
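
For example (subcommand names following the old `sys` columns):

```nushell
# Compute only the requested section instead of the full record:
sys host
sys mem
```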

# User-Facing Changes
Deprecates `sys` in favor of the new `sys` subcommands.
2024-05-06 23:20:27 +00:00
68adc4657f Polars lazy refactor (#12669)
This moves to predominantly supporting only lazy dataframes for most
operations. It removes a lot of the type conversion between lazy and
eager dataframes based on what was inputted into the command.

For the most part the changes will mean:
* You will need to run `polars collect` after performing operations
* The into-lazy command has been removed as it is redundant.
* When opening files a lazy frame will be outputted by default if the
reader supports lazy frames

A list of individual command changes can be found
[here](https://hackmd.io/@nucore/Bk-3V-hW0)
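
A minimal sketch of the new flow (file and column names hypothetical):

```nushell
# Opening yields a lazy frame when the reader supports it;
# materialize explicitly at the end:
polars open data.parquet
| polars filter ((polars col score) > 50)
| polars collect
```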

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-06 23:19:11 +00:00
97fc190cc5 allow raw string to be used inside subexpression, list, and closure (#12776)
# Description
Fixes: #12744

This PR moves the raw-string lexing logic into the `lex_item` function,
so we can use raw strings inside subexpressions, lists, and closures.
```nushell
> [r#'abc'#]
╭───┬─────╮
│ 0 │ abc │
╰───┴─────╯
> (r#'abc'#)
abc
> do {r#'aa'#}
aa
```

# Tests + Formatting
Done

# After Submitting
NaN
2024-05-06 15:53:58 -05:00
f9d4fa2c40 Add SOPs for dealing with adding deps/crates (#12771)
Time to expand our developer documentation, as some of this is still
tribal knowledge or things could otherwise slip through the cracks and
are costly to fix later.
2024-05-06 22:14:00 +02:00
460a1c8f87 Allow ls works inside dir with [] brackets (#12625)
# Description
Fixes: #12429

To fix the issue, we need to pass the input pattern itself to the
`glob_from` function, but currently on latest main, nushell passes the
expanded path of the input pattern to `glob_from`.
This causes globbing to fail if the expanded path includes `[]` brackets.

It's a pity that I have to duplicate the `nu_engine::glob_from` function
into `ls`, because `ls` might convert from `NuGlob::NotExpand` to
`NuGlob::Expand`; in that case, `nu_engine::glob_from` won't work if the
user wants to `ls` a directory whose name includes a tilde:
```
mkdir "~abc"
ls "~abc"
```
So I need to duplicate the `glob_from` function and pass the original
`expand_tilde` information.

# User-Facing Changes
Nan

# Tests + Formatting
Done

# After Submitting
Nan
2024-05-06 14:01:32 +08:00
e879d4ecaf ListStream touchup (#12524)
# Description

Does some misc changes to `ListStream`:
- Moves it into its own module/file separate from `RawStream`.
- `ListStream`s now have an associated `Span`.
- This required changes to `ListStreamInfo` in `nu-plugin`. Not sure if
this is a breaking change for the plugin protocol.
- Hides the internals of `ListStream` but also adds a few more methods.
- This includes two functions to more easily alter a stream (these take
a `ListStream` and return a `ListStream` instead of having to go through
the whole `into_pipeline_data(..)` route).
  -  `map`: takes a `FnMut(Value) -> Value`
  - `modify`: takes a function to modify the inner stream.
2024-05-05 16:00:59 +00:00
3143ded374 Tango migration (#12469)
# Description

This PR migrates the benchmark suite to Tango. It's different compared
to other frameworks because it requires two binaries running to do A/B
benchmarking; this is currently limited to Linux and macOS (Windows
requires a rustc nightly flag). By switching between the two suites it
can reduce noise and run the code "almost" concurrently. I have been in
contact with the maintainer, and based this on the dev branch, as it had
a newer API similar to criterion. Compared to Divan, this framework also
has a simple file-dump system if we want to generate graphs or do other
analysis later. I think overall this crate is very nice, and a lot
faster to compile and run than criterion, that's for sure.
2024-05-05 15:53:48 +00:00
ce3bc470ba improve NUON documentation (#12717)
# Description
this PR
- moves the documentation from `lib.rs` to `README.md` while still
including it in the lib file, so that both the [crates.io
page](https://crates.io/crates/nuon) and the
[documentation](https://docs.rs/nuon/latest/nuon/) show the top-level
doc
- mention that comments are allowed in NUON
- add a JSON-NUON example
- put back the formatting of NOTE blocks in the doc

# User-Facing Changes

# Tests + Formatting

# After Submitting
2024-05-05 15:34:22 +02:00
2f8e397365 Refactor flattening to reduce intermediate allocations (#12756)
# Description
Our current flattening code creates a bunch of intermediate `Vec`s for
each function call. These intermediate `Vec`s are then usually appended
to the current `output` `Vec`. By instead passing a mutable reference of
the `output` `Vec` to each flattening function, this `Vec` can be
reused/appended to directly thereby eliminating the need for
intermediate `Vec`s in most cases.
2024-05-05 10:43:20 +02:00
9181fca859 Update interprocess to 2.0.1 (#12769)
Fixes #12755

See https://github.com/kotauskas/interprocess/issues/63 and
https://github.com/kotauskas/interprocess/pull/62
2024-05-05 00:51:08 +02:00
0bfbe8c372 Specify the required minimum chrono version (#12766)
See #12765 and h/t to @FMOtalleb in
https://discord.com/channels/601130461678272522/855947301380947968/1236286905843454032

`Duration/TimeDelta::try_milliseconds` was added in `0.4.34`
https://github.com/chronotope/chrono/releases/tag/v0.4.34
2024-05-04 20:16:20 +02:00
349d02ced0 Pin base64 to the fixed patch version (#12762)
Followup to #12757

Always ensure that the `Cargo.toml` specifies the full minimum version
required to have the correct behavior or used features. Otherwise a
missing semver specifier is equal to `0` and could downgrade.
2024-05-04 17:16:40 +02:00
8eefb7313e Minimize future false positive typos (#12751)
# Description

Make typos config more strict: ignore false positives where they occur.

1. Ignore only files with typos
2. Add regexps with context
3. Ignore variable names only in Rust code
4. Ignore only 1 "identifier"
5. Check dot files

🎁 Extra bonus: fix typos!!
2024-05-04 15:00:44 +00:00
3ae6fe2114 Enable columns with spaces for into_sqlite by adding quotes to column names (#12759)
# Description
Spaces were causing an issue with into_sqlite when they appeared in
column names.

This is because the column names were not properly wrapped with
backticks that allow sqlite to properly interpret the column.

The issue has been addressed by adding backticks to the column names of
into sqlite. The output of the column names when using open is
unchanged, and the column names appear without backticks as expected.
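
A quick illustration (file and table names hypothetical):

```nushell
# A column name containing a space now round-trips correctly:
[["first name"]; ["Ada"] ["Grace"]] | into sqlite people.db -t people
```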

fixes #12700 

# User-Facing Changes
N/A

# Tests + Formatting
Formatting has been respected.

Repro steps from the issue have been done, and ran multiple times. New
values get added to the correct columns as expected.
2024-05-04 08:12:44 -05:00
1e71cd4777 Bump base64 to 0.22.1 (#12757)
# Description
Bumps `base64` to 0.22.1 which fixes the alphabet used for binhex
encoding and decoding. This required updating some test expected output.

Related to PR #12469 where `base64` was also bumped and ran into the
failing tests.

# User-Facing Changes
Bug fix, but still changes binhex encoding and decoding output.

# Tests + Formatting
Updated test expected output.
2024-05-04 15:56:16 +03:00
0d6fbdde4a Fix PWD cannot point to root paths (#12761)
PR https://github.com/nushell/nushell/pull/12603 made it so that PWD can
never contain a trailing slash. However, a root path (such as `/` or
`C:\`) technically counts as "having a trailing slash", so now `cd /`
doesn't work.

I feel dumb for missing such an obvious edge case. Let's just merge this
quickly before anyone else finds out...

EDIT: It appears I'm too late.
2024-05-04 13:05:54 +03:00
709b2479d9 Fix trailing slash in PWD set by cd (#12760)
# Description

Fixes #12758.

#12662 introduced a bug where calling `cd` with a path with a trailing
slash would cause `PWD` to be set to a path including a trailing slash,
which is not allowed. This adds a helper to `nu_path` to remove this,
and uses it in the `cd` command to clean it up before setting `PWD`.

# Tests + Formatting
I added some tests to make sure we don't regress on this in the future.

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-04 12:38:37 +03:00
35a0f7a369 fix: prevent relative directory traversal from crashing (#12438)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx

you can also mention related issues, PRs or discussions!
-->

- fixes #11922
- fixes #12203

# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.

Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
This is a rewrite of some parts of the recursive completion system. The
Rust `std::path` structures often ignore things like a trailing `.`
because, for a complete path, it implies the current directory. We are
replacing the use of some of these structs with Strings.

A side effect is that slashes get normalized on Windows. For example, if
we were to type `foo/bar/b`, it would complete it to `foo\bar\baz`,
because a backslash is the main separator on Windows.

# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

Relative paths are preserved. `..`s in the paths won't eagerly show
completions from the parent path. For example, `asd/foo/../b` will now
complete to `asd/foo/../bar` instead of `asd/bar`.

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-05-03 20:17:50 -05:00
a1287f7b3f add more tests to the polars plugin (#12719)
# Description

I added some more tests to our mighty `polars` ~~, yet I don't know how
to add expected results in some of them. I would like to ask for help.~~

~~My experiments are in the last commit: [polars:
experiments](f7e5e72019).
Without those experiments `cargo test` goes well.~~
 
UPD. I moved out my unsuccessful test experiments into a separate
[branch](https://github.com/maxim-uvarov/nushell/blob/polars-tests-broken2/).
So, this branch seems ready for a merge.

@ayax79, maybe you'll find time for me please? It's not urgent for sure.

P.S. I'm very new to git. Please feel free to give me any suggestions on
how I should use it better
2024-05-03 20:14:55 -05:00
406df7f208 Avoid taking unnecessary ownership of intermediates (#12740)
# Description

Judiciously try to avoid allocations/clone by changing the signature of
functions

- **Don't pass str by value unnecessarily if only read**
- **Don't require a vec in `Sandbox::with_files`**
- **Remove unnecessary string clone**
- **Fixup unnecessary borrow**
- **Use `&str` in shape color instead**
- **Vec -> Slice**
- **Elide string clone**
- **Elide `Path` clone**
- **Take &str to elide clone in tests**

# User-Facing Changes
None

# Tests + Formatting
This touches many tests purely in changing from owned to borrowed/static
data
2024-05-04 00:53:15 +00:00
e6f473695c Fix typo (#12752) 2024-05-03 16:14:13 -05:00
eff7f33086 Report errors that occur on file operations in ls (#12033)
Currently, errors just create empty entries inside the resulting
dataframes.

This changeset is meant to help debug #12004, though generally speaking
I do think it's worth having ways to make errors visible in this kind
of pipeline.

An example of what this looks like

<img width="954" alt="image"
src="https://github.com/nushell/nushell/assets/1408472/2c3c9167-2aaf-4f87-bab5-e8302d7a1170">
2024-05-03 10:12:43 -05:00
bdb6daa4b5 Migrate to a new PWD API (#12603)
This is the first PR towards migrating to a new `$env.PWD` API that
returns potentially un-canonicalized paths. Refer to PR #12515 for
motivations.

## New API: `EngineState::cwd()`

The goal of the new API is to cover both parse-time and runtime use
case, and avoid unintentional misuse. It takes an `Option<Stack>` as
argument, which if supplied, will search for `$env.PWD` on the stack in
additional to the engine state. I think with this design, there's less
confusion over parse-time and runtime environments. If you have access
to a stack, just supply it; otherwise supply `None`.

## Deprecation of other PWD-related APIs

Other APIs are re-implemented using `EngineState::cwd()` and properly
documented. They're marked deprecated, but their behavior is unchanged.
Unused APIs are deleted, and code that accesses `$env.PWD` directly
without using an API is rewritten.

Deprecated APIs:

* `EngineState::current_work_dir()`
* `StateWorkingSet::get_cwd()`
* `env::current_dir()`
* `env::current_dir_str()`
* `env::current_dir_const()`
* `env::current_dir_str_const()`

Other changes:

* `EngineState::get_cwd()` (deleted)
* `StateWorkingSet::list_env()` (deleted)
* `repl::do_run_cmd()` (rewritten with `env::current_dir_str()`)

## `cd` and `pwd` now use logical paths by default

This pulls the changes from PR #12515. It's currently somewhat broken
because using non-canonicalized paths exposed a bug in our path
normalization logic (Issue #12602). Once that is fixed, this should
work.

## Future plans

This PR needs some tests. Which test helpers should I use, and where
should I put those tests?

I noticed that unquoted paths are expanded within `eval_filepath()` and
`eval_directory()` before they even reach the `cd` command. This means
every path is expanded twice. Is this intended?

Once this PR lands, the plan is to review all usages of the deprecated
APIs and migrate them to `EngineState::cwd()`. In the meantime, these
usages are annotated with `#[allow(deprecated)]` to avoid breaking CI.

---------

Co-authored-by: Jakub Žádník <kubouch@gmail.com>
2024-05-03 14:33:09 +03:00
f32ecc641f Remove some macros (#12742)
# Description
Replaces some macros with regular functions or other code.
2024-05-03 10:35:37 +02:00
eff2f1b3b0 Update PLATFORM_SUPPORT regarding feature flags (#12741)
This was out of date after removing `extra` and moving towards the
polars plugin
2024-05-03 08:49:44 +02:00
72f3942c37 Upgrade to interprocess 2.0.0 (#12729)
# Description

This fixes #12724. NetBSD confirmed to work with this change.

The update also behaves a bit better in some ways - it automatically
unlinks and reclaims sockets on Unix, and doesn't try to flush/sync the
socket on Windows, so I was able to remove that platform-specific logic.

They also have a way to split the socket so I could just use one socket
now, but I haven't tried to do that yet. That would be more of a
breaking change but I think it's more straightforward.

# User-Facing Changes

- Hopefully more platforms work

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-02 22:31:33 -07:00
bc6d934fa1 add support for cell-paths to NUON (#12718)
# Description
_cell paths_ can be easily serialized back and forth to NUON with the
leading `$.` syntax.

# User-Facing Changes
```nushell
$.foo.bar.0 | to nuon
```
and
```nushell
"$.foo.bar.0" | from nuon
```
are now possible

# Tests + Formatting
a new `cell_path` test has been added to `nuon`

# After Submitting
2024-05-03 09:25:19 +08:00
944ebac1c2 Eliminate dead code in nu-explore (#12735)
# Description
Nightly clippy found some unused fields leading me down a rabbit hole of
dead code hidden behind `pub`

Generally removing any already dead code or premature configurability
that is not exposed to the user.

# User-Facing Changes

None in effect.

Removed some options from the `$env.config.explore.hex-dump` record that
were only read into a struct but never used and also not validated.
2024-05-03 08:36:58 +08:00
847646e44e Remove lazy records (#12682)
# Description
Removes lazy records from the language, following from the reasons
outlined in #12622. Namely, this should make semantics more clear and
will eliminate concerns regarding maintainability.

# User-Facing Changes
- Breaking change: `lazy make` is removed.
- Breaking change: `describe --collect-lazyrecords` flag is removed.
- `sys` and `debug info` now return regular records.

# After Submitting
- Update nushell book if necessary.
- Explore new `sys` and `debug info` APIs to prevent them from taking
too long (e.g., subcommands or taking an optional column/cell-path
argument).
2024-05-03 08:36:10 +08:00
ad6deadf24 Flush on every plugin Data message (#12728)
# Description

This helps to ensure data produced on a stream is immediately available
to the consumer of the stream. The BufWriter introduced for performance
reasons in 0.93 exposed the behavior that data messages wouldn't make it
to the other side until they filled the buffer in @cablehead's
[`nu_plugin_from_sse`](https://github.com/cablehead/nu_plugin_from_sse).

I had originally not flushed on every `Data` message because I figured
that it isn't really critical that the other side sees those messages
immediately, since they're not used for control and they are flushed
when waiting for acknowledgement or when the buffer is too full anyway.

Increasing the amount of data that can be sent with a single underlying
write increases performance, but this interferes with some plugins that
want to use streams in a more real-time way. In the future I would like
to make this configurable, maybe even per-command, so that a command can
decide what the priority is. But for now I think this is reasonable.

In the worst case, this decreases performance by about 40%, when sending
very small values (just numbers). But for larger values, this PR
actually increases performance by about 20%, because I've increased the
buffer size about 2x to 16,384 bytes. The previous value of 8,192 bytes
was too small to fit a full buffer coming from an external command, so
doubling it makes sense, and now a write of a buffer from an external
command can be done in exactly one write call, which I think makes
sense. I'm doing this at the same time because flushing each data
message would make it very likely that each individual data message from
an external stream would require exactly two writes rather than
approximately one (amortized).

Again, hopefully the tradeoff isn't too bad, and if it is I'll just make
it configurable.

# User-Facing Changes

- Performance of plugin streams will be a bit different
- Plugins that expect to send streams in real-time will work again

# Tests + Formatting

- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-05-02 23:51:16 +00:00
be6137d136 Fix clippy::wrong_self_convention in polars plugin (#12737)
Expected `into_` for `fn(self) -> T`
2024-05-02 19:31:51 +02:00
b88d8726d0 Rework for new clippy lints (#12736)
- **Clippy lint `assigning_clones`**
- **Clippy lint `legacy_numeric_constants`**
- **`clippy::float_equality_without_abs`**
- **`nu-table`: clippy::zero_repeat_side_effects**

---------

Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-02 19:29:03 +02:00
0805f1fd90 overhaul shell_integration to enable individual control over ansi escape sequences (#12629)
# Description

This PR overhauls the shell_integration system by allowing individual
control over which ansi escape sequences are used. As we continue to
broaden our support for more ansi escape sequences, we can't really have
an all-or-nothing strategy. Some ansi escapes cause problems in certain
operating systems or terminals. We should allow the user to choose which
escapes they want.

TODO:
* Gather feedback
* Should osc7, osc9_9 and osc633p be mutually exclusive?
* Is the naming convention for these settings too nerdy osc2, osc7, etc?

closes #11301

# User-Facing Changes
shell_integration is no longer a boolean value. This is what is
supported in the default_config.nu
```nushell
  shell_integration: {
    # osc2 abbreviates the path if in the home_dir, sets the tab/window title, shows the running command in the tab/window title
    osc2: true
    # osc7 is a way to communicate the path to the terminal, this is helpful for spawning new tabs in the same directory
    osc7: true
    # osc8 is also implemented as the deprecated setting ls.show_clickable_links, it shows clickable links in ls output if your terminal supports it
    osc8: true
    # osc9_9 is from ConEmu and is starting to get wider support. It's similar to osc7 in that it communicates the path to the terminal
    osc9_9: false
    # osc133 is several escapes invented by Final Term which include the supported ones below.
    # 133;A - Mark prompt start
    # 133;B - Mark prompt end
    # 133;C - Mark pre-execution
    # 133;D;exit - Mark execution finished with exit code
    # This is used to enable terminals to know where the prompt is, the command is, where the command finishes, and where the output of the command is
    osc133: true
    # osc633 is closely related to osc133 but only exists in visual studio code (vscode) and supports their shell integration features
    # 633;A - Mark prompt start
    # 633;B - Mark prompt end
    # 633;C - Mark pre-execution
    # 633;D;exit - Mark execution finished with exit code
    # 633;E - NOT IMPLEMENTED - Explicitly set the command line with an optional nonce
    # 633;P;Cwd=<path> - Mark the current working directory and communicate it to the terminal
    # and also helps with the run recent menu in vscode
    osc633: true
    # reset_application_mode is escape \x1b[?1l and was added to help ssh work better
    reset_application_mode: true
  }
```

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-05-02 09:56:50 -04:00
8ed0d84d6a add raw-string literal support (#9956)
# Description

This PR adds raw string support by using `r#` at the beginning of single
quoted strings and `#` at the end.

Notice that escapes are not processed, even within single quotes;
parentheses don't mean anything; $variables don't mean anything. It's
just a string.
```nushell
❯ echo r#'one\ntwo (blah) ($var)'#
one\ntwo (blah) ($var)
```
Notice how they work without `echo` or `print` and how they work without
carriage returns.
```nushell
❯ r#'adsfa'#
adsfa
❯ r##"asdfa'@qpejq'##
asdfa'@qpejq
❯ r#'asdfasdfasf
∙ foqwejfqo@'23rfjqf'#
```
They also have a special configurable color in the repl. (use single
quotes though)

![image](https://github.com/nushell/nushell/assets/343840/8780e21d-de4c-45b3-9880-2425f5fe10ef)

They should work like rust raw literals and allow `r##`, `r###`,
`r####`, etc, to help with having one or many `#`'s in the middle of
your raw-string.

They should work with `let` as well.

```nushell
r#'some\nraw\nstring'# | str upcase
```

closes https://github.com/nushell/nushell/issues/5091
# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->

# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.

Make sure you've run and fixed any issues with these commands:

- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A
clippy::needless_collect -A clippy::result_large_err` to check that
you're using the standard code style
- `cargo test --workspace` to check that all tests pass
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library

> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->

# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->

---------

Co-authored-by: WindSoilder <WindSoilder@outlook.com>
Co-authored-by: Ian Manske <ian.manske@pm.me>
2024-05-02 09:36:37 -04:00
b5741ef14b Remove accidentally committed file (#12734)
Remove a small (20kb) SQLite database that was accidentally added as
part of https://github.com/nushell/nushell/pull/12692
2024-05-02 06:14:36 -07:00
cc91e36cf8 Upgrade Nu to v0.93.0 for nightly and release workflow (#12721) 2024-05-02 11:11:53 +08:00
0a01e7c33e Bump rmp-serde from 1.2.0 to 1.3.0 (#12711)
Bumps [rmp-serde](https://github.com/3Hren/msgpack-rust) from 1.2.0 to
1.3.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="52de9be5a3"><code>52de9be</code></a>
Bump</li>
<li><a
href="c1b19aa3a8"><code>c1b19aa</code></a>
Update README</li>
<li><a
href="454e0c5e18"><code>454e0c5</code></a>
Smaller integer for depth</li>
<li><a
href="143897c5ab"><code>143897c</code></a>
Update README</li>
<li><a
href="7ddcd2ea4a"><code>7ddcd2e</code></a>
Decoder/inspector example</li>
<li><a
href="f9f02d8397"><code>f9f02d8</code></a>
Simplify Marker match by reusing discriminant</li>
<li><a
href="926682d1d6"><code>926682d</code></a>
Smaller write len</li>
<li><a
href="06414f584d"><code>06414f5</code></a>
Handle OOM when writing</li>
<li><a
href="80e00b3187"><code>80e00b3</code></a>
Bump</li>
<li><a
href="6dd81ee985"><code>6dd81ee</code></a>
Hack to use bytes</li>
<li>Additional commits viewable in <a
href="https://github.com/3Hren/msgpack-rust/compare/rmp-serde/v1.2.0...rmp-serde/v1.3.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rmp-serde&package-manager=cargo&previous-version=1.2.0&new-version=1.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-01 17:26:00 -07:00
ac882d7013 Add toolkit release-pkg windows for Windows release pkg builds (#12727)
# Description

We have often had issues with winget during release, and have to fix the
Windows installers and then create new packages.

This runs `release-pkg.nu` for all eight Windows packages we release:
std and full, aarch64 and x86_64, and zip and msi variants.

It requires the cross compiling toolchain for MSVC to be installed,
since Rust generally needs that to build things on Windows. Use the
Visual Studio Installer to do so.

If there's ever a need, this can be extended for other platforms too.
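
Illustrative usage, per the PR title:

```nushell
use toolkit.nu
toolkit release-pkg windows
```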
2024-05-01 16:51:44 -07:00
3d340657b5 explore: adopt anyhow, support CustomValue, remove help system (#12692)
This PR:
1. Adds basic support for `CustomValue` to `explore`. Previously `open
foo.db | explore` didn't really work; now we "materialize" the whole
database to a `Value` before loading it
2. Adopts `anyhow` for error handling in `explore`. Previously we were
kind of rolling our own version of `anyhow` by shoving all errors into a
`std::io::Error`; I think this is much nicer. This was necessary because
as part of 1), collecting input is now fallible...
3. Removes a lot of `explore`'s fancy command help system.
- Previously each command (`:help`, `:try`, etc.) had a sophisticated
help system with examples etc... but this was not very visible to users.
You had to know to run `:help :try` or view a list of commands with
`:help :`
- As discussed previously, we eventually want to move to a less modal
approach for `explore`, without the Vim-like commands. And so I don't
think it's worth keeping this command help system around (it's
intertwined with other stuff, and making these changes would have been
harder if we kept it).
4. Rename the `--reverse` flag to `--tail`. The flag scrolls to the end
of the data, which IMO is described better by "tail"
5. Does some renaming+commenting to clear up things I found difficult to
understand when navigating the `explore` code


I initially thought 1) would be just a few lines, and then this PR blew
up into much more extensive changes 😅


## Before
The whole database was being displayed as a single Nuon/JSON line 🤔 

![image](https://github.com/nushell/nushell/assets/26268125/6383f43b-fdff-48b4-9604-398438ad1499)


## After
The database gets displayed like a record

![image](https://github.com/nushell/nushell/assets/26268125/2f00ed7b-a3c4-47f4-a08c-98d07efc7bb4)


## Future work

It is sort of annoying that we have to load a whole SQLite database into
memory to make this work; it will be impractical for large databases.
I'd like to explore improvements to `CustomValue` that can make this
work more efficiently.
2024-05-01 17:34:37 -05:00
bc18cc12d5 change wix install method from perMachine to perUser (#12720)
# Description

This PR:
* Updates to the latest cargo-wix
* Changes install method from perMachine to perUser
* Updates HKCU Path vs HKLM Path
* Updates Windows Terminal Fragment Json to be compatible with [their
spec](https://learn.microsoft.com/en-us/windows/terminal/json-fragment-extensions).
* Updates License year from 2022 to 2024

The result of these changes is that our Windows installer no longer
prompts with a UAC dialog and installs the binaries into
`%LocalAppData%\Programs\nu\bin`.

All of this is an attempt to make WinGet releases less error prone.

2024-05-01 17:31:16 -05:00
2970d48d41 Make bytes build accept integer values as individual bytes (#12685)
# Description
This creates an option for building binary data from byte integers.
Previously I think you could only do this by formatting the integers to
hex and using `decode hex`.

One potentially confusing thing is that this is different from the `into
binary` behavior. But since this doesn't support any of the other `into
binary` behaviors, it might be okay.
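
A minimal sketch of the new form (byte values arbitrary):

```nushell
# each integer argument becomes one byte of the output binary
bytes build 0 1 127 255
```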

# User-Facing Changes
- `bytes build` accepts single byte arguments as integers

# Tests + Formatting
Example added.

# After Submitting
- [ ] release notes
2024-05-01 17:29:33 -05:00
f184a77fe1 Path expansion no longer removes trailing slashes (#12662)
This PR changes `nu_path::expand_path_with()` to no longer remove
trailing slashes. It also fixes bugs in the current implementation due
to ineffective tests (Fixes #12602).
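
A sketch of the effect, assuming the change surfaces through `path expand` (path hypothetical):

```nushell
# the trailing slash is now preserved in the expanded path
'/tmp/dir/' | path expand
```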
2024-05-01 17:28:54 -05:00
b22d131279 Prevent each from swallowing errors when eval_block returns a ListStream (#12412)

# Description

Previously, nested errors were not detected and shown. This PR fixes
that.

Resolves #10176:
```
~/CodingProjects/nushell> [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } }                        05/04/2024 21:34:08
Error: nu::shell::eval_block_with_input

  × Eval block failed with pipeline input
   ╭─[entry #1:1:3]
 1 │ [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } }
   ·   ┬
   ·   ╰── source value
   ╰────

Error:   × oh noes
   ╭─[entry #1:1:36]
 1 │ [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } }
   ·                                    ─────┬────
   ·                                         ╰── originates from here
   ╰────
```

Resolves #11224:
```
~/CodingProjects/nushell> [0] | each { |_|                                                                          05/04/2024 21:35:40
:::     [0] | each { |_|
:::         non-existent-command
:::     }
::: }
Error: nu::shell::eval_block_with_input

  × Eval block failed with pipeline input
   ╭─[entry #1:2:6]
 1 │ [0] | each { |_|
 2 │     [0] | each { |_|
   ·      ┬
   ·      ╰── source value
 3 │         non-existent-command
   ╰────

Error: nu::shell::external_command

  × External command failed
   ╭─[entry #1:3:9]
 2 │     [0] | each { |_|
 3 │         non-existent-command
   ·         ──────────┬─────────
   ·                   ╰── executable was not found
 4 │     }
   ╰────
  help: No such file or directory (os error 2)
```

2024-05-01 17:24:54 -05:00
52d99cc60c Change environment variables to be case-preserving (#12701)
This PR changes `$env` to be **case-preserving** instead of
case-sensitive. That is, it preserves the case of an environment
variable when it is first assigned, but subsequent retrievals and
updates ignore the case.

Notably, both `$env.PATH` and `$env.Path` can now be used to read or set
the environment variable, but child processes will always see the
correct case based on the platform.
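
A sketch of the case-preserving behavior (variable name arbitrary):

```nushell
$env.FOO = "bar"   # first assignment records the case as FOO
$env.foo           # later reads ignore case, returning "bar"
$env.Foo = "baz"   # updates the same variable; the stored case stays FOO
```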

Fixes #11268.

---

This feature was surprisingly simple to implement, because most of the
infrastructure to support case-insensitive cell path access already
exists. The `get` command extracts data using a cell path in a
case-insensitive way (!), but accepts a `--sensitive` flag. (I think
this should be flipped around?)
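
For reference, a sketch of the existing `get` behavior described above:

```nushell
{FOO: 1} | get foo               # case-insensitive by default => 1
{FOO: 1} | get --sensitive foo   # errors: column not found
```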
2024-05-01 17:22:34 -05:00
21ebdfe8d7 Bump version to 0.93.1 (#12710)
# Description

Next patch/dev release, `0.93.1`
2024-05-01 17:19:20 -05:00
cb6d495e02 Bump actions/checkout from 4.1.3 to 4.1.4 (#12712)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.3
to 4.1.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v4.1.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1692">actions/checkout#1692</a></li>
<li>Add dependabot config by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1688">actions/checkout#1688</a></li>
<li>Bump word-wrap from 1.2.3 to 1.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1643">actions/checkout#1643</a></li>
<li>Bump the minor-actions-dependencies group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1693">actions/checkout#1693</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.1.3...v4.1.4">https://github.com/actions/checkout/compare/v4.1.3...v4.1.4</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h2>v4.1.4</h2>
<ul>
<li>Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1692">actions/checkout#1692</a></li>
<li>Add dependabot config by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1688">actions/checkout#1688</a></li>
<li>Bump the minor-actions-dependencies group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1693">actions/checkout#1693</a></li>
<li>Bump word-wrap from 1.2.3 to 1.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1643">actions/checkout#1643</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0ad4b8fada"><code>0ad4b8f</code></a>
Prep Release v4.1.4 (<a
href="https://redirect.github.com/actions/checkout/issues/1704">#1704</a>)</li>
<li><a
href="43045ae669"><code>43045ae</code></a>
Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> (<a
href="https://redirect.github.com/actions/checkout/issues/1692">#1692</a>)</li>
<li><a
href="37b082107b"><code>37b0821</code></a>
Bump the minor-actions-dependencies group with 2 updates (<a
href="https://redirect.github.com/actions/checkout/issues/1693">#1693</a>)</li>
<li><a
href="9839dc14a0"><code>9839dc1</code></a>
Add dependabot config (<a
href="https://redirect.github.com/actions/checkout/issues/1688">#1688</a>)</li>
<li><a
href="9b4c13b0bf"><code>9b4c13b</code></a>
Bump word-wrap from 1.2.3 to 1.2.5 (<a
href="https://redirect.github.com/actions/checkout/issues/1643">#1643</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4.1.3...v4.1.4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4.1.3&new-version=4.1.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-01 11:13:32 +08:00
3c022e334f Fix Windows Terminal profile installation (#12714)
# Description

The command used to edit the Windows Terminal profile was failing due
to the `open file | save -f file` metadata safeguard introduced in this
version. With this change, the installer works again.
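
For context, a sketch of the pattern the safeguard rejects (file and field names hypothetical):

```nushell
# reading and re-saving the same file in one pipeline now errors;
# collecting the contents first sidesteps the safeguard
let fragment = (open fragment.json)
$fragment | upsert name "Nushell" | save -f fragment.json
```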
2024-04-30 17:36:40 -07:00
734a3b5f2c Bump crate-ci/typos from 1.20.10 to 1.21.0 (#12713) 2024-05-01 00:32:01 +00:00
872 changed files with 25050 additions and 37866 deletions


@ -18,6 +18,14 @@ updates:
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-patch"]
groups:
# Only update polars as a whole as there are many subcrates that need to
# be updated at once. We explicitly depend on some of them, so batch their
# updates to not take up dependabot PR slots with dysfunctional PRs
polars:
patterns:
- "polars"
- "polars-*"
- package-ecosystem: "github-actions"
directory: "/"
schedule:


@ -26,7 +26,7 @@ Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make sure to [enable developer mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path crates/nu-std"` to run the tests for the standard library
- `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the tests for the standard library
> **Note**
> from `nushell` you can also use the `toolkit` as follows


@ -19,7 +19,7 @@ jobs:
# Prevent sudden announcement of a new advisory from failing ci:
continue-on-error: true
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- uses: rustsec/audit-check@v1.4.1
with:
token: ${{ secrets.GITHUB_TOKEN }}


@ -29,76 +29,50 @@ jobs:
# instead of 14 GB) which is too little for us right now. Revisit when `dfr` commands are
# removed and we're only building the `polars` plugin instead
platform: [windows-latest, macos-13, ubuntu-20.04]
feature: [default, dataframe]
include:
- feature: default
flags: ""
- feature: dataframe
flags: "--features=dataframe"
exclude:
- platform: windows-latest
feature: dataframe
- platform: macos-13
feature: dataframe
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
with:
rustflags: ""
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
- name: cargo fmt
run: cargo fmt --all -- --check
# If changing these settings also change toolkit.nu
- name: Clippy
run: cargo clippy --workspace ${{ matrix.flags }} --exclude nu_plugin_* -- $CLIPPY_OPTIONS
run: cargo clippy --workspace --exclude nu_plugin_* -- $CLIPPY_OPTIONS
# In tests we don't have to deny unwrap
- name: Clippy of tests
run: cargo clippy --tests --workspace ${{ matrix.flags }} --exclude nu_plugin_* -- -D warnings
run: cargo clippy --tests --workspace --exclude nu_plugin_* -- -D warnings
- name: Clippy of benchmarks
run: cargo clippy --benches --workspace ${{ matrix.flags }} --exclude nu_plugin_* -- -D warnings
run: cargo clippy --benches --workspace --exclude nu_plugin_* -- -D warnings
tests:
strategy:
fail-fast: true
matrix:
platform: [windows-latest, macos-latest, ubuntu-20.04]
feature: [default, dataframe]
include:
# linux CI cannot handle clipboard feature
- default-flags: ""
- platform: ubuntu-20.04
# linux CI cannot handle clipboard feature
- platform: ubuntu-20.04
default-flags: "--no-default-features --features=default-no-clipboard"
- feature: default
flags: ""
- feature: dataframe
flags: "--features=dataframe"
exclude:
- platform: windows-latest
feature: dataframe
- platform: macos-latest
feature: dataframe
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
with:
rustflags: ""
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
- name: Tests
run: cargo test --workspace --profile ci --exclude nu_plugin_* ${{ matrix.default-flags }} ${{ matrix.flags }}
run: cargo test --workspace --profile ci --exclude nu_plugin_* ${{ matrix.default-flags }}
- name: Check for clean repo
shell: bash
run: |
@ -121,12 +95,10 @@ jobs:
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
with:
rustflags: ""
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
- name: Install Nushell
run: cargo install --path . --locked --no-default-features
@ -168,18 +140,16 @@ jobs:
# Using macOS 13 runner because 14 is based on the M1 and has half as much RAM (7 GB,
# instead of 14 GB) which is too little for us right now.
#
# Failure occuring with clippy for rust 1.77.2
# Failure occurring with clippy for rust 1.77.2
platform: [windows-latest, macos-13, ubuntu-20.04]
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
with:
rustflags: ""
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
- name: Clippy
run: cargo clippy --package nu_plugin_* -- $CLIPPY_OPTIONS


@ -27,7 +27,7 @@ jobs:
# if: github.repository == 'nushell/nightly'
steps:
- name: Checkout
uses: actions/checkout@v4.1.3
uses: actions/checkout@v4.1.7
if: github.repository == 'nushell/nightly'
with:
ref: main
@ -36,10 +36,10 @@ jobs:
token: ${{ secrets.WORKFLOW_TOKEN }}
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
uses: hustcer/setup-nu@v3.11
if: github.repository == 'nushell/nightly'
with:
version: 0.91.0
version: 0.93.0
# Synchronize the main branch of nightly repo with the main branch of Nushell official repo
- name: Prepare for Nightly Release
@ -84,46 +84,35 @@ jobs:
include:
- target: aarch64-apple-darwin
os: macos-latest
target_rustflags: ''
- target: x86_64-apple-darwin
os: macos-latest
target_rustflags: ''
- target: x86_64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: ''
- target: x86_64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: ''
- target: aarch64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: ''
- target: aarch64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: ''
- target: x86_64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: ''
- target: x86_64-unknown-linux-musl
os: ubuntu-20.04
target_rustflags: ''
- target: aarch64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: ''
- target: armv7-unknown-linux-gnueabihf
os: ubuntu-20.04
target_rustflags: ''
- target: riscv64gc-unknown-linux-gnu
os: ubuntu-latest
target_rustflags: ''
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
with:
ref: main
fetch-depth: 0
@ -133,26 +122,24 @@ jobs:
echo "targets = ['${{matrix.target}}']" >> rust-toolchain.toml
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
with:
rustflags: ''
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
uses: hustcer/setup-nu@v3.11
with:
version: 0.91.0
version: 0.93.0
- name: Release Nu Binary
id: nu
run: nu .github/workflows/release-pkg.nu
env:
RELEASE_TYPE: standard
OS: ${{ matrix.os }}
REF: ${{ github.ref }}
TARGET: ${{ matrix.target }}
_EXTRA_: ${{ matrix.extra }}
TARGET_RUSTFLAGS: ${{ matrix.target_rustflags }}
- name: Create an Issue for Release Failure
if: ${{ failure() }}
@ -174,7 +161,7 @@ jobs:
# REF: https://github.com/marketplace/actions/gh-release
# Create a release only in nushell/nightly repo
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.4
uses: softprops/action-gh-release@v2.0.5
if: ${{ startsWith(github.repository, 'nushell/nightly') }}
with:
prerelease: true
@ -184,122 +171,6 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
full:
name: Full
needs: prepare
strategy:
fail-fast: false
matrix:
target:
- aarch64-apple-darwin
- x86_64-apple-darwin
- x86_64-pc-windows-msvc
- aarch64-pc-windows-msvc
- x86_64-unknown-linux-gnu
- x86_64-unknown-linux-musl
- aarch64-unknown-linux-gnu
extra: ['bin']
include:
- target: aarch64-apple-darwin
os: macos-latest
target_rustflags: '--features=dataframe'
- target: x86_64-apple-darwin
os: macos-latest
target_rustflags: '--features=dataframe'
- target: x86_64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: '--features=dataframe'
- target: x86_64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: '--features=dataframe'
- target: aarch64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: '--features=dataframe'
- target: aarch64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: '--features=dataframe'
- target: x86_64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
- target: x86_64-unknown-linux-musl
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
- target: aarch64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v4.1.3
with:
ref: main
fetch-depth: 0
- name: Update Rust Toolchain Target
run: |
echo "targets = ['${{matrix.target}}']" >> rust-toolchain.toml
- name: Setup Rust toolchain and cache
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
with:
rustflags: ''
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
with:
version: 0.91.0
- name: Release Nu Binary
id: nu
run: nu .github/workflows/release-pkg.nu
env:
RELEASE_TYPE: full
OS: ${{ matrix.os }}
REF: ${{ github.ref }}
TARGET: ${{ matrix.target }}
_EXTRA_: ${{ matrix.extra }}
TARGET_RUSTFLAGS: ${{ matrix.target_rustflags }}
- name: Create an Issue for Release Failure
if: ${{ failure() }}
uses: JasonEtco/create-an-issue@v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
update_existing: true
search_existing: open
filename: .github/AUTO_ISSUE_TEMPLATE/nightly-build-fail.md
- name: Set Outputs of Short SHA
id: vars
run: |
echo "date=$(date -u +'%Y-%m-%d')" >> $GITHUB_OUTPUT
sha_short=$(git rev-parse --short HEAD)
echo "sha_short=${sha_short:0:7}" >> $GITHUB_OUTPUT
# REF: https://github.com/marketplace/actions/gh-release
# Create a release only in nushell/nightly repo
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.4
if: ${{ startsWith(github.repository, 'nushell/nightly') }}
with:
draft: false
prerelease: true
name: Nu-nightly-${{ steps.vars.outputs.date }}-${{ steps.vars.outputs.sha_short }}
tag_name: nightly-${{ steps.vars.outputs.sha_short }}
body: |
This is a NIGHTLY build of Nushell.
It is NOT recommended for production use.
files: ${{ steps.nu.outputs.archive }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
cleanup:
name: Cleanup
# Should only run in nushell/nightly repo
@ -310,14 +181,14 @@ jobs:
- name: Waiting for Release
run: sleep 1800
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
with:
ref: main
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
uses: hustcer/setup-nu@v3.11
with:
version: 0.91.0
version: 0.93.0
# Keep the last a few releases
- name: Delete Older Releases


@ -9,7 +9,6 @@
# Instructions for manually creating an MSI for Winget Releases when they fail
# Added 2022-11-29 when Windows packaging wouldn't work
# Updated again on 2023-02-23 because msis are still failing validation
# Update on 2023-10-18 to use RELEASE_TYPE env var to determine if full or not
# To run this manual for windows here are the steps I take
# checkout the release you want to publish
# 1. git checkout 0.86.0
@ -17,28 +16,26 @@
# 2. $env:CARGO_TARGET_DIR = ""
# 2. hide-env CARGO_TARGET_DIR
# 3. $env.TARGET = 'x86_64-pc-windows-msvc'
# 4. $env.TARGET_RUSTFLAGS = ''
# 5. $env.GITHUB_WORKSPACE = 'D:\nushell'
# 6. $env.GITHUB_OUTPUT = 'D:\nushell\output\out.txt'
# 7. $env.OS = 'windows-latest'
# 8. $env.RELEASE_TYPE = '' # There is full and '' for normal releases
# 4. $env.GITHUB_WORKSPACE = 'D:\nushell'
# 5. $env.GITHUB_OUTPUT = 'D:\nushell\output\out.txt'
# 6. $env.OS = 'windows-latest'
# make sure 7z.exe is in your path https://www.7-zip.org/download.html
# 9. $env.Path = ($env.Path | append 'c:\apps\7-zip')
# 7. $env.Path = ($env.Path | append 'c:\apps\7-zip')
# make sure aria2c.exe is in your path https://github.com/aria2/aria2
# 10. $env.Path = ($env.Path | append 'c:\path\to\aria2c')
# 8. $env.Path = ($env.Path | append 'c:\path\to\aria2c')
# make sure you have the wixtools installed https://wixtoolset.org/
# 11. $env.Path = ($env.Path | append 'C:\Users\dschroeder\AppData\Local\tauri\WixTools')
# 9. $env.Path = ($env.Path | append 'C:\Users\dschroeder\AppData\Local\tauri\WixTools')
# You need to run the release-pkg twice. The first pass, with _EXTRA_ as 'bin', makes the output
# folder and builds everything. The second pass, that generates the msi file, with _EXTRA_ as 'msi'
# 12. $env._EXTRA_ = 'bin'
# 13. source .github\workflows\release-pkg.nu
# 14. cd ..
# 15. $env._EXTRA_ = 'msi'
# 16. source .github\workflows\release-pkg.nu
# 10. $env._EXTRA_ = 'bin'
# 11. source .github\workflows\release-pkg.nu
# 12. cd ..
# 13. $env._EXTRA_ = 'msi'
# 14. source .github\workflows\release-pkg.nu
# After msi is generated, you have to update winget-pkgs repo, you'll need to patch the release
# by deleting the existing msi and uploading this new msi. Then you'll need to update the hash
# on the winget-pkgs PR. To generate the hash, run this command
# 17. open target\wix\nu-0.74.0-x86_64-pc-windows-msvc.msi | hash sha256
# 15. open target\wix\nu-0.74.0-x86_64-pc-windows-msvc.msi | hash sha256
# Then, just take the output and put it in the winget-pkgs PR for the hash on the msi
@ -48,31 +45,15 @@ let os = $env.OS
let target = $env.TARGET
# Repo source dir like `/home/runner/work/nushell/nushell`
let src = $env.GITHUB_WORKSPACE
let flags = $env.TARGET_RUSTFLAGS
let dist = $'($env.GITHUB_WORKSPACE)/output'
let version = (open Cargo.toml | get package.version)
print $'Debugging info:'
print { version: $version, bin: $bin, os: $os, releaseType: $env.RELEASE_TYPE, target: $target, src: $src, flags: $flags, dist: $dist }; hr-line -b
# Rename the full release name so that we won't break the existing scripts for standard release downloading, such as:
# curl -s https://api.github.com/repos/chmln/sd/releases/latest | grep browser_download_url | cut -d '"' -f 4 | grep x86_64-unknown-linux-musl
const FULL_RLS_NAMING = {
x86_64-apple-darwin: 'x86_64-darwin-full',
aarch64-apple-darwin: 'aarch64-darwin-full',
x86_64-unknown-linux-gnu: 'x86_64-linux-gnu-full',
x86_64-pc-windows-msvc: 'x86_64-windows-msvc-full',
x86_64-unknown-linux-musl: 'x86_64-linux-musl-full',
aarch64-unknown-linux-gnu: 'aarch64-linux-gnu-full',
aarch64-pc-windows-msvc: 'aarch64-windows-msvc-full',
riscv64gc-unknown-linux-gnu: 'riscv64-linux-gnu-full',
armv7-unknown-linux-gnueabihf: 'armv7-linux-gnueabihf-full',
}
print { version: $version, bin: $bin, os: $os, target: $target, src: $src, dist: $dist }; hr-line -b
# $env
let USE_UBUNTU = $os starts-with ubuntu
let FULL_NAME = $FULL_RLS_NAMING | get -i $target | default 'unknown-target-full'
print $'(char nl)Packaging ($bin) v($version) for ($target) in ($src)...'; hr-line -b
if not ('Cargo.lock' | path exists) { cargo generate-lockfile }
@ -91,23 +72,23 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
'aarch64-unknown-linux-gnu' => {
sudo apt-get install gcc-aarch64-linux-gnu -y
$env.CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER = 'aarch64-linux-gnu-gcc'
cargo-build-nu $flags
cargo-build-nu
}
'riscv64gc-unknown-linux-gnu' => {
sudo apt-get install gcc-riscv64-linux-gnu -y
$env.CARGO_TARGET_RISCV64GC_UNKNOWN_LINUX_GNU_LINKER = 'riscv64-linux-gnu-gcc'
cargo-build-nu $flags
cargo-build-nu
}
'armv7-unknown-linux-gnueabihf' => {
sudo apt-get install pkg-config gcc-arm-linux-gnueabihf -y
$env.CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER = 'arm-linux-gnueabihf-gcc'
cargo-build-nu $flags
cargo-build-nu
}
_ => {
# musl-tools to fix 'Failed to find tool. Is `musl-gcc` installed?'
# Actually just for x86_64-unknown-linux-musl target
if $USE_UBUNTU { sudo apt install musl-tools -y }
cargo-build-nu $flags
cargo-build-nu
}
}
}
@ -116,7 +97,7 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
# Build for Windows without static-link-openssl feature
# ----------------------------------------------------------------------------
if $os in ['windows-latest'] {
cargo-build-nu $flags
cargo-build-nu
}
# ----------------------------------------------------------------------------
@ -162,7 +143,7 @@ cd $dist; print $'(char nl)Creating release archive...'; hr-line
if $os in ['macos-latest'] or $USE_UBUNTU {
let files = (ls | get name)
let dest = if $env.RELEASE_TYPE == 'full' { $'($bin)-($version)-($FULL_NAME)' } else { $'($bin)-($version)-($target)' }
let dest = $'($bin)-($version)-($target)'
let archive = $'($dist)/($dest).tar.gz'
mkdir $dest
@ -177,7 +158,7 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
} else if $os == 'windows-latest' {
let releaseStem = if $env.RELEASE_TYPE == 'full' { $'($bin)-($version)-($FULL_NAME)' } else { $'($bin)-($version)-($target)' }
let releaseStem = $'($bin)-($version)-($target)'
print $'(char nl)Download less related stuffs...'; hr-line
aria2c https://github.com/jftuga/less-Windows/releases/download/less-v608/less.exe -o less.exe
@ -192,7 +173,7 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
# Wix need the binaries be stored in target/release/
cp -r ($'($dist)/*' | into glob) target/release/
ls target/release/* | print
cargo install cargo-wix --version 0.3.4
cargo install cargo-wix --version 0.3.8
cargo wix --no-build --nocapture --package nu --output $wixRelease
# Workaround for https://github.com/softprops/action-gh-release/issues/280
let archive = ($wixRelease | str replace --all '\' '/')
@ -214,19 +195,11 @@ if $os in ['macos-latest'] or $USE_UBUNTU {
}
}
def 'cargo-build-nu' [ options: string ] {
if ($options | str trim | is-empty) {
if $os == 'windows-latest' {
cargo build --release --all --target $target
} else {
cargo build --release --all --target $target --features=static-link-openssl
}
def 'cargo-build-nu' [] {
if $os == 'windows-latest' {
cargo build --release --all --target $target
} else {
if $os == 'windows-latest' {
cargo build --release --all --target $target $options
} else {
cargo build --release --all --target $target --features=static-link-openssl $options
}
cargo build --release --all --target $target --features=static-link-openssl
}
}


@ -34,167 +34,64 @@ jobs:
include:
- target: aarch64-apple-darwin
os: macos-latest
target_rustflags: ''
- target: x86_64-apple-darwin
os: macos-latest
target_rustflags: ''
- target: x86_64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: ''
- target: x86_64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: ''
- target: aarch64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: ''
- target: aarch64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: ''
- target: x86_64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: ''
- target: x86_64-unknown-linux-musl
os: ubuntu-20.04
target_rustflags: ''
- target: aarch64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: ''
- target: armv7-unknown-linux-gnueabihf
os: ubuntu-20.04
target_rustflags: ''
- target: riscv64gc-unknown-linux-gnu
os: ubuntu-latest
target_rustflags: ''
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v4.1.3
- uses: actions/checkout@v4.1.7
- name: Update Rust Toolchain Target
run: |
echo "targets = ['${{matrix.target}}']" >> rust-toolchain.toml
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
uses: actions-rust-lang/setup-rust-toolchain@v1.9.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
with:
cache: false
rustflags: ''
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
uses: hustcer/setup-nu@v3.11
with:
version: 0.91.0
version: 0.93.0
- name: Release Nu Binary
id: nu
run: nu .github/workflows/release-pkg.nu
env:
RELEASE_TYPE: standard
OS: ${{ matrix.os }}
REF: ${{ github.ref }}
TARGET: ${{ matrix.target }}
_EXTRA_: ${{ matrix.extra }}
TARGET_RUSTFLAGS: ${{ matrix.target_rustflags }}
# REF: https://github.com/marketplace/actions/gh-release
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.4
if: ${{ startsWith(github.ref, 'refs/tags/') }}
with:
draft: true
files: ${{ steps.nu.outputs.archive }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
full:
name: Full
strategy:
fail-fast: false
matrix:
target:
- aarch64-apple-darwin
- x86_64-apple-darwin
- x86_64-pc-windows-msvc
- aarch64-pc-windows-msvc
- x86_64-unknown-linux-gnu
- x86_64-unknown-linux-musl
- aarch64-unknown-linux-gnu
extra: ['bin']
include:
- target: aarch64-apple-darwin
os: macos-latest
target_rustflags: '--features=dataframe'
- target: x86_64-apple-darwin
os: macos-latest
target_rustflags: '--features=dataframe'
- target: x86_64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: '--features=dataframe'
- target: x86_64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: '--features=dataframe'
- target: aarch64-pc-windows-msvc
extra: 'bin'
os: windows-latest
target_rustflags: '--features=dataframe'
- target: aarch64-pc-windows-msvc
extra: msi
os: windows-latest
target_rustflags: '--features=dataframe'
- target: x86_64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
- target: x86_64-unknown-linux-musl
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
- target: aarch64-unknown-linux-gnu
os: ubuntu-20.04
target_rustflags: '--features=dataframe'
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v4.1.3
- name: Update Rust Toolchain Target
run: |
echo "targets = ['${{matrix.target}}']" >> rust-toolchain.toml
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1.8.0
# WARN: Keep the rustflags to prevent from the winget submission error: `CAQuietExec: Error 0xc0000135`
with:
cache: false
rustflags: ''
- name: Setup Nushell
uses: hustcer/setup-nu@v3.10
with:
version: 0.91.0
- name: Release Nu Binary
id: nu
run: nu .github/workflows/release-pkg.nu
env:
RELEASE_TYPE: full
OS: ${{ matrix.os }}
REF: ${{ github.ref }}
TARGET: ${{ matrix.target }}
_EXTRA_: ${{ matrix.extra }}
TARGET_RUSTFLAGS: ${{ matrix.target_rustflags }}
# REF: https://github.com/marketplace/actions/gh-release
- name: Publish Archive
uses: softprops/action-gh-release@v2.0.4
uses: softprops/action-gh-release@v2.0.5
if: ${{ startsWith(github.ref, 'refs/tags/') }}
with:
draft: true


@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Actions Repository
uses: actions/checkout@v4.1.3
uses: actions/checkout@v4.1.7
- name: Check spelling
uses: crate-ci/typos@v1.20.10
uses: crate-ci/typos@v1.22.7

CITATION.cff (new file, 26 lines):

@ -0,0 +1,26 @@
cff-version: 1.2.0
title: 'Nushell'
message: >-
If you use this software and wish to cite it,
you can use the metadata from this file.
type: software
authors:
- name: "The Nushell Project Team"
identifiers:
- type: url
value: 'https://github.com/nushell/nushell'
description: Repository
repository-code: 'https://github.com/nushell/nushell'
url: 'https://www.nushell.sh/'
abstract: >-
The goal of the Nushell project is to take the Unix
philosophy of shells, where pipes connect simple commands
together, and bring it to the modern style of development.
Thus, rather than being either a shell, or a programming
language, Nushell connects both by bringing a rich
programming language and a full-featured shell together
into one package.
keywords:
- nushell
- shell
license: MIT


@ -55,7 +55,6 @@ It is good practice to cover your changes with a test. Also, try to think about
Tests can be found in different places:
* `/tests`
* `src/tests`
* command examples
* crate-specific tests
@ -72,11 +71,6 @@ Read cargo's documentation for more details: https://doc.rust-lang.org/cargo/ref
cargo run
```
- Build and run with dataframe support.
```nushell
cargo run --features=dataframe
```
- Run Clippy on Nushell:
```nushell
@ -94,11 +88,6 @@ Read cargo's documentation for more details: https://doc.rust-lang.org/cargo/ref
cargo test --workspace
```
along with dataframe tests
```nushell
cargo test --workspace --features=dataframe
```
or via the `toolkit.nu` command:
```nushell
use toolkit.nu test
@ -241,7 +230,7 @@ You can help us to make the review process a smooth experience:
- Choose what simplifies having confidence in the conflict resolution and the review. **Merge commits in your branch are OK** in the squash model.
- Feel free to notify your reviewers or affected PR authors if your change might cause larger conflicts with another change.
- During the rollup of multiple PRs, we may choose to resolve merge conflicts and CI failures ourselves. (Allow maintainers to push to your branch to enable us to do this quickly.)
## License
We use the [MIT License](https://github.com/nushell/nushell/blob/main/LICENSE) in all of our Nushell projects. If you are including or referencing a crate that uses the [GPL License](https://www.gnu.org/licenses/gpl-3.0.en.html#license-text) unfortunately we will not be able to accept your PR.

Cargo.lock (generated, 1212 changed lines): file diff suppressed because it is too large

@ -11,7 +11,7 @@ license = "MIT"
name = "nu"
repository = "https://github.com/nushell/nushell"
rust-version = "1.77.2"
version = "0.93.0"
version = "0.95.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@ -32,7 +32,6 @@ members = [
"crates/nu-cmd-extra",
"crates/nu-cmd-lang",
"crates/nu-cmd-plugin",
"crates/nu-cmd-dataframe",
"crates/nu-command",
"crates/nu-color-config",
"crates/nu-explore",
@ -40,6 +39,7 @@ members = [
"crates/nu-lsp",
"crates/nu-pretty-hex",
"crates/nu-protocol",
"crates/nu-derive-value",
"crates/nu-plugin",
"crates/nu-plugin-core",
"crates/nu-plugin-engine",
@ -64,16 +64,18 @@ members = [
[workspace.dependencies]
alphanumeric-sort = "1.5"
ansi-str = "0.8"
base64 = "0.22"
anyhow = "1.0.82"
base64 = "0.22.1"
bracoxide = "0.1.2"
brotli = "5.0"
byteorder = "1.5"
bytesize = "1.3"
calamine = "0.24.0"
chardetng = "0.1.17"
chrono = { default-features = false, version = "0.4" }
chrono = { default-features = false, version = "0.4.34" }
chrono-humanize = "0.2.3"
chrono-tz = "0.8"
convert_case = "0.6"
crossbeam-channel = "0.5.8"
crossterm = "0.27"
csv = "1.3"
@ -93,7 +95,7 @@ heck = "0.5.0"
human-date-parser = "0.1.1"
indexmap = "2.2"
indicatif = "0.17"
interprocess = "1.2.1"
interprocess = "2.2.0"
is_executable = "1.0"
itertools = "0.12"
libc = "0.2"
@ -118,28 +120,31 @@ num-traits = "0.2"
omnipath = "0.1"
once_cell = "1.18"
open = "5.1"
os_pipe = "1.1"
os_pipe = { version = "1.2", features = ["io_safety"] }
pathdiff = "0.2"
percent-encoding = "2"
pretty_assertions = "1.4"
print-positions = "0.6"
proc-macro-error = { version = "1.0", default-features = false }
proc-macro2 = "1.0"
procfs = "0.16.0"
pwd = "1.3"
quick-xml = "0.31.0"
quickcheck = "1.0"
quickcheck_macros = "1.0"
quote = "1.0"
rand = "0.8"
ratatui = "0.26"
rayon = "1.10"
reedline = "0.32.0"
regex = "1.9.5"
rmp = "0.8"
rmp-serde = "1.2"
rmp-serde = "1.3"
ropey = "1.6.1"
roxmltree = "0.19"
rstest = { version = "0.18", default-features = false }
rusqlite = "0.31"
rust-embed = "8.3.0"
rust-embed = "8.4.0"
same-file = "1.0"
serde = { version = "1.0", default-features = false }
serde_json = "1.0"
@ -147,6 +152,7 @@ serde_urlencoded = "0.7.1"
serde_yaml = "0.9"
sha2 = "0.10"
strip-ansi-escapes = "0.2.0"
syn = "2.0"
sysinfo = "0.30"
tabled = { version = "0.14.0", default-features = false }
tempfile = "3.10"
@ -159,13 +165,13 @@ unicode-segmentation = "1.11"
unicode-width = "0.1"
ureq = { version = "2.9", default-features = false }
url = "2.2"
uu_cp = "0.0.25"
uu_mkdir = "0.0.25"
uu_mktemp = "0.0.25"
uu_mv = "0.0.25"
uu_whoami = "0.0.25"
uu_uname = "0.0.25"
uucore = "0.0.25"
uu_cp = "0.0.26"
uu_mkdir = "0.0.26"
uu_mktemp = "0.0.26"
uu_mv = "0.0.26"
uu_whoami = "0.0.26"
uu_uname = "0.0.26"
uucore = "0.0.26"
uuid = "1.8.0"
v_htmlescape = "0.15.0"
wax = "0.6"
@ -174,33 +180,31 @@ windows = "0.54"
winreg = "0.52"
[dependencies]
nu-cli = { path = "./crates/nu-cli", version = "0.93.0" }
nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.93.0" }
nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.93.0" }
nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.93.0", optional = true }
nu-cmd-dataframe = { path = "./crates/nu-cmd-dataframe", version = "0.93.0", features = [
"dataframe",
], optional = true }
nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.93.0" }
nu-command = { path = "./crates/nu-command", version = "0.93.0" }
nu-engine = { path = "./crates/nu-engine", version = "0.93.0" }
nu-explore = { path = "./crates/nu-explore", version = "0.93.0" }
nu-lsp = { path = "./crates/nu-lsp/", version = "0.93.0" }
nu-parser = { path = "./crates/nu-parser", version = "0.93.0" }
nu-path = { path = "./crates/nu-path", version = "0.93.0" }
nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.93.0" }
nu-protocol = { path = "./crates/nu-protocol", version = "0.93.0" }
nu-std = { path = "./crates/nu-std", version = "0.93.0" }
nu-system = { path = "./crates/nu-system", version = "0.93.0" }
nu-utils = { path = "./crates/nu-utils", version = "0.93.0" }
nu-cli = { path = "./crates/nu-cli", version = "0.95.0" }
nu-cmd-base = { path = "./crates/nu-cmd-base", version = "0.95.0" }
nu-cmd-lang = { path = "./crates/nu-cmd-lang", version = "0.95.0" }
nu-cmd-plugin = { path = "./crates/nu-cmd-plugin", version = "0.95.0", optional = true }
nu-cmd-extra = { path = "./crates/nu-cmd-extra", version = "0.95.0" }
nu-command = { path = "./crates/nu-command", version = "0.95.0" }
nu-engine = { path = "./crates/nu-engine", version = "0.95.0" }
nu-explore = { path = "./crates/nu-explore", version = "0.95.0" }
nu-lsp = { path = "./crates/nu-lsp/", version = "0.95.0" }
nu-parser = { path = "./crates/nu-parser", version = "0.95.0" }
nu-path = { path = "./crates/nu-path", version = "0.95.0" }
nu-plugin-engine = { path = "./crates/nu-plugin-engine", optional = true, version = "0.95.0" }
nu-protocol = { path = "./crates/nu-protocol", version = "0.95.0" }
nu-std = { path = "./crates/nu-std", version = "0.95.0" }
nu-system = { path = "./crates/nu-system", version = "0.95.0" }
nu-utils = { path = "./crates/nu-utils", version = "0.95.0" }
reedline = { workspace = true, features = ["bashisms", "sqlite"] }
crossterm = { workspace = true }
ctrlc = { workspace = true }
dirs-next = { workspace = true }
log = { workspace = true }
miette = { workspace = true, features = ["fancy-no-backtrace", "fancy"] }
mimalloc = { version = "0.1.37", default-features = false, optional = true }
mimalloc = { version = "0.1.42", default-features = false, optional = true }
serde_json = { workspace = true }
simplelog = "0.12"
time = "0.3"
@ -221,12 +225,12 @@ nix = { workspace = true, default-features = false, features = [
] }
[dev-dependencies]
nu-test-support = { path = "./crates/nu-test-support", version = "0.93.0" }
nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.93.0" }
nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.93.0" }
nu-test-support = { path = "./crates/nu-test-support", version = "0.95.0" }
nu-plugin-protocol = { path = "./crates/nu-plugin-protocol", version = "0.95.0" }
nu-plugin-core = { path = "./crates/nu-plugin-core", version = "0.95.0" }
assert_cmd = "2.0"
dirs-next = { workspace = true }
divan = "0.1.14"
tango-bench = "0.5"
pretty_assertions = { workspace = true }
rstest = { workspace = true, default-features = false }
serial_test = "3.1"
@ -247,7 +251,6 @@ default = ["default-no-clipboard", "system-clipboard"]
# See https://github.com/nushell/nushell/pull/11535
default-no-clipboard = [
"plugin",
"which-support",
"trash-support",
"sqlite",
"mimalloc",
@ -267,12 +270,8 @@ system-clipboard = [
]
# Stable (Default)
which-support = ["nu-command/which-support", "nu-cmd-lang/which-support"]
trash-support = ["nu-command/trash-support", "nu-cmd-lang/trash-support"]
# Dataframe feature for nushell
dataframe = ["dep:nu-cmd-dataframe", "nu-cmd-lang/dataframe"]
# SQLite commands for nushell
sqlite = ["nu-command/sqlite", "nu-cmd-lang/sqlite"]
@ -311,4 +310,4 @@ bench = false
# Run individual benchmarks like `cargo bench -- <regex>` e.g. `cargo bench -- parse`
[[bench]]
name = "benchmarks"
harness = false
harness = false


@ -52,7 +52,7 @@ To use `Nu` in GitHub Action, check [setup-nu](https://github.com/marketplace/ac
Detailed installation instructions can be found in the [installation chapter of the book](https://www.nushell.sh/book/installation.html). Nu is available via many package managers:
[![Packaging status](https://repology.org/badge/vertical-allrepos/nushell.svg)](https://repology.org/project/nushell/versions)
[![Packaging status](https://repology.org/badge/vertical-allrepos/nushell.svg?columns=3)](https://repology.org/project/nushell/versions)
For details about which platforms the Nushell team actively supports, see [our platform support policy](devdocs/PLATFORM_SUPPORT.md).
@ -222,6 +222,7 @@ Please submit an issue or PR to be added to this list.
- [clap](https://github.com/clap-rs/clap/tree/master/clap_complete_nushell)
- [Dorothy](http://github.com/bevry/dorothy)
- [Direnv](https://github.com/direnv/direnv/blob/master/docs/hook.md#nushell)
- [x-cmd](https://x-cmd.com/mod/nu)
## Contributing


@ -1,97 +1,39 @@
use nu_cli::{eval_source, evaluate_commands};
use nu_parser::parse;
use nu_plugin_core::{Encoder, EncodingType};
use nu_plugin_protocol::{PluginCallResponse, PluginOutput};
use nu_protocol::{
engine::{EngineState, Stack},
eval_const::create_nu_constant,
PipelineData, Span, Spanned, Value, NU_VARIABLE_ID,
PipelineData, Span, Spanned, Value,
};
use nu_std::load_standard_library;
use nu_utils::{get_default_config, get_default_env};
use std::path::{Path, PathBuf};
use std::rc::Rc;
fn main() {
// Run registered benchmarks.
divan::main();
}
use std::hint::black_box;
use tango_bench::{benchmark_fn, tango_benchmarks, tango_main, IntoBenchmarks};
fn load_bench_commands() -> EngineState {
nu_command::add_shell_command_context(nu_cmd_lang::create_default_context())
}
fn canonicalize_path(engine_state: &EngineState, path: &Path) -> PathBuf {
let cwd = engine_state.current_work_dir();
if path.exists() {
match nu_path::canonicalize_with(path, cwd) {
Ok(canon_path) => canon_path,
Err(_) => path.to_owned(),
}
} else {
path.to_owned()
}
}
fn get_home_path(engine_state: &EngineState) -> PathBuf {
nu_path::home_dir()
.map(|path| canonicalize_path(engine_state, &path))
.unwrap_or_default()
}
fn setup_engine() -> EngineState {
let mut engine_state = load_bench_commands();
let home_path = get_home_path(&engine_state);
let cwd = std::env::current_dir()
.unwrap()
.into_os_string()
.into_string()
.unwrap();
// parsing config.nu breaks without PWD set, so set a valid path
engine_state.add_env_var(
"PWD".into(),
Value::string(home_path.to_string_lossy(), Span::test_data()),
);
engine_state.add_env_var("PWD".into(), Value::string(cwd, Span::test_data()));
let nu_const = create_nu_constant(&engine_state, Span::unknown())
.expect("Failed to create nushell constant.");
engine_state.set_variable_const_val(NU_VARIABLE_ID, nu_const);
engine_state.generate_nu_constant();
engine_state
}
fn bench_command(bencher: divan::Bencher, scaled_command: String) {
bench_command_with_custom_stack_and_engine(
bencher,
scaled_command,
Stack::new(),
setup_engine(),
)
}
fn bench_command_with_custom_stack_and_engine(
bencher: divan::Bencher,
scaled_command: String,
stack: nu_protocol::engine::Stack,
mut engine: EngineState,
) {
load_standard_library(&mut engine).unwrap();
let commands = Spanned {
span: Span::unknown(),
item: scaled_command,
};
bencher
.with_inputs(|| engine.clone())
.bench_values(|mut engine| {
evaluate_commands(
&commands,
&mut engine,
&mut stack.clone(),
PipelineData::empty(),
None,
false,
)
.unwrap();
})
}
fn setup_stack_and_engine_from_command(command: &str) -> (Stack, EngineState) {
let mut engine = setup_engine();
let commands = Spanned {
@ -105,269 +47,13 @@ fn setup_stack_and_engine_from_command(command: &str) -> (Stack, EngineState) {
&mut engine,
&mut stack,
PipelineData::empty(),
None,
false,
Default::default(),
)
.unwrap();
(stack, engine)
}
// FIXME: All benchmarks live in this 1 file to speed up build times when benchmarking.
// When the *_benchmarks functions were in different files, `cargo bench` would build
// an executable for every single one - incredibly slowly. Would be nice to figure out
// a way to split things up again.
#[divan::bench]
fn load_standard_lib(bencher: divan::Bencher) {
let engine = setup_engine();
bencher
.with_inputs(|| engine.clone())
.bench_values(|mut engine| {
load_standard_library(&mut engine).unwrap();
})
}
#[divan::bench_group]
mod record {
use super::*;
fn create_flat_record_string(n: i32) -> String {
let mut s = String::from("let record = {");
for i in 0..n {
s.push_str(&format!("col_{}: {}", i, i));
if i < n - 1 {
s.push_str(", ");
}
}
s.push('}');
s
}
fn create_nested_record_string(depth: i32) -> String {
let mut s = String::from("let record = {");
for _ in 0..depth {
s.push_str("col: {");
}
s.push_str("col_final: 0");
for _ in 0..depth {
s.push('}');
}
s.push('}');
s
}
#[divan::bench(args = [1, 10, 100, 1000])]
fn create(bencher: divan::Bencher, n: i32) {
bench_command(bencher, create_flat_record_string(n));
}
#[divan::bench(args = [1, 10, 100, 1000])]
fn flat_access(bencher: divan::Bencher, n: i32) {
let (stack, engine) = setup_stack_and_engine_from_command(&create_flat_record_string(n));
bench_command_with_custom_stack_and_engine(
bencher,
"$record.col_0 | ignore".to_string(),
stack,
engine,
);
}
#[divan::bench(args = [1, 2, 4, 8, 16, 32, 64, 128])]
fn nest_access(bencher: divan::Bencher, depth: i32) {
let (stack, engine) =
setup_stack_and_engine_from_command(&create_nested_record_string(depth));
let nested_access = ".col".repeat(depth as usize);
bench_command_with_custom_stack_and_engine(
bencher,
format!("$record{} | ignore", nested_access),
stack,
engine,
);
}
}
#[divan::bench_group]
mod table {
use super::*;
fn create_example_table_nrows(n: i32) -> String {
let mut s = String::from("let table = [[foo bar baz]; ");
for i in 0..n {
s.push_str(&format!("[0, 1, {i}]"));
if i < n - 1 {
s.push_str(", ");
}
}
s.push(']');
s
}
#[divan::bench(args = [1, 10, 100, 1000])]
fn create(bencher: divan::Bencher, n: i32) {
bench_command(bencher, create_example_table_nrows(n));
}
#[divan::bench(args = [1, 10, 100, 1000])]
fn get(bencher: divan::Bencher, n: i32) {
let (stack, engine) = setup_stack_and_engine_from_command(&create_example_table_nrows(n));
bench_command_with_custom_stack_and_engine(
bencher,
"$table | get bar | math sum | ignore".to_string(),
stack,
engine,
);
}
#[divan::bench(args = [1, 10, 100, 1000])]
fn select(bencher: divan::Bencher, n: i32) {
let (stack, engine) = setup_stack_and_engine_from_command(&create_example_table_nrows(n));
bench_command_with_custom_stack_and_engine(
bencher,
"$table | select foo baz | ignore".to_string(),
stack,
engine,
);
}
}
#[divan::bench_group]
mod eval_commands {
use super::*;
#[divan::bench(args = [100, 1_000, 10_000])]
fn interleave(bencher: divan::Bencher, n: i32) {
bench_command(
bencher,
format!("seq 1 {n} | wrap a | interleave {{ seq 1 {n} | wrap b }} | ignore"),
)
}
#[divan::bench(args = [100, 1_000, 10_000])]
fn interleave_with_ctrlc(bencher: divan::Bencher, n: i32) {
let mut engine = setup_engine();
engine.ctrlc = Some(std::sync::Arc::new(std::sync::atomic::AtomicBool::new(
false,
)));
load_standard_library(&mut engine).unwrap();
let commands = Spanned {
span: Span::unknown(),
item: format!("seq 1 {n} | wrap a | interleave {{ seq 1 {n} | wrap b }} | ignore"),
};
bencher
.with_inputs(|| engine.clone())
.bench_values(|mut engine| {
evaluate_commands(
&commands,
&mut engine,
&mut nu_protocol::engine::Stack::new(),
PipelineData::empty(),
None,
false,
)
.unwrap();
})
}
#[divan::bench(args = [1, 5, 10, 100, 1_000])]
fn for_range(bencher: divan::Bencher, n: i32) {
bench_command(bencher, format!("(for $x in (1..{}) {{ sleep 50ns }})", n))
}
#[divan::bench(args = [1, 5, 10, 100, 1_000])]
fn each(bencher: divan::Bencher, n: i32) {
bench_command(
bencher,
format!("(1..{}) | each {{|_| sleep 50ns }} | ignore", n),
)
}
#[divan::bench(args = [1, 5, 10, 100, 1_000])]
fn par_each_1t(bencher: divan::Bencher, n: i32) {
bench_command(
bencher,
format!("(1..{}) | par-each -t 1 {{|_| sleep 50ns }} | ignore", n),
)
}
#[divan::bench(args = [1, 5, 10, 100, 1_000])]
fn par_each_2t(bencher: divan::Bencher, n: i32) {
bench_command(
bencher,
format!("(1..{}) | par-each -t 2 {{|_| sleep 50ns }} | ignore", n),
)
}
}
#[divan::bench_group()]
mod parser_benchmarks {
use super::*;
#[divan::bench()]
fn parse_default_config_file(bencher: divan::Bencher) {
let engine_state = setup_engine();
let default_env = get_default_config().as_bytes();
bencher
.with_inputs(|| nu_protocol::engine::StateWorkingSet::new(&engine_state))
.bench_refs(|working_set| parse(working_set, None, default_env, false))
}
#[divan::bench()]
fn parse_default_env_file(bencher: divan::Bencher) {
let engine_state = setup_engine();
let default_env = get_default_env().as_bytes();
bencher
.with_inputs(|| nu_protocol::engine::StateWorkingSet::new(&engine_state))
.bench_refs(|working_set| parse(working_set, None, default_env, false))
}
}
#[divan::bench_group()]
mod eval_benchmarks {
use super::*;
#[divan::bench()]
fn eval_default_env(bencher: divan::Bencher) {
let default_env = get_default_env().as_bytes();
let fname = "default_env.nu";
bencher
.with_inputs(|| (setup_engine(), nu_protocol::engine::Stack::new()))
.bench_values(|(mut engine_state, mut stack)| {
eval_source(
&mut engine_state,
&mut stack,
default_env,
fname,
PipelineData::empty(),
false,
)
})
}
#[divan::bench()]
fn eval_default_config(bencher: divan::Bencher) {
let default_env = get_default_config().as_bytes();
let fname = "default_config.nu";
bencher
.with_inputs(|| (setup_engine(), nu_protocol::engine::Stack::new()))
.bench_values(|(mut engine_state, mut stack)| {
eval_source(
&mut engine_state,
&mut stack,
default_env,
fname,
PipelineData::empty(),
false,
)
})
}
}
// generate a new table data with `row_cnt` rows, `col_cnt` columns.
fn encoding_test_data(row_cnt: usize, col_cnt: usize) -> Value {
let record = Value::test_record(
@ -379,76 +65,423 @@ fn encoding_test_data(row_cnt: usize, col_cnt: usize) -> Value {
Value::list(vec![record; row_cnt], Span::test_data())
}
#[divan::bench_group()]
mod encoding_benchmarks {
use super::*;
#[divan::bench(args = [(100, 5), (10000, 15)])]
fn json_encode(bencher: divan::Bencher, (row_cnt, col_cnt): (usize, usize)) {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"json").unwrap();
bencher
.with_inputs(Vec::new)
.bench_values(|mut res| encoder.encode(&test_data, &mut res))
}
#[divan::bench(args = [(100, 5), (10000, 15)])]
fn msgpack_encode(bencher: divan::Bencher, (row_cnt, col_cnt): (usize, usize)) {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"msgpack").unwrap();
bencher
.with_inputs(Vec::new)
.bench_values(|mut res| encoder.encode(&test_data, &mut res))
}
fn bench_command(
name: &str,
command: &str,
stack: Stack,
engine: EngineState,
) -> impl IntoBenchmarks {
let commands = Spanned {
span: Span::unknown(),
item: command.to_string(),
};
[benchmark_fn(name, move |b| {
let commands = commands.clone();
let stack = stack.clone();
let engine = engine.clone();
b.iter(move || {
let mut stack = stack.clone();
let mut engine = engine.clone();
#[allow(clippy::unit_arg)]
black_box(
evaluate_commands(
&commands,
&mut engine,
&mut stack,
PipelineData::empty(),
Default::default(),
)
.unwrap(),
);
})
})]
}
#[divan::bench_group()]
mod decoding_benchmarks {
use super::*;
#[divan::bench(args = [(100, 5), (10000, 15)])]
fn json_decode(bencher: divan::Bencher, (row_cnt, col_cnt): (usize, usize)) {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"json").unwrap();
let mut res = vec![];
encoder.encode(&test_data, &mut res).unwrap();
bencher
.with_inputs(|| {
let mut binary_data = std::io::Cursor::new(res.clone());
binary_data.set_position(0);
binary_data
})
.bench_values(|mut binary_data| -> Result<Option<PluginOutput>, _> {
encoder.decode(&mut binary_data)
})
}
#[divan::bench(args = [(100, 5), (10000, 15)])]
fn msgpack_decode(bencher: divan::Bencher, (row_cnt, col_cnt): (usize, usize)) {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"msgpack").unwrap();
let mut res = vec![];
encoder.encode(&test_data, &mut res).unwrap();
bencher
.with_inputs(|| {
let mut binary_data = std::io::Cursor::new(res.clone());
binary_data.set_position(0);
binary_data
})
.bench_values(|mut binary_data| -> Result<Option<PluginOutput>, _> {
encoder.decode(&mut binary_data)
})
}
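/// Build a tango benchmark that runs raw `source` bytes (reported under
/// `fname`) through `eval_source` on freshly cloned engine/stack state.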
fn bench_eval_source(
name: &str,
fname: String,
source: Vec<u8>,
stack: Stack,
engine: EngineState,
) -> impl IntoBenchmarks {
[benchmark_fn(name, move |b| {
let stack = stack.clone();
let engine = engine.clone();
let fname = fname.clone();
let source = source.clone();
b.iter(move || {
let mut stack = stack.clone();
let mut engine = engine.clone();
let fname: &str = &fname;
let source: &[u8] = &source;
black_box(eval_source(
&mut engine,
&mut stack,
source,
fname,
PipelineData::empty(),
false,
));
})
})]
}
/// Load the standard library into the engine.
fn bench_load_standard_lib() -> impl IntoBenchmarks {
[benchmark_fn("load_standard_lib", move |b| {
let engine = setup_engine();
b.iter(move || {
let mut engine = engine.clone();
load_standard_library(&mut engine)
})
})]
}
fn create_flat_record_string(n: i32) -> String {
let mut s = String::from("let record = {");
for i in 0..n {
s.push_str(&format!("col_{}: {}", i, i));
if i < n - 1 {
s.push_str(", ");
}
}
s.push('}');
s
}
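// Build a Nushell snippet of the form `let record = {col: {col: {col_final: 0}}}`
// nested `depth` levels deep.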
fn create_nested_record_string(depth: i32) -> String {
let mut s = String::from("let record = {");
for _ in 0..depth {
s.push_str("col: {");
}
s.push_str("col_final: 0");
for _ in 0..depth {
s.push('}');
}
s.push('}');
s
}
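// Build a Nushell snippet of the form `let table = [[foo bar baz]; [0, 1, 0], ...]`
// with `n` rows.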
fn create_example_table_nrows(n: i32) -> String {
let mut s = String::from("let table = [[foo bar baz]; ");
for i in 0..n {
s.push_str(&format!("[0, 1, {i}]"));
if i < n - 1 {
s.push_str(", ");
}
}
s.push(']');
s
}
fn bench_record_create(n: i32) -> impl IntoBenchmarks {
bench_command(
&format!("record_create_{n}"),
&create_flat_record_string(n),
Stack::new(),
setup_engine(),
)
}
fn bench_record_flat_access(n: i32) -> impl IntoBenchmarks {
let setup_command = create_flat_record_string(n);
let (stack, engine) = setup_stack_and_engine_from_command(&setup_command);
bench_command(
&format!("record_flat_access_{n}"),
"$record.col_0 | ignore",
stack,
engine,
)
}
fn bench_record_nested_access(n: i32) -> impl IntoBenchmarks {
let setup_command = create_nested_record_string(n);
let (stack, engine) = setup_stack_and_engine_from_command(&setup_command);
let nested_access = ".col".repeat(n as usize);
bench_command(
&format!("record_nested_access_{n}"),
&format!("$record{} | ignore", nested_access),
stack,
engine,
)
}
fn bench_table_create(n: i32) -> impl IntoBenchmarks {
bench_command(
&format!("table_create_{n}"),
&create_example_table_nrows(n),
Stack::new(),
setup_engine(),
)
}
fn bench_table_get(n: i32) -> impl IntoBenchmarks {
let setup_command = create_example_table_nrows(n);
let (stack, engine) = setup_stack_and_engine_from_command(&setup_command);
bench_command(
&format!("table_get_{n}"),
"$table | get bar | math sum | ignore",
stack,
engine,
)
}
fn bench_table_select(n: i32) -> impl IntoBenchmarks {
let setup_command = create_example_table_nrows(n);
let (stack, engine) = setup_stack_and_engine_from_command(&setup_command);
bench_command(
&format!("table_select_{n}"),
"$table | select foo baz | ignore",
stack,
engine,
)
}
fn bench_eval_interleave(n: i32) -> impl IntoBenchmarks {
let engine = setup_engine();
let stack = Stack::new();
bench_command(
&format!("eval_interleave_{n}"),
&format!("seq 1 {n} | wrap a | interleave {{ seq 1 {n} | wrap b }} | ignore"),
stack,
engine,
)
}
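// Same as `bench_eval_interleave`, but with a ctrl-c flag installed on the
// engine so the interrupt check is part of what gets measured.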
fn bench_eval_interleave_with_ctrlc(n: i32) -> impl IntoBenchmarks {
let mut engine = setup_engine();
engine.ctrlc = Some(std::sync::Arc::new(std::sync::atomic::AtomicBool::new(
false,
)));
let stack = Stack::new();
bench_command(
&format!("eval_interleave_with_ctrlc_{n}"),
&format!("seq 1 {n} | wrap a | interleave {{ seq 1 {n} | wrap b }} | ignore"),
stack,
engine,
)
}
fn bench_eval_for(n: i32) -> impl IntoBenchmarks {
let engine = setup_engine();
let stack = Stack::new();
bench_command(
&format!("eval_for_{n}"),
&format!("(for $x in (1..{n}) {{ 1 }}) | ignore"),
stack,
engine,
)
}
fn bench_eval_each(n: i32) -> impl IntoBenchmarks {
let engine = setup_engine();
let stack = Stack::new();
bench_command(
&format!("eval_each_{n}"),
&format!("(1..{n}) | each {{|_| 1 }} | ignore"),
stack,
engine,
)
}
fn bench_eval_par_each(n: i32) -> impl IntoBenchmarks {
let engine = setup_engine();
let stack = Stack::new();
bench_command(
&format!("eval_par_each_{n}"),
&format!("(1..{}) | par-each -t 2 {{|_| 1 }} | ignore", n),
stack,
engine,
)
}
fn bench_eval_default_config() -> impl IntoBenchmarks {
let default_config = get_default_config().as_bytes().to_vec();
let fname = "default_config.nu".to_string();
bench_eval_source(
"eval_default_config",
fname,
default_config,
Stack::new(),
setup_engine(),
)
}
fn bench_eval_default_env() -> impl IntoBenchmarks {
let default_env = get_default_env().as_bytes().to_vec();
let fname = "default_env.nu".to_string();
bench_eval_source(
"eval_default_env",
fname,
default_env,
Stack::new(),
setup_engine(),
)
}
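// Benchmark JSON-encoding a plugin call response carrying a table of the
// given size.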
fn encode_json(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let test_data = Rc::new(PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
));
let encoder = Rc::new(EncodingType::try_from_bytes(b"json").unwrap());
[benchmark_fn(
format!("encode_json_{}_{}", row_cnt, col_cnt),
move |b| {
let encoder = encoder.clone();
let test_data = test_data.clone();
b.iter(move || {
let mut res = Vec::new();
encoder.encode(&*test_data, &mut res).unwrap();
})
},
)]
}
fn encode_msgpack(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let test_data = Rc::new(PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
));
let encoder = Rc::new(EncodingType::try_from_bytes(b"msgpack").unwrap());
[benchmark_fn(
format!("encode_msgpack_{}_{}", row_cnt, col_cnt),
move |b| {
let encoder = encoder.clone();
let test_data = test_data.clone();
b.iter(move || {
let mut res = Vec::new();
encoder.encode(&*test_data, &mut res).unwrap();
})
},
)]
}
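// Decoding benchmarks pre-encode the payload once, then measure decoding from
// a fresh cursor on each iteration.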
fn decode_json(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"json").unwrap();
let mut res = vec![];
encoder.encode(&test_data, &mut res).unwrap();
[benchmark_fn(
format!("decode_json_{}_{}", row_cnt, col_cnt),
move |b| {
let res = res.clone();
b.iter(move || {
let mut binary_data = std::io::Cursor::new(res.clone());
binary_data.set_position(0);
let _: Result<Option<PluginOutput>, _> =
black_box(encoder.decode(&mut binary_data));
})
},
)]
}
fn decode_msgpack(row_cnt: usize, col_cnt: usize) -> impl IntoBenchmarks {
let test_data = PluginOutput::CallResponse(
0,
PluginCallResponse::value(encoding_test_data(row_cnt, col_cnt)),
);
let encoder = EncodingType::try_from_bytes(b"msgpack").unwrap();
let mut res = vec![];
encoder.encode(&test_data, &mut res).unwrap();
[benchmark_fn(
format!("decode_msgpack_{}_{}", row_cnt, col_cnt),
move |b| {
let res = res.clone();
b.iter(move || {
let mut binary_data = std::io::Cursor::new(res.clone());
binary_data.set_position(0);
let _: Result<Option<PluginOutput>, _> =
black_box(encoder.decode(&mut binary_data));
})
},
)]
}
tango_benchmarks!(
bench_load_standard_lib(),
// Data types
// Record
bench_record_create(1),
bench_record_create(10),
bench_record_create(100),
bench_record_create(1_000),
bench_record_flat_access(1),
bench_record_flat_access(10),
bench_record_flat_access(100),
bench_record_flat_access(1_000),
bench_record_nested_access(1),
bench_record_nested_access(2),
bench_record_nested_access(4),
bench_record_nested_access(8),
bench_record_nested_access(16),
bench_record_nested_access(32),
bench_record_nested_access(64),
bench_record_nested_access(128),
// Table
bench_table_create(1),
bench_table_create(10),
bench_table_create(100),
bench_table_create(1_000),
bench_table_get(1),
bench_table_get(10),
bench_table_get(100),
bench_table_get(1_000),
bench_table_select(1),
bench_table_select(10),
bench_table_select(100),
bench_table_select(1_000),
// Eval
// Interleave
bench_eval_interleave(100),
bench_eval_interleave(1_000),
bench_eval_interleave(10_000),
bench_eval_interleave_with_ctrlc(100),
bench_eval_interleave_with_ctrlc(1_000),
bench_eval_interleave_with_ctrlc(10_000),
// For
bench_eval_for(1),
bench_eval_for(10),
bench_eval_for(100),
bench_eval_for(1_000),
bench_eval_for(10_000),
// Each
bench_eval_each(1),
bench_eval_each(10),
bench_eval_each(100),
bench_eval_each(1_000),
bench_eval_each(10_000),
// Par-Each
bench_eval_par_each(1),
bench_eval_par_each(10),
bench_eval_par_each(100),
bench_eval_par_each(1_000),
bench_eval_par_each(10_000),
// Config
bench_eval_default_config(),
// Env
bench_eval_default_env(),
// Encode
// Json
encode_json(100, 5),
encode_json(10000, 15),
// MsgPack
encode_msgpack(100, 5),
encode_msgpack(10000, 15),
// Decode
// Json
decode_json(100, 5),
decode_json(10000, 15),
// MsgPack
decode_msgpack(100, 5),
decode_msgpack(10000, 15)
);
tango_main!();
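// `tango_benchmarks!`/`tango_main!` wire the benchmarks above into the tango
// harness; with a typical tango-bench setup they are run via `cargo bench`
// (assumption: the crate's default bench configuration).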

@ -5,26 +5,27 @@ repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cli"
edition = "2021"
license = "MIT"
name = "nu-cli"
version = "0.93.0"
version = "0.95.0"
[lib]
bench = false
[dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.93.0" }
nu-command = { path = "../nu-command", version = "0.93.0" }
nu-test-support = { path = "../nu-test-support", version = "0.93.0" }
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.95.0" }
nu-command = { path = "../nu-command", version = "0.95.0" }
nu-test-support = { path = "../nu-test-support", version = "0.95.0" }
rstest = { workspace = true, default-features = false }
tempfile = { workspace = true }
[dependencies]
nu-cmd-base = { path = "../nu-cmd-base", version = "0.93.0" }
nu-engine = { path = "../nu-engine", version = "0.93.0" }
nu-path = { path = "../nu-path", version = "0.93.0" }
nu-parser = { path = "../nu-parser", version = "0.93.0" }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.93.0", optional = true }
nu-protocol = { path = "../nu-protocol", version = "0.93.0" }
nu-utils = { path = "../nu-utils", version = "0.93.0" }
nu-color-config = { path = "../nu-color-config", version = "0.93.0" }
nu-cmd-base = { path = "../nu-cmd-base", version = "0.95.0" }
nu-engine = { path = "../nu-engine", version = "0.95.0" }
nu-path = { path = "../nu-path", version = "0.95.0" }
nu-parser = { path = "../nu-parser", version = "0.95.0" }
nu-plugin-engine = { path = "../nu-plugin-engine", version = "0.95.0", optional = true }
nu-protocol = { path = "../nu-protocol", version = "0.95.0" }
nu-utils = { path = "../nu-utils", version = "0.95.0" }
nu-color-config = { path = "../nu-color-config", version = "0.95.0" }
nu-ansi-term = { workspace = true }
reedline = { workspace = true, features = ["bashisms", "sqlite"] }
@ -38,7 +39,6 @@ miette = { workspace = true, features = ["fancy-no-backtrace"] }
lscolors = { workspace = true, default-features = false, features = ["nu-ansi-term"] }
once_cell = { workspace = true }
percent-encoding = { workspace = true }
pathdiff = { workspace = true }
sysinfo = { workspace = true }
unicode-segmentation = { workspace = true }
uuid = { workspace = true, features = ["v4"] }
@ -46,4 +46,4 @@ which = { workspace = true }
[features]
plugin = ["nu-plugin-engine"]
system-clipboard = ["reedline/system_clipboard"]
system-clipboard = ["reedline/system_clipboard"]

@ -107,7 +107,7 @@ impl Command for History {
file: history_path.display().to_string(),
span: head,
})?
.into_pipeline_data(ctrlc)),
.into_pipeline_data(head, ctrlc)),
HistoryFileFormat::Sqlite => Ok(history_reader
.and_then(|h| {
h.search(SearchQuery::everything(SearchDirection::Forward, None))
@ -122,7 +122,7 @@ impl Command for History {
file: history_path.display().to_string(),
span: head,
})?
.into_pipeline_data(ctrlc)),
.into_pipeline_data(head, ctrlc)),
}
}
} else {

@ -36,16 +36,6 @@ For more information on input and keybindings, check:
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
Ok(Value::string(
get_full_help(
&Keybindings.signature(),
&Keybindings.examples(),
engine_state,
stack,
self.is_parser_keyword(),
),
call.head,
)
.into_pipeline_data())
Ok(Value::string(get_full_help(self, engine_state, stack), call.head).into_pipeline_data())
}
}

@ -1,13 +1,18 @@
use crate::completions::{CompletionOptions, SortBy};
use nu_protocol::{engine::StateWorkingSet, levenshtein_distance, Span};
use nu_protocol::{
engine::{Stack, StateWorkingSet},
levenshtein_distance, Span,
};
use reedline::Suggestion;
// Completer trait represents the three stages of the completion
// fetch, filter and sort
pub trait Completer {
#[allow(clippy::too_many_arguments)]
fn fetch(
&mut self,
working_set: &StateWorkingSet,
stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,

@ -4,16 +4,14 @@ use crate::{
};
use nu_parser::FlatShape;
use nu_protocol::{
engine::{CachedFile, EngineState, StateWorkingSet},
engine::{CachedFile, Stack, StateWorkingSet},
Span,
};
use reedline::Suggestion;
use std::sync::Arc;
use super::SemanticSuggestion;
pub struct CommandCompletion {
engine_state: Arc<EngineState>,
flattened: Vec<(Span, FlatShape)>,
flat_shape: FlatShape,
force_completion_after_space: bool,
@ -21,14 +19,11 @@ pub struct CommandCompletion {
impl CommandCompletion {
pub fn new(
engine_state: Arc<EngineState>,
_: &StateWorkingSet,
flattened: Vec<(Span, FlatShape)>,
flat_shape: FlatShape,
force_completion_after_space: bool,
) -> Self {
Self {
engine_state,
flattened,
flat_shape,
force_completion_after_space,
@ -37,13 +32,14 @@ impl CommandCompletion {
fn external_command_completion(
&self,
working_set: &StateWorkingSet,
prefix: &str,
match_algorithm: MatchAlgorithm,
) -> Vec<String> {
let mut executables = vec![];
// OS-agnostic way to get the PATH env var
let paths = self.engine_state.get_path_env_var();
let paths = working_set.permanent_state.get_path_env_var();
if let Some(paths) = paths {
if let Ok(paths) = paths.as_list() {
@ -52,7 +48,10 @@ impl CommandCompletion {
if let Ok(mut contents) = std::fs::read_dir(path.as_ref()) {
while let Some(Ok(item)) = contents.next() {
if self.engine_state.config.max_external_completion_results
if working_set
.permanent_state
.config
.max_external_completion_results
> executables.len() as i64
&& !executables.contains(
&item
@ -114,7 +113,7 @@ impl CommandCompletion {
if find_externals {
let results_external = self
.external_command_completion(&partial, match_algorithm)
.external_command_completion(working_set, &partial, match_algorithm)
.into_iter()
.map(move |x| SemanticSuggestion {
suggestion: Suggestion {
@ -161,6 +160,7 @@ impl Completer for CommandCompletion {
fn fetch(
&mut self,
working_set: &StateWorkingSet,
_stack: &Stack,
_prefix: Vec<u8>,
span: Span,
offset: usize,
@ -266,6 +266,8 @@ pub fn is_passthrough_command(working_set_file_contents: &[CachedFile]) -> bool
#[cfg(test)]
mod command_completions_tests {
use super::*;
use nu_protocol::engine::EngineState;
use std::sync::Arc;
#[test]
fn test_find_non_whitespace_index() {

@ -22,10 +22,10 @@ pub struct NuCompleter {
}
impl NuCompleter {
pub fn new(engine_state: Arc<EngineState>, stack: Stack) -> Self {
pub fn new(engine_state: Arc<EngineState>, stack: Arc<Stack>) -> Self {
Self {
engine_state,
stack: stack.reset_out_dest().capture(),
stack: Stack::with_parent(stack).reset_out_dest().capture(),
}
}
@ -52,8 +52,15 @@ impl NuCompleter {
};
// Fetch
let mut suggestions =
completer.fetch(working_set, prefix.clone(), new_span, offset, pos, &options);
let mut suggestions = completer.fetch(
working_set,
&self.stack,
prefix.clone(),
new_span,
offset,
pos,
&options,
);
// Sort
suggestions = completer.sort(suggestions, prefix);
@ -96,9 +103,8 @@ impl NuCompleter {
PipelineData::empty(),
);
match result {
Ok(pd) => {
let value = pd.into_value(span);
match result.and_then(|data| data.into_value(span)) {
Ok(value) => {
if let Value::List { vals, .. } = value {
let result =
map_value_completions(vals.iter(), Span::new(span.start, span.end), offset);
@ -175,11 +181,8 @@ impl NuCompleter {
// Variables completion
if prefix.starts_with(b"$") || most_left_var.is_some() {
let mut completer = VariableCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
most_left_var.unwrap_or((vec![], vec![])),
);
let mut completer =
VariableCompletion::new(most_left_var.unwrap_or((vec![], vec![])));
return self.process_completion(
&mut completer,
@ -224,8 +227,6 @@ impl NuCompleter {
|| (flat_idx == 0 && working_set.get_span_contents(new_span).is_empty())
{
let mut completer = CommandCompletion::new(
self.engine_state.clone(),
&working_set,
flattened.clone(),
// flat_idx,
FlatShape::String,
@ -253,10 +254,7 @@ impl NuCompleter {
|| prev_expr_str == b"overlay use"
|| prev_expr_str == b"source-env"
{
let mut completer = DotNuCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
);
let mut completer = DotNuCompletion::new();
return self.process_completion(
&mut completer,
@ -267,10 +265,7 @@ impl NuCompleter {
pos,
);
} else if prev_expr_str == b"ls" {
let mut completer = FileCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
);
let mut completer = FileCompletion::new();
return self.process_completion(
&mut completer,
@ -288,7 +283,6 @@ impl NuCompleter {
match &flat.1 {
FlatShape::Custom(decl_id) => {
let mut completer = CustomCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
*decl_id,
initial_line,
@ -304,10 +298,7 @@ impl NuCompleter {
);
}
FlatShape::Directory => {
let mut completer = DirectoryCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
);
let mut completer = DirectoryCompletion::new();
return self.process_completion(
&mut completer,
@ -319,10 +310,7 @@ impl NuCompleter {
);
}
FlatShape::Filepath | FlatShape::GlobPattern => {
let mut completer = FileCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
);
let mut completer = FileCompletion::new();
return self.process_completion(
&mut completer,
@ -335,8 +323,6 @@ impl NuCompleter {
}
flat_shape => {
let mut completer = CommandCompletion::new(
self.engine_state.clone(),
&working_set,
flattened.clone(),
// flat_idx,
flat_shape.clone(),
@ -369,10 +355,7 @@ impl NuCompleter {
}
// Check for file completion
let mut completer = FileCompletion::new(
self.engine_state.clone(),
self.stack.clone(),
);
let mut completer = FileCompletion::new();
out = self.process_completion(
&mut completer,
&working_set,
@ -557,7 +540,7 @@ mod completer_tests {
result.err().unwrap()
);
let mut completer = NuCompleter::new(engine_state.into(), Stack::new());
let mut completer = NuCompleter::new(engine_state.into(), Arc::new(Stack::new()));
let dataset = [
("sudo", false, "", Vec::new()),
("sudo l", true, "l", vec!["ls", "let", "lines", "loop"]),

@ -1,46 +1,71 @@
use crate::completions::{matches, CompletionOptions};
use nu_ansi_term::Style;
use nu_engine::env_to_string;
use nu_path::home_dir;
use nu_path::{expand_to_real_path, home_dir};
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
Span,
};
use nu_utils::get_ls_colors;
use std::{
ffi::OsStr,
path::{is_separator, Component, Path, PathBuf, MAIN_SEPARATOR as SEP},
use std::path::{
is_separator, Component, Path, PathBuf, MAIN_SEPARATOR as SEP, MAIN_SEPARATOR_STR,
};
#[derive(Clone, Default)]
pub struct PathBuiltFromString {
parts: Vec<String>,
isdir: bool,
}
fn complete_rec(
partial: &[String],
partial: &[&str],
built: &PathBuiltFromString,
cwd: &Path,
options: &CompletionOptions,
dir: bool,
isdir: bool,
) -> Vec<PathBuf> {
) -> Vec<PathBuiltFromString> {
let mut completions = vec![];
if let Ok(result) = cwd.read_dir() {
for entry in result.filter_map(|e| e.ok()) {
let entry_name = entry.file_name().to_string_lossy().into_owned();
let path = entry.path();
if let Some((&base, rest)) = partial.split_first() {
if (base == "." || base == "..") && (isdir || !rest.is_empty()) {
let mut built = built.clone();
built.parts.push(base.to_string());
built.isdir = true;
return complete_rec(rest, &built, cwd, options, dir, isdir);
}
}
if !dir || path.is_dir() {
match partial.first() {
Some(base) if matches(base, &entry_name, options) => {
let partial = &partial[1..];
if !partial.is_empty() || isdir {
completions.extend(complete_rec(partial, &path, options, dir, isdir));
if entry_name.eq(base) {
break;
}
let mut built_path = cwd.to_path_buf();
for part in &built.parts {
built_path.push(part);
}
let Ok(result) = built_path.read_dir() else {
return completions;
};
for entry in result.filter_map(|e| e.ok()) {
let entry_name = entry.file_name().to_string_lossy().into_owned();
let entry_isdir = entry.path().is_dir();
let mut built = built.clone();
built.parts.push(entry_name.clone());
built.isdir = entry_isdir;
if !dir || entry_isdir {
match partial.split_first() {
Some((base, rest)) => {
if matches(base, &entry_name, options) {
if !rest.is_empty() || isdir {
completions
.extend(complete_rec(rest, &built, cwd, options, dir, isdir));
} else {
completions.push(path)
completions.push(built);
}
}
None => completions.push(path),
_ => {}
}
None => {
completions.push(built);
}
}
}
@ -48,33 +73,23 @@ fn complete_rec(
completions
}
#[derive(Debug)]
enum OriginalCwd {
None,
Home(PathBuf),
Some(PathBuf),
// referencing a single local file
Local(PathBuf),
Home,
Prefix(String),
}
impl OriginalCwd {
fn apply(&self, p: &Path) -> String {
let mut ret = match self {
Self::None => p.to_string_lossy().into_owned(),
Self::Some(base) => pathdiff::diff_paths(p, base)
.unwrap_or(p.to_path_buf())
.to_string_lossy()
.into_owned(),
Self::Home(home) => match p.strip_prefix(home) {
Ok(suffix) => format!("~{}{}", SEP, suffix.to_string_lossy()),
_ => p.to_string_lossy().into_owned(),
},
Self::Local(base) => Path::new(".")
.join(pathdiff::diff_paths(p, base).unwrap_or(p.to_path_buf()))
.to_string_lossy()
.into_owned(),
fn apply(&self, mut p: PathBuiltFromString) -> String {
match self {
Self::None => {}
Self::Home => p.parts.insert(0, "~".to_string()),
Self::Prefix(s) => p.parts.insert(0, s.clone()),
};
if p.is_dir() {
let mut ret = p.parts.join(MAIN_SEPARATOR_STR);
if p.isdir {
ret.push(SEP);
}
ret
@ -116,79 +131,72 @@ pub fn complete_item(
};
get_ls_colors(ls_colors_env_str)
});
let mut cwd = cwd_pathbuf.clone();
let mut prefix_len = 0;
let mut original_cwd = OriginalCwd::None;
let mut components_vec: Vec<Component> = Path::new(&partial).components().collect();
// Path components that end with a single "." get normalized away,
// so if the partial path ends in a literal "." we must add it back in manually
if partial.ends_with('.') && partial.len() > 1 {
components_vec.push(Component::Normal(OsStr::new(".")));
};
let mut components = components_vec.into_iter().peekable();
let mut cwd = match components.peek().cloned() {
let mut components = Path::new(&partial).components().peekable();
match components.peek().cloned() {
Some(c @ Component::Prefix(..)) => {
// windows only by definition
components.next();
if let Some(Component::RootDir) = components.peek().cloned() {
components.next();
};
[c, Component::RootDir].iter().collect()
cwd = [c, Component::RootDir].iter().collect();
prefix_len = c.as_os_str().len();
original_cwd = OriginalCwd::Prefix(c.as_os_str().to_string_lossy().into_owned());
}
Some(c @ Component::RootDir) => {
components.next();
PathBuf::from(c.as_os_str())
// This is kind of a hack. When joining an empty string with the rest,
// we add the slash automagically
cwd = PathBuf::from(c.as_os_str());
prefix_len = 1;
original_cwd = OriginalCwd::Prefix(String::new());
}
Some(Component::Normal(home)) if home.to_string_lossy() == "~" => {
components.next();
original_cwd = OriginalCwd::Home(home_dir().unwrap_or(cwd_pathbuf.clone()));
home_dir().unwrap_or(cwd_pathbuf)
}
Some(Component::CurDir) => {
components.next();
original_cwd = match components.peek().cloned() {
Some(Component::Normal(_)) | None => OriginalCwd::Local(cwd_pathbuf.clone()),
_ => OriginalCwd::Some(cwd_pathbuf.clone()),
};
cwd_pathbuf
}
_ => {
original_cwd = OriginalCwd::Some(cwd_pathbuf.clone());
cwd_pathbuf
cwd = home_dir().unwrap_or(cwd_pathbuf);
prefix_len = 1;
original_cwd = OriginalCwd::Home;
}
_ => {}
};
let mut partial = vec![];
let after_prefix = &partial[prefix_len..];
let partial: Vec<_> = after_prefix
.strip_prefix(is_separator)
.unwrap_or(after_prefix)
.split(is_separator)
.filter(|s| !s.is_empty())
.collect();
for component in components {
match component {
Component::Prefix(..) => unreachable!(),
Component::RootDir => unreachable!(),
Component::CurDir => {}
Component::ParentDir => {
if partial.pop().is_none() {
cwd.pop();
}
}
Component::Normal(c) => partial.push(c.to_string_lossy().into_owned()),
}
}
complete_rec(partial.as_slice(), &cwd, options, want_directory, isdir)
.into_iter()
.map(|p| {
let path = original_cwd.apply(&p);
let style = ls_colors.as_ref().map(|lsc| {
lsc.style_for_path_with_metadata(
&path,
std::fs::symlink_metadata(&path).ok().as_ref(),
)
.map(lscolors::Style::to_nu_ansi_term_style)
.unwrap_or_default()
});
(span, escape_path(path, want_directory), style)
})
.collect()
complete_rec(
partial.as_slice(),
&PathBuiltFromString::default(),
&cwd,
options,
want_directory,
isdir,
)
.into_iter()
.map(|p| {
let path = original_cwd.apply(p);
let style = ls_colors.as_ref().map(|lsc| {
lsc.style_for_path_with_metadata(
&path,
std::fs::symlink_metadata(expand_to_real_path(&path))
.ok()
.as_ref(),
)
.map(lscolors::Style::to_nu_ansi_term_style)
.unwrap_or_default()
});
(span, escape_path(path, want_directory), style)
})
.collect()
}
// Fix files or folders with quotes or hashes

@ -6,14 +6,13 @@ use nu_engine::eval_call;
use nu_protocol::{
ast::{Argument, Call, Expr, Expression},
debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet},
engine::{Stack, StateWorkingSet},
PipelineData, Span, Type, Value,
};
use nu_utils::IgnoreCaseExt;
use std::{collections::HashMap, sync::Arc};
use std::collections::HashMap;
pub struct CustomCompletion {
engine_state: Arc<EngineState>,
stack: Stack,
decl_id: usize,
line: String,
@ -21,10 +20,9 @@ pub struct CustomCompletion {
}
impl CustomCompletion {
pub fn new(engine_state: Arc<EngineState>, stack: Stack, decl_id: usize, line: String) -> Self {
pub fn new(stack: Stack, decl_id: usize, line: String) -> Self {
Self {
engine_state,
stack: stack.reset_out_dest().capture(),
stack,
decl_id,
line,
sort_by: SortBy::None,
@ -35,7 +33,8 @@ impl CustomCompletion {
impl Completer for CustomCompletion {
fn fetch(
&mut self,
_: &StateWorkingSet,
working_set: &StateWorkingSet,
_stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
@ -47,24 +46,22 @@ impl Completer for CustomCompletion {
// Call custom declaration
let result = eval_call::<WithoutDebug>(
&self.engine_state,
working_set.permanent_state,
&mut self.stack,
&Call {
decl_id: self.decl_id,
head: span,
arguments: vec![
Argument::Positional(Expression {
span: Span::unknown(),
ty: Type::String,
expr: Expr::String(self.line.clone()),
custom_completion: None,
}),
Argument::Positional(Expression {
span: Span::unknown(),
ty: Type::Int,
expr: Expr::Int(line_pos as i64),
custom_completion: None,
}),
Argument::Positional(Expression::new_unknown(
Expr::String(self.line.clone()),
Span::unknown(),
Type::String,
)),
Argument::Positional(Expression::new_unknown(
Expr::Int(line_pos as i64),
Span::unknown(),
Type::Int,
)),
],
parser_info: HashMap::new(),
},
@ -75,55 +72,53 @@ impl Completer for CustomCompletion {
// Parse result
let suggestions = result
.map(|pd| {
let value = pd.into_value(span);
match &value {
Value::Record { val, .. } => {
let completions = val
.get("completions")
.and_then(|val| {
val.as_list()
.ok()
.map(|it| map_value_completions(it.iter(), span, offset))
})
.unwrap_or_default();
let options = val.get("options");
.and_then(|data| data.into_value(span))
.map(|value| match &value {
Value::Record { val, .. } => {
let completions = val
.get("completions")
.and_then(|val| {
val.as_list()
.ok()
.map(|it| map_value_completions(it.iter(), span, offset))
})
.unwrap_or_default();
let options = val.get("options");
if let Some(Value::Record { val: options, .. }) = &options {
let should_sort = options
.get("sort")
.and_then(|val| val.as_bool().ok())
.unwrap_or(false);
if let Some(Value::Record { val: options, .. }) = &options {
let should_sort = options
.get("sort")
.and_then(|val| val.as_bool().ok())
.unwrap_or(false);
if should_sort {
self.sort_by = SortBy::Ascending;
}
custom_completion_options = Some(CompletionOptions {
case_sensitive: options
.get("case_sensitive")
.and_then(|val| val.as_bool().ok())
.unwrap_or(true),
positional: options
.get("positional")
.and_then(|val| val.as_bool().ok())
.unwrap_or(true),
match_algorithm: match options.get("completion_algorithm") {
Some(option) => option
.coerce_string()
.ok()
.and_then(|option| option.try_into().ok())
.unwrap_or(MatchAlgorithm::Prefix),
None => completion_options.match_algorithm,
},
});
if should_sort {
self.sort_by = SortBy::Ascending;
}
completions
custom_completion_options = Some(CompletionOptions {
case_sensitive: options
.get("case_sensitive")
.and_then(|val| val.as_bool().ok())
.unwrap_or(true),
positional: options
.get("positional")
.and_then(|val| val.as_bool().ok())
.unwrap_or(true),
match_algorithm: match options.get("completion_algorithm") {
Some(option) => option
.coerce_string()
.ok()
.and_then(|option| option.try_into().ok())
.unwrap_or(MatchAlgorithm::Prefix),
None => completion_options.match_algorithm,
},
});
}
Value::List { vals, .. } => map_value_completions(vals.iter(), span, offset),
_ => vec![],
completions
}
Value::List { vals, .. } => map_value_completions(vals.iter(), span, offset),
_ => vec![],
})
.unwrap_or_default();

@ -8,25 +8,16 @@ use nu_protocol::{
levenshtein_distance, Span,
};
use reedline::Suggestion;
use std::{
path::{Path, MAIN_SEPARATOR as SEP},
sync::Arc,
};
use std::path::{Path, MAIN_SEPARATOR as SEP};
use super::SemanticSuggestion;
#[derive(Clone)]
pub struct DirectoryCompletion {
engine_state: Arc<EngineState>,
stack: Stack,
}
#[derive(Clone, Default)]
pub struct DirectoryCompletion {}
impl DirectoryCompletion {
pub fn new(engine_state: Arc<EngineState>, stack: Stack) -> Self {
Self {
engine_state,
stack,
}
pub fn new() -> Self {
Self::default()
}
}
@ -34,22 +25,24 @@ impl Completer for DirectoryCompletion {
fn fetch(
&mut self,
working_set: &StateWorkingSet,
stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
_: usize,
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let AdjustView { prefix, span, .. } = adjust_if_intermediate(&prefix, working_set, span);
// Filter only the folders
#[allow(deprecated)]
let output: Vec<_> = directory_completion(
span,
&prefix,
&self.engine_state.current_work_dir(),
&working_set.permanent_state.current_work_dir(),
options,
self.engine_state.as_ref(),
&self.stack,
working_set.permanent_state,
stack,
)
.into_iter()
.map(move |x| SemanticSuggestion {

@ -1,39 +1,31 @@
use crate::completions::{file_path_completion, Completer, CompletionOptions, SortBy};
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
engine::{Stack, StateWorkingSet},
Span,
};
use reedline::Suggestion;
use std::{
path::{is_separator, Path, MAIN_SEPARATOR as SEP, MAIN_SEPARATOR_STR},
sync::Arc,
};
use std::path::{is_separator, Path, MAIN_SEPARATOR as SEP, MAIN_SEPARATOR_STR};
use super::SemanticSuggestion;
#[derive(Clone)]
pub struct DotNuCompletion {
engine_state: Arc<EngineState>,
stack: Stack,
}
#[derive(Clone, Default)]
pub struct DotNuCompletion {}
impl DotNuCompletion {
pub fn new(engine_state: Arc<EngineState>, stack: Stack) -> Self {
Self {
engine_state,
stack,
}
pub fn new() -> Self {
Self::default()
}
}
impl Completer for DotNuCompletion {
fn fetch(
&mut self,
_: &StateWorkingSet,
working_set: &StateWorkingSet,
stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
_: usize,
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let prefix_str = String::from_utf8_lossy(&prefix).replace('`', "");
@ -49,26 +41,25 @@ impl Completer for DotNuCompletion {
let mut is_current_folder = false;
// Fetch the lib dirs
let lib_dirs: Vec<String> =
if let Some(lib_dirs) = self.engine_state.get_env_var("NU_LIB_DIRS") {
lib_dirs
.as_list()
.into_iter()
.flat_map(|it| {
it.iter().map(|x| {
x.to_path()
.expect("internal error: failed to convert lib path")
})
let lib_dirs: Vec<String> = if let Some(lib_dirs) = working_set.get_env_var("NU_LIB_DIRS") {
lib_dirs
.as_list()
.into_iter()
.flat_map(|it| {
it.iter().map(|x| {
x.to_path()
.expect("internal error: failed to convert lib path")
})
.map(|it| {
it.into_os_string()
.into_string()
.expect("internal error: failed to convert OS path")
})
.collect()
} else {
vec![]
};
})
.map(|it| {
it.into_os_string()
.into_string()
.expect("internal error: failed to convert OS path")
})
.collect()
} else {
vec![]
};
// Check if the base_dir is a folder
// rsplit_once removes the separator
@ -84,7 +75,8 @@ impl Completer for DotNuCompletion {
partial = base_dir_partial;
} else {
// Fetch the current folder
let current_folder = self.engine_state.current_work_dir();
#[allow(deprecated)]
let current_folder = working_set.permanent_state.current_work_dir();
is_current_folder = true;
// Add the current folder and the lib dirs into the
@ -103,8 +95,8 @@ impl Completer for DotNuCompletion {
&partial,
&search_dir,
options,
self.engine_state.as_ref(),
&self.stack,
working_set.permanent_state,
stack,
);
completions
.into_iter()

@ -9,25 +9,16 @@ use nu_protocol::{
};
use nu_utils::IgnoreCaseExt;
use reedline::Suggestion;
use std::{
path::{Path, MAIN_SEPARATOR as SEP},
sync::Arc,
};
use std::path::{Path, MAIN_SEPARATOR as SEP};
use super::SemanticSuggestion;
#[derive(Clone)]
pub struct FileCompletion {
engine_state: Arc<EngineState>,
stack: Stack,
}
#[derive(Clone, Default)]
pub struct FileCompletion {}
impl FileCompletion {
pub fn new(engine_state: Arc<EngineState>, stack: Stack) -> Self {
Self {
engine_state,
stack,
}
pub fn new() -> Self {
Self::default()
}
}
@ -35,10 +26,11 @@ impl Completer for FileCompletion {
fn fetch(
&mut self,
working_set: &StateWorkingSet,
stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
_: usize,
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let AdjustView {
@ -47,14 +39,15 @@ impl Completer for FileCompletion {
readjusted,
} = adjust_if_intermediate(&prefix, working_set, span);
#[allow(deprecated)]
let output: Vec<_> = complete_item(
readjusted,
span,
&prefix,
&self.engine_state.current_work_dir(),
&working_set.permanent_state.current_work_dir(),
options,
self.engine_state.as_ref(),
&self.stack,
working_set.permanent_state,
stack,
)
.into_iter()
.map(move |x| SemanticSuggestion {

@ -1,7 +1,7 @@
use crate::completions::{Completer, CompletionOptions};
use nu_protocol::{
ast::{Expr, Expression},
engine::StateWorkingSet,
engine::{Stack, StateWorkingSet},
Span,
};
use reedline::Suggestion;
@ -23,10 +23,11 @@ impl Completer for FlagCompletion {
fn fetch(
&mut self,
working_set: &StateWorkingSet,
_stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
_: usize,
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
// Check if it's a flag

@ -3,30 +3,20 @@ use crate::completions::{
};
use nu_engine::{column::get_columns, eval_variable};
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
engine::{Stack, StateWorkingSet},
Span, Value,
};
use reedline::Suggestion;
use std::{str, sync::Arc};
use std::str;
#[derive(Clone)]
pub struct VariableCompletion {
engine_state: Arc<EngineState>, // TODO: Is engine state necessary? It's already a part of working set in fetch()
stack: Stack,
var_context: (Vec<u8>, Vec<Vec<u8>>), // tuple with $var and the sublevels (.b.c.d)
}
impl VariableCompletion {
pub fn new(
engine_state: Arc<EngineState>,
stack: Stack,
var_context: (Vec<u8>, Vec<Vec<u8>>),
) -> Self {
Self {
engine_state,
stack,
var_context,
}
pub fn new(var_context: (Vec<u8>, Vec<Vec<u8>>)) -> Self {
Self { var_context }
}
}
@ -34,10 +24,11 @@ impl Completer for VariableCompletion {
fn fetch(
&mut self,
working_set: &StateWorkingSet,
stack: &Stack,
prefix: Vec<u8>,
span: Span,
offset: usize,
_: usize,
_pos: usize,
options: &CompletionOptions,
) -> Vec<SemanticSuggestion> {
let mut output = vec![];
@ -54,7 +45,7 @@ impl Completer for VariableCompletion {
if !var_str.is_empty() {
// Completion for $env.<tab>
if var_str == "$env" {
let env_vars = self.stack.get_env_vars(&self.engine_state);
let env_vars = stack.get_env_vars(working_set.permanent_state);
// Return nested values
if sublevels_count > 0 {
@ -110,8 +101,8 @@ impl Completer for VariableCompletion {
if var_str == "$nu" {
// Eval nu var
if let Ok(nuval) = eval_variable(
&self.engine_state,
&self.stack,
working_set.permanent_state,
stack,
nu_protocol::NU_VARIABLE_ID,
nu_protocol::Span::new(current_span.start, current_span.end),
) {
@ -133,7 +124,7 @@ impl Completer for VariableCompletion {
// Completion other variable types
if let Some(var_id) = var_id {
// Extract the variable value from the stack
let var = self.stack.get_var(var_id, Span::new(span.start, span.end));
let var = stack.get_var(var_id, Span::new(span.start, span.end));
// If the value exists and it's of type Record
if let Ok(value) = var {
@ -207,7 +198,11 @@ impl Completer for VariableCompletion {
// Permanent state vars
// for scope in &self.engine_state.scope {
for overlay_frame in self.engine_state.active_overlays(&removed_overlays).rev() {
for overlay_frame in working_set
.permanent_state
.active_overlays(&removed_overlays)
.rev()
{
for v in &overlay_frame.vars {
if options.match_algorithm.matches_u8_insensitive(
options.case_sensitive,
@ -267,24 +262,6 @@ fn nested_suggestions(
output
}
Value::LazyRecord { val, .. } => {
// Add all the columns as completion
for column_name in val.column_names() {
output.push(SemanticSuggestion {
suggestion: Suggestion {
value: column_name.to_string(),
description: None,
style: None,
extra: None,
span: current_span,
append_whitespace: false,
},
kind: Some(kind.clone()),
});
}
output
}
Value::List { vals, .. } => {
for column_name in get_columns(vals.as_slice()) {
output.push(SemanticSuggestion {
@ -321,17 +298,6 @@ fn recursive_value(val: &Value, sublevels: &[Vec<u8>]) -> Result<Value, Span> {
Err(span)
}
}
Value::LazyRecord { val, .. } => {
for col in val.column_names() {
if col.as_bytes() == *sublevel {
let val = val.get_column_value(col).map_err(|_| span)?;
return recursive_value(&val, next_sublevels);
}
}
// Current sublevel value not found
Err(span)
}
Value::List { vals, .. } => {
for col in get_columns(vals.as_slice()) {
if col.as_bytes() == *sublevel {

@ -1,12 +1,12 @@
use crate::util::eval_source;
#[cfg(feature = "plugin")]
use nu_path::canonicalize_with;
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
report_error, HistoryFileFormat, PipelineData,
};
#[cfg(feature = "plugin")]
use nu_protocol::{ParseError, PluginRegistryFile, Spanned};
use nu_protocol::{engine::StateWorkingSet, report_error, ParseError, PluginRegistryFile, Spanned};
use nu_protocol::{
engine::{EngineState, Stack},
report_error_new, HistoryFileFormat, PipelineData,
};
#[cfg(feature = "plugin")]
use nu_utils::utils::perf;
use std::path::PathBuf;
@ -25,10 +25,9 @@ pub fn read_plugin_file(
plugin_file: Option<Spanned<String>>,
storage_path: &str,
) {
use nu_protocol::ShellError;
use std::path::Path;
use nu_protocol::{report_error_new, ShellError};
let span = plugin_file.as_ref().map(|s| s.span);
// Check and warn + abort if this is a .nu plugin file
@ -177,35 +176,36 @@ pub fn add_plugin_file(
use std::path::Path;
let working_set = StateWorkingSet::new(engine_state);
let cwd = working_set.get_cwd();
if let Some(plugin_file) = plugin_file {
let path = Path::new(&plugin_file.item);
let path_dir = path.parent().unwrap_or(path);
// Just try to canonicalize the directory of the plugin file first.
if let Ok(path_dir) = canonicalize_with(path_dir, &cwd) {
// Try to canonicalize the actual filename, but it's ok if that fails. The file doesn't
// have to exist.
let path = path_dir.join(path.file_name().unwrap_or(path.as_os_str()));
let path = canonicalize_with(&path, &cwd).unwrap_or(path);
engine_state.plugin_path = Some(path)
} else {
// It's an error if the directory for the plugin file doesn't exist.
report_error(
&working_set,
&ParseError::FileNotFound(
path_dir.to_string_lossy().into_owned(),
plugin_file.span,
),
);
if let Ok(cwd) = engine_state.cwd_as_string(None) {
if let Some(plugin_file) = plugin_file {
let path = Path::new(&plugin_file.item);
let path_dir = path.parent().unwrap_or(path);
// Just try to canonicalize the directory of the plugin file first.
if let Ok(path_dir) = canonicalize_with(path_dir, &cwd) {
// Try to canonicalize the actual filename, but it's ok if that fails. The file doesn't
// have to exist.
let path = path_dir.join(path.file_name().unwrap_or(path.as_os_str()));
let path = canonicalize_with(&path, &cwd).unwrap_or(path);
engine_state.plugin_path = Some(path)
} else {
// It's an error if the directory for the plugin file doesn't exist.
report_error(
&working_set,
&ParseError::FileNotFound(
path_dir.to_string_lossy().into_owned(),
plugin_file.span,
),
);
}
} else if let Some(mut plugin_path) = nu_path::config_dir() {
// Path to store plugins signatures
plugin_path.push(storage_path);
let mut plugin_path = canonicalize_with(&plugin_path, &cwd).unwrap_or(plugin_path);
plugin_path.push(PLUGIN_FILE);
let plugin_path = canonicalize_with(&plugin_path, &cwd).unwrap_or(plugin_path);
engine_state.plugin_path = Some(plugin_path);
}
} else if let Some(mut plugin_path) = nu_path::config_dir() {
// Path to store plugins signatures
plugin_path.push(storage_path);
let mut plugin_path = canonicalize_with(&plugin_path, &cwd).unwrap_or(plugin_path);
plugin_path.push(PLUGIN_FILE);
let plugin_path = canonicalize_with(&plugin_path, &cwd).unwrap_or(plugin_path);
engine_state.plugin_path = Some(plugin_path);
}
}
@ -235,16 +235,14 @@ pub fn eval_config_contents(
engine_state.file = prev_file;
// Merge the environment in case env vars changed in the config
match nu_engine::env::current_dir(engine_state, stack) {
match engine_state.cwd(Some(stack)) {
Ok(cwd) => {
if let Err(e) = engine_state.merge_env(stack, cwd) {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
report_error_new(engine_state, &e);
}
}
Err(e) => {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
report_error_new(engine_state, &e);
}
}
}
@ -265,14 +263,16 @@ pub(crate) fn get_history_path(storage_path: &str, mode: HistoryFileFormat) -> O
#[cfg(feature = "plugin")]
pub fn migrate_old_plugin_file(engine_state: &EngineState, storage_path: &str) -> bool {
use nu_protocol::{
report_error_new, PluginExample, PluginIdentity, PluginRegistryItem,
PluginRegistryItemData, PluginSignature, ShellError,
PluginExample, PluginIdentity, PluginRegistryItem, PluginRegistryItemData, PluginSignature,
ShellError,
};
use std::collections::BTreeMap;
let start_time = std::time::Instant::now();
let cwd = engine_state.current_work_dir();
let Ok(cwd) = engine_state.cwd_as_string(None) else {
return false;
};
let Some(config_dir) = nu_path::config_dir().and_then(|mut dir| {
dir.push(storage_path);
@ -306,14 +306,15 @@ pub fn migrate_old_plugin_file(engine_state: &EngineState, storage_path: &str) -
let mut engine_state = engine_state.clone();
let mut stack = Stack::new();
if !eval_source(
if eval_source(
&mut engine_state,
&mut stack,
&old_contents,
&old_plugin_file_path.to_string_lossy(),
PipelineData::Empty,
false,
) {
) != 0
{
return false;
}
@ -343,7 +344,10 @@ pub fn migrate_old_plugin_file(engine_state: &EngineState, storage_path: &str) -
name: identity.name().to_owned(),
filename: identity.filename().to_owned(),
shell: identity.shell().map(|p| p.to_owned()),
data: PluginRegistryItemData::Valid { commands },
data: PluginRegistryItemData::Valid {
metadata: Default::default(),
commands,
},
});
}

@ -1,12 +1,19 @@
use log::info;
use miette::Result;
use nu_engine::{convert_env_values, eval_block};
use nu_parser::parse;
use nu_protocol::{
debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet},
report_error, PipelineData, Spanned, Value,
report_error, PipelineData, ShellError, Spanned, Value,
};
use std::sync::Arc;
#[derive(Default)]
pub struct EvaluateCommandsOpts {
pub table_mode: Option<Value>,
pub error_style: Option<Value>,
pub no_newline: bool,
}
/// Run a command (or commands) given to us by the user
pub fn evaluate_commands(
@ -14,16 +21,35 @@ pub fn evaluate_commands(
engine_state: &mut EngineState,
stack: &mut Stack,
input: PipelineData,
table_mode: Option<Value>,
no_newline: bool,
) -> Result<Option<i64>> {
// Translate environment variables from Strings to Values
if let Some(e) = convert_env_values(engine_state, stack) {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
std::process::exit(1);
opts: EvaluateCommandsOpts,
) -> Result<(), ShellError> {
let EvaluateCommandsOpts {
table_mode,
error_style,
no_newline,
} = opts;
// Handle the configured error style early
if let Some(e_style) = error_style {
match e_style.coerce_str()?.parse() {
Ok(e_style) => {
Arc::make_mut(&mut engine_state.config).error_style = e_style;
}
Err(err) => {
return Err(ShellError::GenericError {
error: "Invalid value for `--error-style`".into(),
msg: err.into(),
span: Some(e_style.span()),
help: None,
inner: vec![],
});
}
}
}
// Translate environment variables from Strings to Values
convert_env_values(engine_state, stack)?;
// Parse the source code
let (block, delta) = {
if let Some(ref t_mode) = table_mode {
@ -41,7 +67,6 @@ pub fn evaluate_commands(
if let Some(err) = working_set.parse_errors.first() {
report_error(&working_set, err);
std::process::exit(1);
}
@ -49,35 +74,27 @@ pub fn evaluate_commands(
};
// Update permanent state
if let Err(err) = engine_state.merge_delta(delta) {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &err);
}
engine_state.merge_delta(delta)?;
// Run the block
let exit_code = match eval_block::<WithoutDebug>(engine_state, stack, &block, input) {
Ok(pipeline_data) => {
let mut config = engine_state.get_config().clone();
if let Some(t_mode) = table_mode {
config.table_mode = t_mode.coerce_str()?.parse().unwrap_or_default();
}
crate::eval_file::print_table_or_error(
engine_state,
stack,
pipeline_data,
&mut config,
no_newline,
)
}
Err(err) => {
let working_set = StateWorkingSet::new(engine_state);
let pipeline = eval_block::<WithoutDebug>(engine_state, stack, &block, input)?;
report_error(&working_set, &err);
std::process::exit(1);
if let PipelineData::Value(Value::Error { error, .. }, ..) = pipeline {
return Err(*error);
}
if let Some(t_mode) = table_mode {
Arc::make_mut(&mut engine_state.config).table_mode =
t_mode.coerce_str()?.parse().unwrap_or_default();
}
if let Some(status) = pipeline.print(engine_state, stack, no_newline, false)? {
if status.code() != 0 {
std::process::exit(status.code())
}
};
}
info!("evaluate {}:{}:{}", file!(), line!(), column!());
Ok(exit_code)
Ok(())
}

@ -1,15 +1,14 @@
use crate::util::eval_source;
use log::{info, trace};
use miette::{IntoDiagnostic, Result};
use nu_engine::{convert_env_values, current_dir, eval_block};
use nu_engine::{convert_env_values, eval_block};
use nu_parser::parse;
use nu_path::canonicalize_with;
use nu_protocol::{
debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet},
report_error, Config, PipelineData, ShellError, Span, Value,
report_error, PipelineData, ShellError, Span, Value,
};
use std::{io::Write, sync::Arc};
use std::sync::Arc;
/// Entry point for evaluating a file.
///
@ -21,73 +20,40 @@ pub fn evaluate_file(
engine_state: &mut EngineState,
stack: &mut Stack,
input: PipelineData,
) -> Result<()> {
) -> Result<(), ShellError> {
// Convert environment variables from Strings to Values and store them in the engine state.
if let Some(e) = convert_env_values(engine_state, stack) {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
std::process::exit(1);
}
convert_env_values(engine_state, stack)?;
let cwd = current_dir(engine_state, stack)?;
let cwd = engine_state.cwd_as_string(Some(stack))?;
let file_path = canonicalize_with(&path, cwd).unwrap_or_else(|e| {
let working_set = StateWorkingSet::new(engine_state);
report_error(
&working_set,
&ShellError::FileNotFoundCustom {
msg: format!("Could not access file '{}': {:?}", path, e.to_string()),
span: Span::unknown(),
},
);
std::process::exit(1);
});
let file_path =
canonicalize_with(&path, cwd).map_err(|err| ShellError::FileNotFoundCustom {
msg: format!("Could not access file '{path}': {err}"),
span: Span::unknown(),
})?;
let file_path_str = file_path.to_str().unwrap_or_else(|| {
let working_set = StateWorkingSet::new(engine_state);
report_error(
&working_set,
&ShellError::NonUtf8Custom {
msg: format!(
"Input file name '{}' is not valid UTF8",
file_path.to_string_lossy()
),
span: Span::unknown(),
},
);
std::process::exit(1);
});
let file_path_str = file_path
.to_str()
.ok_or_else(|| ShellError::NonUtf8Custom {
msg: format!(
"Input file name '{}' is not valid UTF8",
file_path.to_string_lossy()
),
span: Span::unknown(),
})?;
let file = std::fs::read(&file_path)
.into_diagnostic()
.unwrap_or_else(|e| {
let working_set = StateWorkingSet::new(engine_state);
report_error(
&working_set,
&ShellError::FileNotFoundCustom {
msg: format!(
"Could not read file '{}': {:?}",
file_path_str,
e.to_string()
),
span: Span::unknown(),
},
);
std::process::exit(1);
});
let file = std::fs::read(&file_path).map_err(|err| ShellError::FileNotFoundCustom {
msg: format!("Could not read file '{file_path_str}': {err}"),
span: Span::unknown(),
})?;
engine_state.file = Some(file_path.clone());
let parent = file_path.parent().unwrap_or_else(|| {
let working_set = StateWorkingSet::new(engine_state);
report_error(
&working_set,
&ShellError::FileNotFoundCustom {
msg: format!("The file path '{file_path_str}' does not have a parent"),
span: Span::unknown(),
},
);
std::process::exit(1);
});
let parent = file_path
.parent()
.ok_or_else(|| ShellError::FileNotFoundCustom {
msg: format!("The file path '{file_path_str}' does not have a parent"),
span: Span::unknown(),
})?;
stack.add_env_var(
"FILE_PWD".to_string(),
@ -127,119 +93,48 @@ pub fn evaluate_file(
}
// Merge the changes into the engine state.
engine_state
.merge_delta(working_set.delta)
.expect("merging delta into engine_state should succeed");
engine_state.merge_delta(working_set.delta)?;
// Check if the file contains a main command.
if engine_state.find_decl(b"main", &[]).is_some() {
let exit_code = if engine_state.find_decl(b"main", &[]).is_some() {
// Evaluate the file, but don't run main yet.
let pipeline_data =
eval_block::<WithoutDebug>(engine_state, stack, &block, PipelineData::empty());
let pipeline_data = match pipeline_data {
Err(ShellError::Return { .. }) => {
// Allow early return before main is run.
return Ok(());
}
x => x,
}
.unwrap_or_else(|e| {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
std::process::exit(1);
});
// Print the pipeline output of the file.
// The pipeline output of a file is the pipeline output of its last command.
let result = pipeline_data.print(engine_state, stack, true, false);
match result {
Err(err) => {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &err);
std::process::exit(1);
}
Ok(exit_code) => {
if exit_code != 0 {
std::process::exit(exit_code as i32);
let pipeline =
match eval_block::<WithoutDebug>(engine_state, stack, &block, PipelineData::empty()) {
Ok(data) => data,
Err(ShellError::Return { .. }) => {
// Allow early return before main is run.
return Ok(());
}
Err(err) => return Err(err),
};
// Print the pipeline output of the last command of the file.
if let Some(status) = pipeline.print(engine_state, stack, true, false)? {
if status.code() != 0 {
std::process::exit(status.code())
}
}
// Invoke the main command with arguments.
// Arguments containing whitespace are quoted, so they can safely be concatenated with whitespace.
let args = format!("main {}", args.join(" "));
if !eval_source(
eval_source(
engine_state,
stack,
args.as_bytes(),
"<commandline>",
input,
true,
) {
std::process::exit(1);
}
} else if !eval_source(engine_state, stack, &file, file_path_str, input, true) {
std::process::exit(1);
)
} else {
eval_source(engine_state, stack, &file, file_path_str, input, true)
};
if exit_code != 0 {
std::process::exit(exit_code)
}
info!("evaluate {}:{}:{}", file!(), line!(), column!());
Ok(())
}
pub(crate) fn print_table_or_error(
engine_state: &mut EngineState,
stack: &mut Stack,
mut pipeline_data: PipelineData,
config: &mut Config,
no_newline: bool,
) -> Option<i64> {
let exit_code = match &mut pipeline_data {
PipelineData::ExternalStream { exit_code, .. } => exit_code.take(),
_ => None,
};
// Change the engine_state config to use the passed in configuration
engine_state.set_config(config.clone());
if let PipelineData::Value(Value::Error { error, .. }, ..) = &pipeline_data {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &**error);
std::process::exit(1);
}
// We don't need to do anything special to print a table because print() handles it
print_or_exit(pipeline_data, engine_state, stack, no_newline);
// Make sure everything has finished
if let Some(exit_code) = exit_code {
let mut exit_code: Vec<_> = exit_code.into_iter().collect();
exit_code
.pop()
.and_then(|last_exit_code| match last_exit_code {
Value::Int { val: code, .. } => Some(code),
_ => None,
})
} else {
None
}
}
fn print_or_exit(
pipeline_data: PipelineData,
engine_state: &EngineState,
stack: &mut Stack,
no_newline: bool,
) {
let result = pipeline_data.print(engine_state, stack, no_newline, false);
let _ = std::io::stdout().flush();
let _ = std::io::stderr().flush();
if let Err(error) = result {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &error);
let _ = std::io::stderr().flush();
std::process::exit(1);
}
}

@ -17,7 +17,7 @@ mod validation;
pub use commands::add_cli_context;
pub use completions::{FileCompletion, NuCompleter, SemanticSuggestion, SuggestionKind};
pub use config_files::eval_config_contents;
pub use eval_cmds::evaluate_commands;
pub use eval_cmds::{evaluate_commands, EvaluateCommandsOpts};
pub use eval_file::evaluate_file;
pub use menus::NuHelpCompleter;
pub use nu_cmd_base::util::get_init_cwd;

@ -12,50 +12,49 @@ impl NuHelpCompleter {
}
fn completion_helper(&self, line: &str, pos: usize) -> Vec<Suggestion> {
let full_commands = self.0.get_signatures_with_examples(false);
let folded_line = line.to_folded_case();
//Vec<(Signature, Vec<Example>, bool, bool)> {
let mut commands = full_commands
.iter()
.filter(|(sig, _, _, _, _)| {
sig.name.to_folded_case().contains(&folded_line)
|| sig.usage.to_folded_case().contains(&folded_line)
|| sig
.search_terms
.iter()
let mut commands = self
.0
.get_decls_sorted(false)
.into_iter()
.filter_map(|(_, decl_id)| {
let decl = self.0.get_decl(decl_id);
(decl.name().to_folded_case().contains(&folded_line)
|| decl.usage().to_folded_case().contains(&folded_line)
|| decl
.search_terms()
.into_iter()
.any(|term| term.to_folded_case().contains(&folded_line))
|| sig.extra_usage.to_folded_case().contains(&folded_line)
|| decl.extra_usage().to_folded_case().contains(&folded_line))
.then_some(decl)
})
.collect::<Vec<_>>();
commands.sort_by(|(a, _, _, _, _), (b, _, _, _, _)| {
let a_distance = levenshtein_distance(line, &a.name);
let b_distance = levenshtein_distance(line, &b.name);
a_distance.cmp(&b_distance)
});
commands.sort_by_cached_key(|decl| levenshtein_distance(line, decl.name()));
commands
.into_iter()
.map(|(sig, examples, _, _, _)| {
.map(|decl| {
let mut long_desc = String::new();
let usage = &sig.usage;
let usage = decl.usage();
if !usage.is_empty() {
long_desc.push_str(usage);
long_desc.push_str("\r\n\r\n");
}
let extra_usage = &sig.extra_usage;
let extra_usage = decl.extra_usage();
if !extra_usage.is_empty() {
long_desc.push_str(extra_usage);
long_desc.push_str("\r\n\r\n");
}
let sig = decl.signature();
let _ = write!(long_desc, "Usage:\r\n > {}\r\n", sig.call_signature());
if !sig.named.is_empty() {
long_desc.push_str(&get_flags_section(Some(&*self.0.clone()), sig, |v| {
long_desc.push_str(&get_flags_section(Some(&*self.0.clone()), &sig, |v| {
v.to_parsable_string(", ", &self.0.config)
}))
}
@ -93,13 +92,14 @@ impl NuHelpCompleter {
}
}
let extra: Vec<String> = examples
let extra: Vec<String> = decl
.examples()
.iter()
.map(|example| example.example.replace('\n', "\r\n"))
.collect();
Suggestion {
value: sig.name.clone(),
value: decl.name().into(),
description: Some(long_desc),
style: None,
extra: Some(extra),


@ -59,8 +59,7 @@ impl Completer for NuMenuCompleter {
let res = eval_block::<WithoutDebug>(&self.engine_state, &mut self.stack, block, input);
-if let Ok(values) = res {
-let values = values.into_value(self.span);
+if let Ok(values) = res.and_then(|data| data.into_value(self.span)) {
convert_to_suggestions(values, line, pos, self.only_buffer_difference)
} else {
Vec::new()


@ -1,4 +1,10 @@
use crate::prompt_update::{POST_PROMPT_MARKER, PRE_PROMPT_MARKER};
use crate::prompt_update::{
POST_PROMPT_MARKER, PRE_PROMPT_MARKER, VSCODE_POST_PROMPT_MARKER, VSCODE_PRE_PROMPT_MARKER,
};
use nu_protocol::{
engine::{EngineState, Stack},
Value,
};
#[cfg(windows)]
use nu_utils::enable_vt_processing;
use reedline::{
@ -10,7 +16,8 @@ use std::borrow::Cow;
/// Nushell prompt definition
#[derive(Clone)]
pub struct NushellPrompt {
shell_integration: bool,
shell_integration_osc133: bool,
shell_integration_osc633: bool,
left_prompt_string: Option<String>,
right_prompt_string: Option<String>,
default_prompt_indicator: Option<String>,
@ -18,12 +25,20 @@ pub struct NushellPrompt {
default_vi_normal_prompt_indicator: Option<String>,
default_multiline_indicator: Option<String>,
render_right_prompt_on_last_line: bool,
engine_state: EngineState,
stack: Stack,
}
impl NushellPrompt {
pub fn new(shell_integration: bool) -> NushellPrompt {
pub fn new(
shell_integration_osc133: bool,
shell_integration_osc633: bool,
engine_state: EngineState,
stack: Stack,
) -> NushellPrompt {
NushellPrompt {
shell_integration,
shell_integration_osc133,
shell_integration_osc633,
left_prompt_string: None,
right_prompt_string: None,
default_prompt_indicator: None,
@ -31,6 +46,8 @@ impl NushellPrompt {
default_vi_normal_prompt_indicator: None,
default_multiline_indicator: None,
render_right_prompt_on_last_line: false,
engine_state,
stack,
}
}
@ -106,7 +123,19 @@ impl Prompt for NushellPrompt {
.to_string()
.replace('\n', "\r\n");
-if self.shell_integration {
+if self.shell_integration_osc633 {
if self.stack.get_env_var(&self.engine_state, "TERM_PROGRAM")
== Some(Value::test_string("vscode"))
{
// We're in vscode and we have osc633 enabled
format!("{VSCODE_PRE_PROMPT_MARKER}{prompt}{VSCODE_POST_PROMPT_MARKER}").into()
} else if self.shell_integration_osc133 {
// If we're in VSCode but we don't find the env var, but we have osc133 set, then use it
format!("{PRE_PROMPT_MARKER}{prompt}{POST_PROMPT_MARKER}").into()
} else {
prompt.into()
}
} else if self.shell_integration_osc133 {
format!("{PRE_PROMPT_MARKER}{prompt}{POST_PROMPT_MARKER}").into()
} else {
prompt.into()


@ -2,8 +2,8 @@ use crate::NushellPrompt;
use log::trace;
use nu_engine::ClosureEvalOnce;
use nu_protocol::{
engine::{EngineState, Stack, StateWorkingSet},
report_error, Config, PipelineData, Value,
engine::{EngineState, Stack},
report_error_new, Config, PipelineData, Value,
};
use reedline::Prompt;
@ -23,10 +23,37 @@ pub(crate) const TRANSIENT_PROMPT_INDICATOR_VI_NORMAL: &str =
"TRANSIENT_PROMPT_INDICATOR_VI_NORMAL";
pub(crate) const TRANSIENT_PROMPT_MULTILINE_INDICATOR: &str =
"TRANSIENT_PROMPT_MULTILINE_INDICATOR";
// Store all these Ansi Escape Markers here so they can be reused easily
// According to Daniel Imms @Tyriar, we need to do these this way:
// <133 A><prompt><133 B><command><133 C><command output>
pub(crate) const PRE_PROMPT_MARKER: &str = "\x1b]133;A\x1b\\";
pub(crate) const POST_PROMPT_MARKER: &str = "\x1b]133;B\x1b\\";
pub(crate) const PRE_EXECUTION_MARKER: &str = "\x1b]133;C\x1b\\";
#[allow(dead_code)]
pub(crate) const POST_EXECUTION_MARKER_PREFIX: &str = "\x1b]133;D;";
#[allow(dead_code)]
pub(crate) const POST_EXECUTION_MARKER_SUFFIX: &str = "\x1b\\";
// OSC633 is the same as OSC133 but specifically for VSCode
pub(crate) const VSCODE_PRE_PROMPT_MARKER: &str = "\x1b]633;A\x1b\\";
pub(crate) const VSCODE_POST_PROMPT_MARKER: &str = "\x1b]633;B\x1b\\";
#[allow(dead_code)]
pub(crate) const VSCODE_PRE_EXECUTION_MARKER: &str = "\x1b]633;C\x1b\\";
#[allow(dead_code)]
//"\x1b]633;D;{}\x1b\\"
pub(crate) const VSCODE_POST_EXECUTION_MARKER_PREFIX: &str = "\x1b]633;D;";
#[allow(dead_code)]
pub(crate) const VSCODE_POST_EXECUTION_MARKER_SUFFIX: &str = "\x1b\\";
#[allow(dead_code)]
pub(crate) const VSCODE_COMMANDLINE_MARKER: &str = "\x1b]633;E\x1b\\";
#[allow(dead_code)]
// "\x1b]633;P;Cwd={}\x1b\\"
pub(crate) const VSCODE_CWD_PROPERTY_MARKER_PREFIX: &str = "\x1b]633;P;Cwd=";
#[allow(dead_code)]
pub(crate) const VSCODE_CWD_PROPERTY_MARKER_SUFFIX: &str = "\x1b\\";
pub(crate) const RESET_APPLICATION_MODE: &str = "\x1b[?1l";
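As the comment above notes, the A/B/C/D markers bracket the prompt, the command, and its output. A minimal sketch of how a prompt gets wrapped, reusing the OSC 133 constants defined above (the `wrap_prompt` helper is illustrative, not nushell code):

```rust
const PRE_PROMPT_MARKER: &str = "\x1b]133;A\x1b\\";
const POST_PROMPT_MARKER: &str = "\x1b]133;B\x1b\\";

/// The terminal then sees <133 A><prompt><133 B>, so it knows exactly
/// where the prompt ends and user input begins.
fn wrap_prompt(prompt: &str) -> String {
    format!("{PRE_PROMPT_MARKER}{prompt}{POST_PROMPT_MARKER}")
}

fn main() {
    println!("{}", wrap_prompt("~/nushell> "));
}
```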
fn get_prompt_string(
prompt: &str,
@ -38,7 +65,7 @@ fn get_prompt_string(
.get_env_var(engine_state, prompt)
.and_then(|v| match v {
Value::Closure { val, .. } => {
-let result = ClosureEvalOnce::new(engine_state, stack, val)
+let result = ClosureEvalOnce::new(engine_state, stack, *val)
.run_with_input(PipelineData::Empty);
trace!(
@ -50,8 +77,7 @@ fn get_prompt_string(
result
.map_err(|err| {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &err);
report_error_new(engine_state, &err);
})
.ok()
}
@ -81,20 +107,34 @@ pub(crate) fn update_prompt(
stack: &mut Stack,
nu_prompt: &mut NushellPrompt,
) {
let left_prompt_string = get_prompt_string(PROMPT_COMMAND, config, engine_state, stack);
let configured_left_prompt_string =
match get_prompt_string(PROMPT_COMMAND, config, engine_state, stack) {
Some(s) => s,
None => "".to_string(),
};
// Now that we have the prompt string lets ansify it.
// <133 A><prompt><133 B><command><133 C><command output>
let left_prompt_string = if config.shell_integration {
if let Some(prompt_string) = left_prompt_string {
let left_prompt_string = if config.shell_integration_osc633 {
if stack.get_env_var(engine_state, "TERM_PROGRAM") == Some(Value::test_string("vscode")) {
// We're in vscode and we have osc633 enabled
Some(format!(
"{PRE_PROMPT_MARKER}{prompt_string}{POST_PROMPT_MARKER}"
"{VSCODE_PRE_PROMPT_MARKER}{configured_left_prompt_string}{VSCODE_POST_PROMPT_MARKER}"
))
} else if config.shell_integration_osc133 {
// If we're in VSCode but we don't find the env var, but we have osc133 set, then use it
Some(format!(
"{PRE_PROMPT_MARKER}{configured_left_prompt_string}{POST_PROMPT_MARKER}"
))
} else {
left_prompt_string
configured_left_prompt_string.into()
}
} else if config.shell_integration_osc133 {
Some(format!(
"{PRE_PROMPT_MARKER}{configured_left_prompt_string}{POST_PROMPT_MARKER}"
))
} else {
left_prompt_string
configured_left_prompt_string.into()
};
let right_prompt_string = get_prompt_string(PROMPT_COMMAND_RIGHT, config, engine_state, stack);


@ -1,6 +1,6 @@
use crate::{menus::NuMenuCompleter, NuHelpCompleter};
use crossterm::event::{KeyCode, KeyModifiers};
use log::trace;
use nu_ansi_term::Style;
use nu_color_config::{color_record_to_nustyle, lookup_ansi_color_style};
use nu_engine::eval_block;
use nu_parser::parse;
@ -75,15 +75,15 @@ const DEFAULT_HELP_MENU: &str = r#"
// Adds all menus to line editor
pub(crate) fn add_menus(
mut line_editor: Reedline,
-engine_state: Arc<EngineState>,
+engine_state_ref: Arc<EngineState>,
stack: &Stack,
config: &Config,
) -> Result<Reedline, ShellError> {
-trace!("add_menus: config: {:#?}", &config);
+//log::trace!("add_menus: config: {:#?}", &config);
line_editor = line_editor.clear_menus();
for menu in &config.menus {
-line_editor = add_menu(line_editor, menu, engine_state.clone(), stack, config)?
+line_editor = add_menu(line_editor, menu, engine_state_ref.clone(), stack, config)?
}
// Checking if the default menus have been added from the config file
@ -93,13 +93,16 @@ pub(crate) fn add_menus(
("help_menu", DEFAULT_HELP_MENU),
];
let mut engine_state = (*engine_state_ref).clone();
let mut menu_eval_results = vec![];
for (name, definition) in default_menus {
if !config
.menus
.iter()
.any(|menu| menu.name.to_expanded_string("", config) == name)
{
-let (block, _) = {
+let (block, delta) = {
let mut working_set = StateWorkingSet::new(&engine_state);
let output = parse(
&mut working_set,
@ -111,15 +114,31 @@ pub(crate) fn add_menus(
(output, working_set.render())
};
engine_state.merge_delta(delta)?;
let mut temp_stack = Stack::new().capture();
let input = PipelineData::Empty;
-let res = eval_block::<WithoutDebug>(&engine_state, &mut temp_stack, &block, input)?;
+menu_eval_results.push(eval_block::<WithoutDebug>(
+&engine_state,
+&mut temp_stack,
+&block,
+input,
+)?);
}
}
if let PipelineData::Value(value, None) = res {
for menu in create_menus(&value)? {
line_editor =
add_menu(line_editor, &menu, engine_state.clone(), stack, config)?;
}
let new_engine_state_ref = Arc::new(engine_state);
for res in menu_eval_results.into_iter() {
if let PipelineData::Value(value, None) = res {
for menu in create_menus(&value)? {
line_editor = add_menu(
line_editor,
&menu,
new_engine_state_ref.clone(),
stack,
config,
)?;
}
}
}
@ -158,21 +177,14 @@ fn add_menu(
}
}
macro_rules! add_style {
// first arm match add!(1,2), add!(2,3) etc
($name:expr, $record: expr, $span:expr, $config: expr, $menu:expr, $f:expr) => {
$menu = match extract_value($name, $record, $span) {
Ok(text) => {
let style = match text {
Value::String { val, .. } => lookup_ansi_color_style(&val),
Value::Record { .. } => color_record_to_nustyle(&text),
_ => lookup_ansi_color_style("green"),
};
$f($menu, style)
}
Err(_) => $menu,
};
};
fn get_style(record: &Record, name: &str, span: Span) -> Option<Style> {
extract_value(name, record, span)
.ok()
.map(|text| match text {
Value::String { val, .. } => lookup_ansi_color_style(val),
Value::Record { .. } => color_record_to_nustyle(text),
_ => lookup_ansi_color_style("green"),
})
}
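The refactor above replaces the `add_style!` macro with a plain `get_style` function, so each menu builder can use ordinary `if let` chains instead of macro expansion. A reduced model of the same shape, with stub types standing in for nushell's `Value`, `Record`, and `Style` (everything here is an assumption for illustration):

```rust
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Value {
    String(String),
    Record,
    Other,
}

#[derive(Debug, PartialEq)]
struct Style(&'static str);

// A record is modeled as key/value pairs; the lookup-then-match mirrors
// extract_value plus the match on Value::String / Value::Record above.
fn get_style(record: &[(&str, Value)], name: &str) -> Option<Style> {
    record
        .iter()
        .find(|(key, _)| *key == name)
        .map(|(_, value)| match value {
            Value::String(_) => Style("looked up by name"),
            Value::Record => Style("built from record"),
            _ => Style("green"), // same fallback as the original
        })
}

fn main() {
    let rec = [("text", Value::String("red_bold".into()))];
    assert_eq!(get_style(&rec, "text"), Some(Style("looked up by name")));
    assert_eq!(get_style(&rec, "selected_text"), None); // caller skips this style
}
```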
// Adds a columnar menu to the editor engine
@ -215,46 +227,21 @@ pub(crate) fn add_columnar_menu(
let span = menu.style.span();
if let Value::Record { val, .. } = &menu.style {
add_style!(
"text",
val,
span,
config,
columnar_menu,
ColumnarMenu::with_text_style
);
add_style!(
"selected_text",
val,
span,
config,
columnar_menu,
ColumnarMenu::with_selected_text_style
);
add_style!(
"description_text",
val,
span,
config,
columnar_menu,
ColumnarMenu::with_description_text_style
);
add_style!(
"match_text",
val,
span,
config,
columnar_menu,
ColumnarMenu::with_match_text_style
);
add_style!(
"selected_match_text",
val,
span,
config,
columnar_menu,
ColumnarMenu::with_selected_match_text_style
);
if let Some(style) = get_style(val, "text", span) {
columnar_menu = columnar_menu.with_text_style(style);
}
if let Some(style) = get_style(val, "selected_text", span) {
columnar_menu = columnar_menu.with_selected_text_style(style);
}
if let Some(style) = get_style(val, "description_text", span) {
columnar_menu = columnar_menu.with_description_text_style(style);
}
if let Some(style) = get_style(val, "match_text", span) {
columnar_menu = columnar_menu.with_match_text_style(style);
}
if let Some(style) = get_style(val, "selected_match_text", span) {
columnar_menu = columnar_menu.with_selected_match_text_style(style);
}
}
let marker = menu.marker.to_expanded_string("", config);
@ -313,30 +300,15 @@ pub(crate) fn add_list_menu(
let span = menu.style.span();
if let Value::Record { val, .. } = &menu.style {
add_style!(
"text",
val,
span,
config,
list_menu,
ListMenu::with_text_style
);
add_style!(
"selected_text",
val,
span,
config,
list_menu,
ListMenu::with_selected_text_style
);
add_style!(
"description_text",
val,
span,
config,
list_menu,
ListMenu::with_description_text_style
);
if let Some(style) = get_style(val, "text", span) {
list_menu = list_menu.with_text_style(style);
}
if let Some(style) = get_style(val, "selected_text", span) {
list_menu = list_menu.with_selected_text_style(style);
}
if let Some(style) = get_style(val, "description_text", span) {
list_menu = list_menu.with_description_text_style(style);
}
}
let marker = menu.marker.to_expanded_string("", config);
@ -520,46 +492,21 @@ pub(crate) fn add_ide_menu(
let span = menu.style.span();
if let Value::Record { val, .. } = &menu.style {
add_style!(
"text",
val,
span,
config,
ide_menu,
IdeMenu::with_text_style
);
add_style!(
"selected_text",
val,
span,
config,
ide_menu,
IdeMenu::with_selected_text_style
);
add_style!(
"description_text",
val,
span,
config,
ide_menu,
IdeMenu::with_description_text_style
);
add_style!(
"match_text",
val,
span,
config,
ide_menu,
IdeMenu::with_match_text_style
);
add_style!(
"selected_match_text",
val,
span,
config,
ide_menu,
IdeMenu::with_selected_match_text_style
);
if let Some(style) = get_style(val, "text", span) {
ide_menu = ide_menu.with_text_style(style);
}
if let Some(style) = get_style(val, "selected_text", span) {
ide_menu = ide_menu.with_selected_text_style(style);
}
if let Some(style) = get_style(val, "description_text", span) {
ide_menu = ide_menu.with_description_text_style(style);
}
if let Some(style) = get_style(val, "match_text", span) {
ide_menu = ide_menu.with_match_text_style(style);
}
if let Some(style) = get_style(val, "selected_match_text", span) {
ide_menu = ide_menu.with_selected_match_text_style(style);
}
}
let marker = menu.marker.to_expanded_string("", config);
@ -650,30 +597,15 @@ pub(crate) fn add_description_menu(
let span = menu.style.span();
if let Value::Record { val, .. } = &menu.style {
add_style!(
"text",
val,
span,
config,
description_menu,
DescriptionMenu::with_text_style
);
add_style!(
"selected_text",
val,
span,
config,
description_menu,
DescriptionMenu::with_selected_text_style
);
add_style!(
"description_text",
val,
span,
config,
description_menu,
DescriptionMenu::with_description_text_style
);
if let Some(style) = get_style(val, "text", span) {
description_menu = description_menu.with_text_style(style);
}
if let Some(style) = get_style(val, "selected_text", span) {
description_menu = description_menu.with_selected_text_style(style);
}
if let Some(style) = get_style(val, "description_text", span) {
description_menu = description_menu.with_description_text_style(style);
}
}
let marker = menu.marker.to_expanded_string("", config);


@ -1,3 +1,9 @@
use crate::prompt_update::{
POST_EXECUTION_MARKER_PREFIX, POST_EXECUTION_MARKER_SUFFIX, PRE_EXECUTION_MARKER,
RESET_APPLICATION_MODE, VSCODE_CWD_PROPERTY_MARKER_PREFIX, VSCODE_CWD_PROPERTY_MARKER_SUFFIX,
VSCODE_POST_EXECUTION_MARKER_PREFIX, VSCODE_POST_EXECUTION_MARKER_SUFFIX,
VSCODE_PRE_EXECUTION_MARKER,
};
use crate::{
completions::NuCompleter,
nu_highlight::NoOpHighlighter,
@ -14,14 +20,14 @@ use nu_cmd_base::{
util::{get_editor, get_guaranteed_cwd},
};
use nu_color_config::StyleComputer;
use nu_engine::{convert_env_values, env_to_strings};
#[allow(deprecated)]
use nu_engine::{convert_env_values, current_dir_str, env_to_strings};
use nu_parser::{lex, parse, trim_quotes_str};
use nu_protocol::{
config::NuCursorShape,
engine::{EngineState, Stack, StateWorkingSet},
eval_const::create_nu_constant,
report_error_new, HistoryConfig, HistoryFileFormat, PipelineData, ShellError, Span, Spanned,
Value, NU_VARIABLE_ID,
Value,
};
use nu_utils::{
filesystem::{have_permission, PermissionResult},
@ -42,16 +48,6 @@ use std::{
};
use sysinfo::System;
// According to Daniel Imms @Tyriar, we need to do these this way:
// <133 A><prompt><133 B><command><133 C><command output>
// These first two have been moved to prompt_update to get as close as possible to the prompt.
// const PRE_PROMPT_MARKER: &str = "\x1b]133;A\x1b\\";
// const POST_PROMPT_MARKER: &str = "\x1b]133;B\x1b\\";
const PRE_EXECUTE_MARKER: &str = "\x1b]133;C\x1b\\";
// This one is in get_command_finished_marker() now so we can capture the exit codes properly.
// const CMD_FINISHED_MARKER: &str = "\x1b]133;D;{}\x1b\\";
const RESET_APPLICATION_MODE: &str = "\x1b[?1l";
/// The main REPL loop, including spinning up the prompt itself.
pub fn evaluate_repl(
engine_state: &mut EngineState,
@ -66,7 +62,7 @@ pub fn evaluate_repl(
// so that it may be read by various reedline plugins. During this, we
// can't modify the stack, but at the end of the loop we take back ownership
// from the Arc. This lets us avoid copying stack variables needlessly
-let mut unique_stack = stack;
+let mut unique_stack = stack.clone();
let config = engine_state.get_config();
let use_color = config.use_ansi_coloring;
@ -74,12 +70,23 @@ pub fn evaluate_repl(
let mut entry_num = 0;
let shell_integration = config.shell_integration;
let nu_prompt = NushellPrompt::new(shell_integration);
// Let's grab the shell_integration configs
let shell_integration_osc2 = config.shell_integration_osc2;
let shell_integration_osc7 = config.shell_integration_osc7;
let shell_integration_osc9_9 = config.shell_integration_osc9_9;
let shell_integration_osc133 = config.shell_integration_osc133;
let shell_integration_osc633 = config.shell_integration_osc633;
let nu_prompt = NushellPrompt::new(
shell_integration_osc133,
shell_integration_osc633,
engine_state.clone(),
stack.clone(),
);
let start_time = std::time::Instant::now();
// Translate environment variables from Strings to Values
-if let Some(e) = convert_env_values(engine_state, &unique_stack) {
+if let Err(e) = convert_env_values(engine_state, &unique_stack) {
report_error_new(engine_state, &e);
}
perf(
@ -116,15 +123,28 @@ pub fn evaluate_repl(
}
let hostname = System::host_name();
if shell_integration {
shell_integration_osc_7_633_2(hostname.as_deref(), engine_state, &mut unique_stack);
if shell_integration_osc2 {
run_shell_integration_osc2(None, engine_state, &mut unique_stack, use_color);
}
if shell_integration_osc7 {
run_shell_integration_osc7(
hostname.as_deref(),
engine_state,
&mut unique_stack,
use_color,
);
}
if shell_integration_osc9_9 {
run_shell_integration_osc9_9(engine_state, &mut unique_stack, use_color);
}
if shell_integration_osc633 {
run_shell_integration_osc633(engine_state, &mut unique_stack, use_color);
}
engine_state.set_startup_time(entire_start_time.elapsed().as_nanos() as i64);
// Regenerate the $nu constant to contain the startup time and any other potential updates
-let nu_const = create_nu_constant(engine_state, Span::unknown())?;
-engine_state.set_variable_const_val(NU_VARIABLE_ID, nu_const);
+engine_state.generate_nu_constant();
if load_std_lib.is_none() && engine_state.get_config().show_banner {
eval_source(
@ -367,7 +387,7 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
.with_completer(Box::new(NuCompleter::new(
engine_reference.clone(),
// STACK-REFERENCE 2
-Stack::with_parent(stack_arc.clone()),
+stack_arc.clone(),
)))
.with_quick_completions(config.quick_completions)
.with_partial_completions(config.partial_completions)
@ -513,9 +533,18 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
.with_highlighter(Box::<NoOpHighlighter>::default())
// CLEAR STACK-REFERENCE 2
.with_completer(Box::<DefaultCompleter>::default());
let shell_integration = config.shell_integration;
let mut stack = Stack::unwrap_unique(stack_arc);
// Let's grab the shell_integration configs
let shell_integration_osc2 = config.shell_integration_osc2;
let shell_integration_osc7 = config.shell_integration_osc7;
let shell_integration_osc9_9 = config.shell_integration_osc9_9;
let shell_integration_osc133 = config.shell_integration_osc133;
let shell_integration_osc633 = config.shell_integration_osc633;
let shell_integration_reset_application_mode = config.shell_integration_reset_application_mode;
// TODO: we may clone the stack, this can lead to major performance issues
// so we should avoid it or making stack cheaper to clone.
let mut stack = Arc::unwrap_or_clone(stack_arc);
perf(
"line_editor setup",
@ -575,10 +604,40 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
repl.buffer = line_editor.current_buffer_contents().to_string();
drop(repl);
if shell_integration {
if shell_integration_osc633 {
if stack.get_env_var(engine_state, "TERM_PROGRAM")
== Some(Value::test_string("vscode"))
{
start_time = Instant::now();
run_ansi_sequence(VSCODE_PRE_EXECUTION_MARKER);
perf(
"pre_execute_marker (633;C) ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
} else if shell_integration_osc133 {
start_time = Instant::now();
run_ansi_sequence(PRE_EXECUTION_MARKER);
perf(
"pre_execute_marker (133;C) ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
} else if shell_integration_osc133 {
start_time = Instant::now();
-run_ansi_sequence(PRE_EXECUTE_MARKER);
+run_ansi_sequence(PRE_EXECUTION_MARKER);
perf(
"pre_execute_marker (133;C) ansi escape sequence",
@ -598,20 +657,13 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
ReplOperation::AutoCd { cwd, target, span } => {
do_auto_cd(target, cwd, &mut stack, engine_state, span);
if shell_integration {
start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(&stack, engine_state));
perf(
"post_execute_marker (133;D) ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
run_finaliziation_ansi_sequence(
&stack,
engine_state,
use_color,
shell_integration_osc633,
shell_integration_osc133,
);
}
ReplOperation::RunCommand(cmd) => {
line_editor = do_run_cmd(
@ -619,25 +671,18 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
&mut stack,
engine_state,
line_editor,
shell_integration,
shell_integration_osc2,
*entry_num,
use_color,
);
if shell_integration {
start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(&stack, engine_state));
perf(
"post_execute_marker (133;D) ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
run_finaliziation_ansi_sequence(
&stack,
engine_state,
use_color,
shell_integration_osc633,
shell_integration_osc133,
);
}
// as the name implies, we do nothing in this case
ReplOperation::DoNothing => {}
@ -663,56 +708,45 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
}
}
if shell_integration {
start_time = Instant::now();
shell_integration_osc_7_633_2(hostname, engine_state, &mut stack);
perf(
"shell_integration_finalize ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
if shell_integration_osc2 {
run_shell_integration_osc2(None, engine_state, &mut stack, use_color);
}
if shell_integration_osc7 {
run_shell_integration_osc7(hostname, engine_state, &mut stack, use_color);
}
if shell_integration_osc9_9 {
run_shell_integration_osc9_9(engine_state, &mut stack, use_color);
}
if shell_integration_osc633 {
run_shell_integration_osc633(engine_state, &mut stack, use_color);
}
if shell_integration_reset_application_mode {
run_shell_integration_reset_application_mode();
}
flush_engine_state_repl_buffer(engine_state, &mut line_editor);
}
Ok(Signal::CtrlC) => {
// `Reedline` clears the line content. New prompt is shown
if shell_integration {
start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(&stack, engine_state));
perf(
"command_finished_marker ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
run_finaliziation_ansi_sequence(
&stack,
engine_state,
use_color,
shell_integration_osc633,
shell_integration_osc133,
);
}
Ok(Signal::CtrlD) => {
// When exiting clear to a new line
if shell_integration {
start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(&stack, engine_state));
run_finaliziation_ansi_sequence(
&stack,
engine_state,
use_color,
shell_integration_osc633,
shell_integration_osc133,
);
perf(
"command_finished_marker ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
println!();
return (false, stack, line_editor);
}
@ -725,20 +759,14 @@ fn loop_iteration(ctx: LoopContext) -> (bool, Stack, Reedline) {
// e.g. https://github.com/nushell/nushell/issues/6452
// Alternatively only allow that expected failures let the REPL loop
}
if shell_integration {
start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(&stack, engine_state));
perf(
"command_finished_marker ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
run_finaliziation_ansi_sequence(
&stack,
engine_state,
use_color,
shell_integration_osc633,
shell_integration_osc133,
);
}
}
perf(
@ -772,6 +800,7 @@ fn prepare_history_metadata(
line_editor: &mut Reedline,
) {
if !s.is_empty() && line_editor.has_last_command_context() {
#[allow(deprecated)]
let result = line_editor
.update_last_command_context(&|mut c| {
c.start_timestamp = Some(chrono::Utc::now());
@ -842,7 +871,8 @@ fn parse_operation(
) -> Result<ReplOperation, ErrReport> {
let tokens = lex(s.as_bytes(), 0, &[], &[], false);
// Check if this is a single call to a directory, if so auto-cd
-let cwd = nu_engine::env::current_dir_str(engine_state, stack)?;
+#[allow(deprecated)]
+let cwd = nu_engine::env::current_dir_str(engine_state, stack).unwrap_or_default();
let mut orig = s.clone();
if orig.starts_with('`') {
orig = trim_quotes_str(&orig).to_string()
@ -882,8 +912,6 @@ fn do_auto_cd(
},
);
}
let path = nu_path::canonicalize_with(path, &cwd)
.expect("internal error: cannot canonicalize known path");
path.to_string_lossy().to_string()
};
@ -901,7 +929,10 @@ fn do_auto_cd(
//FIXME: this only changes the current scope, but instead this environment variable
//should probably be a block that loads the information from the state in the overlay
stack.add_env_var("PWD".into(), Value::string(path.clone(), Span::unknown()));
if let Err(err) = stack.set_cwd(&path) {
report_error_new(engine_state, &err);
return;
};
let cwd = Value::string(cwd, span);
let shells = stack.get_env_var(engine_state, "NUSHELL_SHELLS");
@ -946,7 +977,7 @@ fn do_run_cmd(
// we pass in the line editor so it can be dropped in the case of a process exit
// (in the normal case we don't want to drop it so return it as-is otherwise)
line_editor: Reedline,
-shell_integration: bool,
+shell_integration_osc2: bool,
entry_num: usize,
use_color: bool,
) -> Reedline {
@ -973,39 +1004,8 @@ fn do_run_cmd(
}
}
if shell_integration {
let start_time = Instant::now();
if let Some(cwd) = stack.get_env_var(engine_state, "PWD") {
match cwd.coerce_into_string() {
Ok(path) => {
// Try to abbreviate the string for the window title
let maybe_abbrev_path = if let Some(p) = nu_path::home_dir() {
path.replace(&p.as_path().display().to_string(), "~")
} else {
path
};
let binary_name = s.split_whitespace().next();
if let Some(binary_name) = binary_name {
run_ansi_sequence(&format!(
"\x1b]2;{maybe_abbrev_path}> {binary_name}\x07"
));
}
}
Err(e) => {
warn!("Could not coerce working directory to string {e}");
}
}
}
perf(
"set title with command ansi escape sequence",
start_time,
file!(),
line!(),
column!(),
use_color,
);
if shell_integration_osc2 {
run_shell_integration_osc2(Some(s), engine_state, stack, use_color);
}
eval_source(
@ -1025,54 +1025,136 @@ fn do_run_cmd(
/// can have more information about what is going on (both on startup and after we have
/// run a command)
///
-fn shell_integration_osc_7_633_2(
+fn run_shell_integration_osc2(
command_name: Option<&str>,
engine_state: &EngineState,
stack: &mut Stack,
use_color: bool,
) {
#[allow(deprecated)]
if let Ok(path) = current_dir_str(engine_state, stack) {
let start_time = Instant::now();
// Try to abbreviate the string for the window title
let maybe_abbrev_path = if let Some(p) = nu_path::home_dir() {
path.replace(&p.as_path().display().to_string(), "~")
} else {
path
};
let title = match command_name {
Some(binary_name) => {
let split_binary_name = binary_name.split_whitespace().next();
if let Some(binary_name) = split_binary_name {
format!("{maybe_abbrev_path}> {binary_name}")
} else {
maybe_abbrev_path.to_string()
}
}
None => maybe_abbrev_path.to_string(),
};
// Set window title too
// https://tldp.org/HOWTO/Xterm-Title-3.html
// ESC]0;stringBEL -- Set icon name and window title to string
// ESC]1;stringBEL -- Set icon name to string
// ESC]2;stringBEL -- Set window title to string
run_ansi_sequence(&format!("\x1b]2;{title}\x07"));
perf(
"set title with command osc2",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
}
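For reference, the OSC 2 sequence assembled above is the classic xterm window-title escape. A hedged sketch with a simplified home-directory abbreviation (`set_title_sequence` is an illustrative name; the real code uses `nu_path::home_dir` and a plain `replace`):

```rust
/// Build an OSC 2 window-title sequence: ESC ] 2 ; <title> BEL.
fn set_title_sequence(path: &str, home: &str) -> String {
    // Abbreviate the home prefix to "~", as the code above does.
    let abbrev = path.replacen(home, "~", 1);
    format!("\x1b]2;{abbrev}\x07")
}

fn main() {
    assert_eq!(
        set_title_sequence("/home/user/src", "/home/user"),
        "\x1b]2;~/src\x07"
    );
}
```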
fn run_shell_integration_osc7(
hostname: Option<&str>,
engine_state: &EngineState,
stack: &mut Stack,
use_color: bool,
) {
if let Some(cwd) = stack.get_env_var(engine_state, "PWD") {
match cwd.coerce_into_string() {
Ok(path) => {
// Supported escape sequences of Microsoft's Visual Studio Code (vscode)
// https://code.visualstudio.com/docs/terminal/shell-integration#_supported-escape-sequences
if stack.get_env_var(engine_state, "TERM_PROGRAM")
== Some(Value::test_string("vscode"))
{
// If we're in vscode, run their specific ansi escape sequence.
// This is helpful for ctrl+g to change directories in the terminal.
run_ansi_sequence(&format!("\x1b]633;P;Cwd={}\x1b\\", path));
} else {
// Otherwise, communicate the path as OSC 7 (often used for spawning new tabs in the same dir)
run_ansi_sequence(&format!(
"\x1b]7;file://{}{}{}\x1b\\",
percent_encoding::utf8_percent_encode(
hostname.unwrap_or("localhost"),
percent_encoding::CONTROLS
),
if path.starts_with('/') { "" } else { "/" },
percent_encoding::utf8_percent_encode(&path, percent_encoding::CONTROLS)
));
}
#[allow(deprecated)]
if let Ok(path) = current_dir_str(engine_state, stack) {
let start_time = Instant::now();
// Try to abbreviate the string for the window title
let maybe_abbrev_path = if let Some(p) = nu_path::home_dir() {
path.replace(&p.as_path().display().to_string(), "~")
} else {
path
};
// Otherwise, communicate the path as OSC 7 (often used for spawning new tabs in the same dir)
run_ansi_sequence(&format!(
"\x1b]7;file://{}{}{}\x1b\\",
percent_encoding::utf8_percent_encode(
hostname.unwrap_or("localhost"),
percent_encoding::CONTROLS
),
if path.starts_with('/') { "" } else { "/" },
percent_encoding::utf8_percent_encode(&path, percent_encoding::CONTROLS)
));
// Set window title too
// https://tldp.org/HOWTO/Xterm-Title-3.html
// ESC]0;stringBEL -- Set icon name and window title to string
// ESC]1;stringBEL -- Set icon name to string
// ESC]2;stringBEL -- Set window title to string
run_ansi_sequence(&format!("\x1b]2;{maybe_abbrev_path}\x07"));
}
Err(e) => {
warn!("Could not coerce working directory to string {e}");
}
perf(
"communicate path to terminal with osc7",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
}
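A standalone sketch of the OSC 7 payload built above, assuming the `percent-encoding` crate as a dependency (the same `utf8_percent_encode`/`CONTROLS` calls the function itself uses); `make_osc7` is an illustrative helper name:

```rust
use percent_encoding::{utf8_percent_encode, CONTROLS};

/// Emit ESC ] 7 ; file://<host><path> ESC \ so terminals can track the
/// shell's working directory (e.g. "duplicate tab in same dir").
fn make_osc7(hostname: &str, path: &str) -> String {
    format!(
        "\x1b]7;file://{}{}{}\x1b\\",
        utf8_percent_encode(hostname, CONTROLS),
        // OSC 7 expects an absolute path; prepend "/" if it is missing.
        if path.starts_with('/') { "" } else { "/" },
        utf8_percent_encode(path, CONTROLS)
    )
}

fn main() {
    let seq = make_osc7("localhost", "/home/user/projects");
    assert!(seq.starts_with("\x1b]7;file://localhost/home/user/projects"));
}
```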
fn run_shell_integration_osc9_9(engine_state: &EngineState, stack: &mut Stack, use_color: bool) {
#[allow(deprecated)]
if let Ok(path) = current_dir_str(engine_state, stack) {
let start_time = Instant::now();
// Communicate the path as OSC 9;9 from ConEmu (often used for spawning new tabs in the same dir)
// This is helpful in Windows Terminal with Duplicate Tab
run_ansi_sequence(&format!(
"\x1b]9;9;{}\x1b\\",
percent_encoding::utf8_percent_encode(&path, percent_encoding::CONTROLS)
));
perf(
"communicate path to terminal with osc9;9",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
}
fn run_shell_integration_osc633(engine_state: &EngineState, stack: &mut Stack, use_color: bool) {
#[allow(deprecated)]
if let Ok(path) = current_dir_str(engine_state, stack) {
// Supported escape sequences of Microsoft's Visual Studio Code (vscode)
// https://code.visualstudio.com/docs/terminal/shell-integration#_supported-escape-sequences
if stack.get_env_var(engine_state, "TERM_PROGRAM") == Some(Value::test_string("vscode")) {
let start_time = Instant::now();
// If we're in vscode, run their specific ansi escape sequence.
// This is helpful for ctrl+g to change directories in the terminal.
run_ansi_sequence(&format!(
"{}{}{}",
VSCODE_CWD_PROPERTY_MARKER_PREFIX, path, VSCODE_CWD_PROPERTY_MARKER_SUFFIX
));
perf(
"communicate path to terminal with osc633;P",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
}
}
fn run_shell_integration_reset_application_mode() {
run_ansi_sequence(RESET_APPLICATION_MODE);
}
@ -1219,12 +1301,47 @@ fn map_nucursorshape_to_cursorshape(shape: NuCursorShape) -> Option<SetCursorSty
}
}
-fn get_command_finished_marker(stack: &Stack, engine_state: &EngineState) -> String {
+fn get_command_finished_marker(
stack: &Stack,
engine_state: &EngineState,
shell_integration_osc633: bool,
shell_integration_osc133: bool,
) -> String {
let exit_code = stack
.get_env_var(engine_state, "LAST_EXIT_CODE")
.and_then(|e| e.as_i64().ok());
format!("\x1b]133;D;{}\x1b\\", exit_code.unwrap_or(0))
if shell_integration_osc633 {
if stack.get_env_var(engine_state, "TERM_PROGRAM") == Some(Value::test_string("vscode")) {
// We're in vscode and we have osc633 enabled
format!(
"{}{}{}",
VSCODE_POST_EXECUTION_MARKER_PREFIX,
exit_code.unwrap_or(0),
VSCODE_POST_EXECUTION_MARKER_SUFFIX
)
} else if shell_integration_osc133 {
// If we're in VSCode but we don't find the env var, just return the regular markers
format!(
"{}{}{}",
POST_EXECUTION_MARKER_PREFIX,
exit_code.unwrap_or(0),
POST_EXECUTION_MARKER_SUFFIX
)
} else {
// We're not in vscode, so we don't need to do anything special
"\x1b[0m".to_string()
}
} else if shell_integration_osc133 {
format!(
"{}{}{}",
POST_EXECUTION_MARKER_PREFIX,
exit_code.unwrap_or(0),
POST_EXECUTION_MARKER_SUFFIX
)
} else {
"\x1b[0m".to_string()
}
}
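Concretely, `get_command_finished_marker` now yields one of three strings. A hedged reduction of the possible outputs, where the boolean arguments stand in for the config flags and the TERM_PROGRAM check (illustrative only):

```rust
/// Which "command finished" sequence is emitted for a given exit code,
/// following the prefix/suffix constants defined earlier.
fn finished_marker(exit_code: i64, osc633: bool, osc133: bool) -> String {
    if osc633 {
        format!("\x1b]633;D;{exit_code}\x1b\\") // VSCode variant
    } else if osc133 {
        format!("\x1b]133;D;{exit_code}\x1b\\") // generic variant
    } else {
        "\x1b[0m".to_string() // no integration: just reset styling
    }
}

fn main() {
    assert_eq!(finished_marker(0, false, true), "\x1b]133;D;0\x1b\\");
    assert_eq!(finished_marker(1, true, false), "\x1b]633;D;1\x1b\\");
}
```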
fn run_ansi_sequence(seq: &str) {
@ -1235,6 +1352,73 @@ fn run_ansi_sequence(seq: &str) {
}
}
fn run_finaliziation_ansi_sequence(
stack: &Stack,
engine_state: &EngineState,
use_color: bool,
shell_integration_osc633: bool,
shell_integration_osc133: bool,
) {
if shell_integration_osc633 {
// Only run osc633 if we are in vscode
if stack.get_env_var(engine_state, "TERM_PROGRAM") == Some(Value::test_string("vscode")) {
let start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(
stack,
engine_state,
shell_integration_osc633,
shell_integration_osc133,
));
perf(
"post_execute_marker (633;D) ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
} else if shell_integration_osc133 {
let start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(
stack,
engine_state,
shell_integration_osc633,
shell_integration_osc133,
));
perf(
"post_execute_marker (133;D) ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
} else if shell_integration_osc133 {
let start_time = Instant::now();
run_ansi_sequence(&get_command_finished_marker(
stack,
engine_state,
shell_integration_osc633,
shell_integration_osc133,
));
perf(
"post_execute_marker (133;D) ansi escape sequences",
start_time,
file!(),
line!(),
column!(),
use_color,
);
}
}
// Absolute paths with a drive letter, like 'C:', 'D:\', 'E:\foo'
#[cfg(windows)]
static DRIVE_PATH_REGEX: once_cell::sync::Lazy<fancy_regex::Regex> =
@ -1300,3 +1484,136 @@ fn are_session_ids_in_sync() {
engine_state.history_session_id
);
}
#[cfg(test)]
mod test_auto_cd {
use super::{do_auto_cd, parse_operation, ReplOperation};
use nu_protocol::engine::{EngineState, Stack};
use std::path::Path;
use tempfile::tempdir;
/// Create a symlink. Works on both Unix and Windows.
#[cfg(any(unix, windows))]
fn symlink(original: impl AsRef<Path>, link: impl AsRef<Path>) -> std::io::Result<()> {
#[cfg(unix)]
{
std::os::unix::fs::symlink(original, link)
}
#[cfg(windows)]
{
if original.as_ref().is_dir() {
std::os::windows::fs::symlink_dir(original, link)
} else {
std::os::windows::fs::symlink_file(original, link)
}
}
}
/// Run one test case on the auto-cd feature. PWD is initially set to
/// `before`, and after `input` is parsed and evaluated, PWD should be
/// changed to `after`.
#[track_caller]
fn check(before: impl AsRef<Path>, input: &str, after: impl AsRef<Path>) {
// Setup EngineState and Stack.
let mut engine_state = EngineState::new();
let mut stack = Stack::new();
stack.set_cwd(before).unwrap();
// Parse the input. It must be an auto-cd operation.
let op = parse_operation(input.to_string(), &engine_state, &stack).unwrap();
let ReplOperation::AutoCd { cwd, target, span } = op else {
panic!("'{}' was not parsed into an auto-cd operation", input)
};
// Perform the auto-cd operation.
do_auto_cd(target, cwd, &mut stack, &mut engine_state, span);
let updated_cwd = engine_state.cwd(Some(&stack)).unwrap();
// Check that `updated_cwd` and `after` point to the same place. They
// don't have to be byte-wise equal (on Windows, the 8.3 filename
// conversion messes things up), so we canonicalize both before comparing.
let updated_cwd = std::fs::canonicalize(updated_cwd).unwrap();
let after = std::fs::canonicalize(after).unwrap();
assert_eq!(updated_cwd, after);
}
#[test]
fn auto_cd_root() {
let tempdir = tempdir().unwrap();
let root = if cfg!(windows) { r"C:\" } else { "/" };
check(&tempdir, root, root);
}
#[test]
fn auto_cd_tilde() {
let tempdir = tempdir().unwrap();
let home = nu_path::home_dir().unwrap();
check(&tempdir, "~", home);
}
#[test]
fn auto_cd_dot() {
let tempdir = tempdir().unwrap();
check(&tempdir, ".", &tempdir);
}
#[test]
fn auto_cd_double_dot() {
let tempdir = tempdir().unwrap();
let dir = tempdir.path().join("foo");
std::fs::create_dir_all(&dir).unwrap();
check(dir, "..", &tempdir);
}
#[test]
fn auto_cd_triple_dot() {
let tempdir = tempdir().unwrap();
let dir = tempdir.path().join("foo").join("bar");
std::fs::create_dir_all(&dir).unwrap();
check(dir, "...", &tempdir);
}
#[test]
fn auto_cd_relative() {
let tempdir = tempdir().unwrap();
let foo = tempdir.path().join("foo");
let bar = tempdir.path().join("bar");
std::fs::create_dir_all(&foo).unwrap();
std::fs::create_dir_all(&bar).unwrap();
let input = if cfg!(windows) { r"..\bar" } else { "../bar" };
check(foo, input, bar);
}
#[test]
fn auto_cd_trailing_slash() {
let tempdir = tempdir().unwrap();
let dir = tempdir.path().join("foo");
std::fs::create_dir_all(&dir).unwrap();
let input = if cfg!(windows) { r"foo\" } else { "foo/" };
check(&tempdir, input, dir);
}
#[test]
fn auto_cd_symlink() {
let tempdir = tempdir().unwrap();
let dir = tempdir.path().join("foo");
std::fs::create_dir_all(&dir).unwrap();
let link = tempdir.path().join("link");
symlink(&dir, &link).unwrap();
let input = if cfg!(windows) { r".\link" } else { "./link" };
check(&tempdir, input, link);
}
#[test]
#[should_panic(expected = "was not parsed into an auto-cd operation")]
fn auto_cd_nonexistent_directory() {
let tempdir = tempdir().unwrap();
let dir = tempdir.path().join("foo");
let input = if cfg!(windows) { r"foo\" } else { "foo/" };
check(&tempdir, input, dir);
}
}


@ -4,7 +4,7 @@ use nu_color_config::{get_matching_brackets_style, get_shape_color};
use nu_engine::env;
use nu_parser::{flatten_block, parse, FlatShape};
use nu_protocol::{
ast::{Argument, Block, Expr, Expression, PipelineRedirection, RecordItem},
ast::{Block, Expr, Expression, PipelineRedirection, RecordItem},
engine::{EngineState, Stack, StateWorkingSet},
Config, Span,
};
@ -37,6 +37,7 @@ impl Highlighter for NuHighlighter {
let str_word = String::from_utf8_lossy(str_contents).to_string();
let paths = env::path_str(&self.engine_state, &self.stack, *span).ok();
#[allow(deprecated)]
let res = if let Ok(cwd) =
env::current_dir_str(&self.engine_state, &self.stack)
{
@ -86,29 +87,8 @@ impl Highlighter for NuHighlighter {
[(shape.0.start - global_span_offset)..(shape.0.end - global_span_offset)]
.to_string();
macro_rules! add_colored_token_with_bracket_highlight {
($shape:expr, $span:expr, $text:expr) => {{
let spans = split_span_by_highlight_positions(
line,
$span,
&matching_brackets_pos,
global_span_offset,
);
spans.iter().for_each(|(part, highlight)| {
let start = part.start - $span.start;
let end = part.end - $span.start;
let text = (&next_token[start..end]).to_string();
let mut style = get_shape_color($shape.to_string(), &self.config);
if *highlight {
style = get_matching_brackets_style(style, &self.config);
}
output.push((style, text));
});
}};
}
let mut add_colored_token = |shape: &FlatShape, text: String| {
-output.push((get_shape_color(shape.to_string(), &self.config), text));
+output.push((get_shape_color(shape.as_str(), &self.config), text));
};
match shape.1 {
@ -128,27 +108,37 @@ impl Highlighter for NuHighlighter {
FlatShape::Operator => add_colored_token(&shape.1, next_token),
FlatShape::Signature => add_colored_token(&shape.1, next_token),
FlatShape::String => add_colored_token(&shape.1, next_token),
FlatShape::RawString => add_colored_token(&shape.1, next_token),
FlatShape::StringInterpolation => add_colored_token(&shape.1, next_token),
FlatShape::DateTime => add_colored_token(&shape.1, next_token),
FlatShape::List => {
add_colored_token_with_bracket_highlight!(shape.1, shape.0, next_token)
}
FlatShape::Table => {
add_colored_token_with_bracket_highlight!(shape.1, shape.0, next_token)
}
FlatShape::Record => {
add_colored_token_with_bracket_highlight!(shape.1, shape.0, next_token)
}
FlatShape::Block => {
add_colored_token_with_bracket_highlight!(shape.1, shape.0, next_token)
}
FlatShape::Closure => {
add_colored_token_with_bracket_highlight!(shape.1, shape.0, next_token)
FlatShape::List
| FlatShape::Table
| FlatShape::Record
| FlatShape::Block
| FlatShape::Closure => {
let span = shape.0;
let shape = &shape.1;
let spans = split_span_by_highlight_positions(
line,
span,
&matching_brackets_pos,
global_span_offset,
);
for (part, highlight) in spans {
let start = part.start - span.start;
let end = part.end - span.start;
let text = next_token[start..end].to_string();
let mut style = get_shape_color(shape.as_str(), &self.config);
if highlight {
style = get_matching_brackets_style(style, &self.config);
}
output.push((style, text));
}
}
FlatShape::Filepath => add_colored_token(&shape.1, next_token),
FlatShape::Directory => add_colored_token(&shape.1, next_token),
FlatShape::GlobInterpolation => add_colored_token(&shape.1, next_token),
FlatShape::GlobPattern => add_colored_token(&shape.1, next_token),
FlatShape::Variable(_) | FlatShape::VarDecl(_) => {
add_colored_token(&shape.1, next_token)
@ -310,20 +300,6 @@ fn find_matching_block_end_in_expr(
global_span_offset: usize,
global_cursor_offset: usize,
) -> Option<usize> {
macro_rules! find_in_expr_or_continue {
($inner_expr:ident) => {
if let Some(pos) = find_matching_block_end_in_expr(
line,
working_set,
$inner_expr,
global_span_offset,
global_cursor_offset,
) {
return Some(pos);
}
};
}
if expression.span.contains(global_cursor_offset) && expression.span.start >= global_span_offset
{
let expr_first = expression.span.start;
@ -353,6 +329,7 @@ fn find_matching_block_end_in_expr(
Expr::Directory(_, _) => None,
Expr::GlobPattern(_, _) => None,
Expr::String(_) => None,
Expr::RawString(_) => None,
Expr::CellPath(_) => None,
Expr::ImportPattern(_) => None,
Expr::Overlay(_) => None,
@ -370,15 +347,19 @@ fn find_matching_block_end_in_expr(
Some(expr_last)
} else {
// cursor is inside table
for inner_expr in table.columns.as_ref() {
find_in_expr_or_continue!(inner_expr);
}
for row in table.rows.as_ref() {
for inner_expr in row.as_ref() {
find_in_expr_or_continue!(inner_expr);
}
}
None
table
.columns
.iter()
.chain(table.rows.iter().flat_map(AsRef::as_ref))
.find_map(|expr| {
find_matching_block_end_in_expr(
line,
working_set,
expr,
global_span_offset,
global_cursor_offset,
)
})
}
}
@ -391,36 +372,45 @@ fn find_matching_block_end_in_expr(
Some(expr_last)
} else {
// cursor is inside record
for expr in exprs {
match expr {
RecordItem::Pair(k, v) => {
find_in_expr_or_continue!(k);
find_in_expr_or_continue!(v);
}
RecordItem::Spread(_, record) => {
find_in_expr_or_continue!(record);
}
}
}
None
exprs.iter().find_map(|expr| match expr {
RecordItem::Pair(k, v) => find_matching_block_end_in_expr(
line,
working_set,
k,
global_span_offset,
global_cursor_offset,
)
.or_else(|| {
find_matching_block_end_in_expr(
line,
working_set,
v,
global_span_offset,
global_cursor_offset,
)
}),
RecordItem::Spread(_, record) => find_matching_block_end_in_expr(
line,
working_set,
record,
global_span_offset,
global_cursor_offset,
),
})
}
}
Expr::Call(call) => {
for arg in &call.arguments {
let opt_expr = match arg {
Argument::Named((_, _, opt_expr)) => opt_expr.as_ref(),
Argument::Positional(inner_expr) => Some(inner_expr),
Argument::Unknown(inner_expr) => Some(inner_expr),
Argument::Spread(inner_expr) => Some(inner_expr),
};
if let Some(inner_expr) = opt_expr {
find_in_expr_or_continue!(inner_expr);
}
}
None
}
Expr::Call(call) => call.arguments.iter().find_map(|arg| {
arg.expr().and_then(|expr| {
find_matching_block_end_in_expr(
line,
working_set,
expr,
global_span_offset,
global_cursor_offset,
)
})
}),
Expr::FullCellPath(b) => find_matching_block_end_in_expr(
line,
@ -430,12 +420,15 @@ fn find_matching_block_end_in_expr(
global_cursor_offset,
),
Expr::BinaryOp(lhs, op, rhs) => {
find_in_expr_or_continue!(lhs);
find_in_expr_or_continue!(op);
find_in_expr_or_continue!(rhs);
None
}
Expr::BinaryOp(lhs, op, rhs) => [lhs, op, rhs].into_iter().find_map(|expr| {
find_matching_block_end_in_expr(
line,
working_set,
expr,
global_span_offset,
global_cursor_offset,
)
}),
Expr::Block(block_id)
| Expr::Closure(block_id)
@ -460,11 +453,16 @@ fn find_matching_block_end_in_expr(
}
}
Expr::StringInterpolation(inner_expr) => {
for inner_expr in inner_expr {
find_in_expr_or_continue!(inner_expr);
}
None
Expr::StringInterpolation(exprs) | Expr::GlobInterpolation(exprs, _) => {
exprs.iter().find_map(|expr| {
find_matching_block_end_in_expr(
line,
working_set,
expr,
global_span_offset,
global_cursor_offset,
)
})
}
Expr::List(list) => {
@ -475,12 +473,15 @@ fn find_matching_block_end_in_expr(
// cursor is at list start
Some(expr_last)
} else {
// cursor is inside list
for item in list {
let expr = item.expr();
find_in_expr_or_continue!(expr);
}
None
list.iter().find_map(|item| {
find_matching_block_end_in_expr(
line,
working_set,
item.expr(),
global_span_offset,
global_cursor_offset,
)
})
}
}
};


@ -4,7 +4,7 @@ use nu_parser::{escape_quote_string, lex, parse, unescape_unquote_string, Token,
use nu_protocol::{
debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet},
-print_if_stream, report_error, report_error_new, PipelineData, ShellError, Span, Value,
+report_error, report_error_new, PipelineData, ShellError, Span, Value,
};
#[cfg(windows)]
use nu_utils::enable_vt_processing;
@ -39,9 +39,8 @@ fn gather_env_vars(
init_cwd: &Path,
) {
fn report_capture_error(engine_state: &EngineState, env_str: &str, msg: &str) {
-let working_set = StateWorkingSet::new(engine_state);
-report_error(
-&working_set,
+report_error_new(
+engine_state,
&ShellError::GenericError {
error: format!("Environment variable was not captured: {env_str}"),
msg: "".into(),
@ -71,9 +70,8 @@ fn gather_env_vars(
}
None => {
// Could not capture current working directory
-let working_set = StateWorkingSet::new(engine_state);
-report_error(
-&working_set,
+report_error_new(
+engine_state,
&ShellError::GenericError {
error: "Current directory is not a valid utf-8 path".into(),
msg: "".into(),
@ -208,9 +206,48 @@ pub fn eval_source(
fname: &str,
input: PipelineData,
allow_return: bool,
-) -> bool {
+) -> i32 {
let start_time = std::time::Instant::now();
let exit_code = match evaluate_source(engine_state, stack, source, fname, input, allow_return) {
Ok(code) => code.unwrap_or(0),
Err(err) => {
report_error_new(engine_state, &err);
1
}
};
stack.add_env_var(
"LAST_EXIT_CODE".to_string(),
Value::int(exit_code.into(), Span::unknown()),
);
// reset vt processing, aka ansi, because ill-behaved externals can break it
#[cfg(windows)]
{
let _ = enable_vt_processing();
}
perf(
&format!("eval_source {}", &fname),
start_time,
file!(),
line!(),
column!(),
engine_state.get_config().use_ansi_coloring,
);
exit_code
}
fn evaluate_source(
engine_state: &mut EngineState,
stack: &mut Stack,
source: &[u8],
fname: &str,
input: PipelineData,
allow_return: bool,
) -> Result<Option<i32>, ShellError> {
let (block, delta) = {
let mut working_set = StateWorkingSet::new(engine_state);
let output = parse(
@ -224,104 +261,40 @@ pub fn eval_source(
}
if let Some(err) = working_set.parse_errors.first() {
-set_last_exit_code(stack, 1);
 report_error(&working_set, err);
-return false;
+return Ok(Some(1));
}
(output, working_set.render())
};
-if let Err(err) = engine_state.merge_delta(delta) {
-set_last_exit_code(stack, 1);
-report_error_new(engine_state, &err);
-return false;
-}
+engine_state.merge_delta(delta)?;
-let b = if allow_return {
+let pipeline = if allow_return {
eval_block_with_early_return::<WithoutDebug>(engine_state, stack, &block, input)
} else {
eval_block::<WithoutDebug>(engine_state, stack, &block, input)
}?;
let status = if let PipelineData::ByteStream(..) = pipeline {
pipeline.print(engine_state, stack, false, false)?
} else {
if let Some(hook) = engine_state.get_config().hooks.display_output.clone() {
let pipeline = eval_hook(
engine_state,
stack,
Some(pipeline),
vec![],
&hook,
"display_output",
)?;
pipeline.print(engine_state, stack, false, false)
} else {
pipeline.print(engine_state, stack, true, false)
}?
};
match b {
Ok(pipeline_data) => {
let config = engine_state.get_config();
let result;
if let PipelineData::ExternalStream {
stdout: stream,
stderr: stderr_stream,
exit_code,
..
} = pipeline_data
{
result = print_if_stream(stream, stderr_stream, false, exit_code);
} else if let Some(hook) = config.hooks.display_output.clone() {
match eval_hook(
engine_state,
stack,
Some(pipeline_data),
vec![],
&hook,
"display_output",
) {
Err(err) => {
result = Err(err);
}
Ok(val) => {
result = val.print(engine_state, stack, false, false);
}
}
} else {
result = pipeline_data.print(engine_state, stack, true, false);
}
match result {
Err(err) => {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &err);
return false;
}
Ok(exit_code) => {
set_last_exit_code(stack, exit_code);
}
}
// reset vt processing, aka ansi, because ill-behaved externals can break it
#[cfg(windows)]
{
let _ = enable_vt_processing();
}
}
Err(err) => {
set_last_exit_code(stack, 1);
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &err);
return false;
}
}
perf(
&format!("eval_source {}", &fname),
start_time,
file!(),
line!(),
column!(),
engine_state.get_config().use_ansi_coloring,
);
true
}
-fn set_last_exit_code(stack: &mut Stack, exit_code: i64) {
-stack.add_env_var(
-"LAST_EXIT_CODE".to_string(),
-Value::int(exit_code, Span::unknown()),
-);
+Ok(status.map(|status| status.code()))
}
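With `eval_source` now returning an `i32` exit code (0 on success) instead of a `bool`, callers branch on non-zero. A sketch of the caller's side with a mock standing in for the real function (the mock is purely illustrative):

```rust
/// Mock of the new contract: 0 means success, anything else is the
/// failing command's exit code. Stands in for the real eval_source.
fn eval_source_mock(source: &str) -> i32 {
    if source.contains("fail") { 1 } else { 0 }
}

fn main() {
    let exit_code = eval_source_mock("ls | length");
    if exit_code != 0 {
        // Mirrors what evaluate_file does with a non-zero result.
        std::process::exit(exit_code);
    }
}
```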
#[cfg(test)]


@ -0,0 +1 @@
mod nu_highlight;


@ -0,0 +1,7 @@
use nu_test_support::nu;
#[test]
fn nu_highlight_not_expr() {
let actual = nu!("'not false' | nu-highlight | ansi strip");
assert_eq!(actual.out, "not false");
}


@ -6,7 +6,10 @@ use nu_parser::parse;
use nu_protocol::{debugger::WithoutDebug, engine::StateWorkingSet, PipelineData};
use reedline::{Completer, Suggestion};
use rstest::{fixture, rstest};
use std::path::PathBuf;
use std::{
path::{PathBuf, MAIN_SEPARATOR},
sync::Arc,
};
use support::{
completions_helpers::{new_partial_engine, new_quote_engine},
file, folder, match_suggestions, new_engine,
@ -22,7 +25,7 @@ fn completer() -> NuCompleter {
assert!(support::merge_input(record.as_bytes(), &mut engine, &mut stack, dir).is_ok());
// Instantiate a new completer
NuCompleter::new(std::sync::Arc::new(engine), stack)
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[fixture]
@ -36,7 +39,7 @@ fn completer_strings() -> NuCompleter {
assert!(support::merge_input(record.as_bytes(), &mut engine, &mut stack, dir).is_ok());
// Instantiate a new completer
NuCompleter::new(std::sync::Arc::new(engine), stack)
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[fixture]
@ -56,7 +59,7 @@ fn extern_completer() -> NuCompleter {
assert!(support::merge_input(record.as_bytes(), &mut engine, &mut stack, dir).is_ok());
// Instantiate a new completer
NuCompleter::new(std::sync::Arc::new(engine), stack)
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[fixture]
@ -79,14 +82,14 @@ fn custom_completer() -> NuCompleter {
assert!(support::merge_input(record.as_bytes(), &mut engine, &mut stack, dir).is_ok());
// Instantiate a new completer
NuCompleter::new(std::sync::Arc::new(engine), stack)
NuCompleter::new(Arc::new(engine), Arc::new(stack))
}
#[test]
fn variables_dollar_sign_with_varialblecompletion() {
let (_, _, engine, stack) = new_engine();
let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "$ ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -138,7 +141,7 @@ fn dotnu_completions() {
let (_, _, engine, stack) = new_engine();
// Instantiate a new completer
let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test source completion
let completion_str = "source-env ".to_string();
@ -217,10 +220,10 @@ fn file_completions() {
let (dir, dir_str, engine, stack) = new_engine();
// Instantiate a new completer
let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for the current folder
let target_dir = format!("cp {dir_str}");
let target_dir = format!("cp {dir_str}{MAIN_SEPARATOR}");
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
@ -265,7 +268,7 @@ fn partial_completions() {
let (dir, _, engine, stack) = new_partial_engine();
// Instantiate a new completer
let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for a folder's name
let target_dir = format!("cd {}", file(dir.join("pa")));
@ -289,6 +292,8 @@ fn partial_completions() {
// Create the expected values
let expected_paths: Vec<String> = vec![
file(dir.join("partial_a").join("have_ext.exe")),
file(dir.join("partial_a").join("have_ext.txt")),
file(dir.join("partial_a").join("hello")),
file(dir.join("partial_a").join("hola")),
file(dir.join("partial_b").join("hello_b")),
@ -307,6 +312,8 @@ fn partial_completions() {
// Create the expected values
let expected_paths: Vec<String> = vec![
file(dir.join("partial_a").join("anotherfile")),
file(dir.join("partial_a").join("have_ext.exe")),
file(dir.join("partial_a").join("have_ext.txt")),
file(dir.join("partial_a").join("hello")),
file(dir.join("partial_a").join("hola")),
file(dir.join("partial_b").join("hello_b")),
@ -334,7 +341,54 @@ fn partial_completions() {
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![file(dir.join("final_partial").join("somefile"))];
let expected_paths: Vec<String> = vec![
file(
dir.join("partial_a")
.join("..")
.join("final_partial")
.join("somefile"),
),
file(
dir.join("partial_b")
.join("..")
.join("final_partial")
.join("somefile"),
),
file(
dir.join("partial_c")
.join("..")
.join("final_partial")
.join("somefile"),
),
];
// Match the results
match_suggestions(expected_paths, suggestions);
// Test completion for files under "partial_a" that begin with "have"
let file_str = file(dir.join("partial_a").join("have"));
let target_file = format!("rm {file_str}");
let suggestions = completer.complete(&target_file, target_file.len());
// Create the expected values
let expected_paths: Vec<String> = vec![
file(dir.join("partial_a").join("have_ext.exe")),
file(dir.join("partial_a").join("have_ext.txt")),
];
// Match the results
match_suggestions(expected_paths, suggestions);
// Test completion for files under "partial_a" that begin with "have_ext."
let file_str = file(dir.join("partial_a").join("have_ext."));
let file_dir = format!("rm {file_str}");
let suggestions = completer.complete(&file_dir, file_dir.len());
// Create the expected values
let expected_paths: Vec<String> = vec![
file(dir.join("partial_a").join("have_ext.exe")),
file(dir.join("partial_a").join("have_ext.txt")),
];
// Match the results
match_suggestions(expected_paths, suggestions);
@ -344,7 +398,7 @@ fn partial_completions() {
fn command_ls_with_filecompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "ls ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -372,13 +426,20 @@ fn command_ls_with_filecompletion() {
".hidden_folder/".to_string(),
];
match_suggestions(expected_paths, suggestions);
let target_dir = "ls custom_completion.";
let suggestions = completer.complete(target_dir, target_dir.len());
let expected_paths: Vec<String> = vec!["custom_completion.nu".to_string()];
match_suggestions(expected_paths, suggestions)
}
#[test]
fn command_open_with_filecompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "open ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -406,6 +467,13 @@ fn command_open_with_filecompletion() {
".hidden_folder/".to_string(),
];
match_suggestions(expected_paths, suggestions);
let target_dir = "open custom_completion.";
let suggestions = completer.complete(target_dir, target_dir.len());
let expected_paths: Vec<String> = vec!["custom_completion.nu".to_string()];
match_suggestions(expected_paths, suggestions)
}
@ -413,7 +481,7 @@ fn command_open_with_filecompletion() {
fn command_rm_with_globcompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "rm ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -448,7 +516,7 @@ fn command_rm_with_globcompletion() {
fn command_cp_with_globcompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "cp ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -483,7 +551,7 @@ fn command_cp_with_globcompletion() {
fn command_save_with_filecompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "save ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -518,7 +586,7 @@ fn command_save_with_filecompletion() {
fn command_touch_with_filecompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "touch ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -553,7 +621,7 @@ fn command_touch_with_filecompletion() {
fn command_watch_with_filecompletion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "watch ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -588,7 +656,7 @@ fn command_watch_with_filecompletion() {
fn file_completion_quoted() {
let (_, _, engine, stack) = new_quote_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "open ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -626,7 +694,7 @@ fn flag_completions() {
let (_, _, engine, stack) = new_engine();
// Instantiate a new completer
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for the 'ls' flags
let suggestions = completer.complete("ls -", 4);
@ -661,10 +729,10 @@ fn folder_with_directorycompletions() {
let (dir, dir_str, engine, stack) = new_engine();
// Instantiate a new completer
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for the current folder
-let target_dir = format!("cd {dir_str}");
+let target_dir = format!("cd {dir_str}{MAIN_SEPARATOR}");
let suggestions = completer.complete(&target_dir, target_dir.len());
// Create the expected values
@ -690,16 +758,18 @@ fn variables_completions() {
assert!(support::merge_input(record.as_bytes(), &mut engine, &mut stack, dir).is_ok());
// Instantiate a new completer
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Test completions for $nu
let suggestions = completer.complete("$nu.", 4);
-assert_eq!(15, suggestions.len());
+assert_eq!(18, suggestions.len());
let expected: Vec<String> = vec![
"cache-dir".into(),
"config-path".into(),
"current-exe".into(),
"data-dir".into(),
"default-config-dir".into(),
"env-path".into(),
"history-enabled".into(),
@ -713,6 +783,7 @@ fn variables_completions() {
"plugin-path".into(),
"startup-time".into(),
"temp-path".into(),
"vendor-autoload-dir".into(),
];
// Match results
@ -796,7 +867,7 @@ fn alias_of_command_and_flags() {
let alias = r#"alias ll = ls -l"#;
assert!(support::merge_input(alias.as_bytes(), &mut engine, &mut stack, dir).is_ok());
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let suggestions = completer.complete("ll t", 4);
#[cfg(windows)]
@ -815,7 +886,7 @@ fn alias_of_basic_command() {
let alias = r#"alias ll = ls "#;
assert!(support::merge_input(alias.as_bytes(), &mut engine, &mut stack, dir).is_ok());
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let suggestions = completer.complete("ll t", 4);
#[cfg(windows)]
@ -837,7 +908,7 @@ fn alias_of_another_alias() {
let alias = r#"alias lf = ll -f"#;
assert!(support::merge_input(alias.as_bytes(), &mut engine, &mut stack, dir).is_ok());
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let suggestions = completer.complete("lf t", 4);
#[cfg(windows)]
@ -871,7 +942,7 @@ fn run_external_completion(completer: &str, input: &str) -> Vec<Suggestion> {
assert!(engine_state.merge_env(&mut stack, &dir).is_ok());
// Instantiate a new completer
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine_state), stack);
+let mut completer = NuCompleter::new(Arc::new(engine_state), Arc::new(stack));
completer.complete(input, input.len())
}
@ -880,7 +951,7 @@ fn run_external_completion(completer: &str, input: &str) -> Vec<Suggestion> {
fn unknown_command_completion() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let target_dir = "thiscommanddoesnotexist ";
let suggestions = completer.complete(target_dir, target_dir.len());
@ -943,7 +1014,7 @@ fn flagcompletion_triggers_after_cursor_piped(mut completer: NuCompleter) {
fn filecompletions_triggers_after_cursor() {
let (_, _, engine, stack) = new_engine();
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
let suggestions = completer.complete("cp test_c", 3);
@ -1052,7 +1123,7 @@ fn alias_offset_bug_7648() {
let alias = r#"alias ea = ^$env.EDITOR /tmp/test.s"#;
assert!(support::merge_input(alias.as_bytes(), &mut engine, &mut stack, dir).is_ok());
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Issue #7648
// Nushell crashes when an alias name is shorter than the alias command
@ -1071,7 +1142,7 @@ fn alias_offset_bug_7754() {
let alias = r#"alias ll = ls -l"#;
assert!(support::merge_input(alias.as_bytes(), &mut engine, &mut stack, dir).is_ok());
-let mut completer = NuCompleter::new(std::sync::Arc::new(engine), stack);
+let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
// Issue #7754
// Nushell crashes when an alias name is shorter than the alias command

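Every hunk in this test file tracks the same `nu-cli` API change: `NuCompleter::new` now takes the `Stack` behind an `Arc`, matching the already-`Arc`ed `EngineState`, and the path fixtures stopped baking a trailing separator into `dir_str`. A minimal sketch of the new call shape, assuming the `new_engine()` fixture from the support module below and the `nu_cli::NuCompleter` export these tests use:

```rust
use std::sync::Arc;

use nu_cli::NuCompleter;

#[test]
fn completer_takes_arc_engine_and_stack() {
    // new_engine() is the test fixture defined in the support module.
    let (_dir, _dir_str, engine, stack) = new_engine();
    // Both halves of the evaluation context now travel behind an Arc.
    let mut completer = NuCompleter::new(Arc::new(engine), Arc::new(stack));
    let suggestions = completer.complete("ls ", 3);
    assert!(!suggestions.is_empty());
}
```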
View File

@ -3,14 +3,11 @@ use nu_parser::parse;
use nu_protocol::{
debugger::WithoutDebug,
engine::{EngineState, Stack, StateWorkingSet},
-eval_const::create_nu_constant,
-PipelineData, ShellError, Span, Value, NU_VARIABLE_ID,
+PipelineData, ShellError, Span, Value,
};
use nu_test_support::fs;
use reedline::Suggestion;
-use std::path::PathBuf;
-const SEP: char = std::path::MAIN_SEPARATOR;
+use std::path::{PathBuf, MAIN_SEPARATOR};
fn create_default_context() -> EngineState {
nu_command::add_shell_command_context(nu_cmd_lang::create_default_context())
@ -20,20 +17,17 @@ fn create_default_context() -> EngineState {
pub fn new_engine() -> (PathBuf, String, EngineState, Stack) {
// Target folder inside assets
let dir = fs::fixtures().join("completions");
-let mut dir_str = dir
+let dir_str = dir
.clone()
.into_os_string()
.into_string()
.unwrap_or_default();
-dir_str.push(SEP);
// Create a new engine with default context
let mut engine_state = create_default_context();
// Add $nu
-let nu_const =
-create_nu_constant(&engine_state, Span::test_data()).expect("Failed creating $nu");
-engine_state.set_variable_const_val(NU_VARIABLE_ID, nu_const);
+engine_state.generate_nu_constant();
// New stack
let mut stack = Stack::new();
@ -77,12 +71,11 @@ pub fn new_engine() -> (PathBuf, String, EngineState, Stack) {
pub fn new_quote_engine() -> (PathBuf, String, EngineState, Stack) {
// Target folder inside assets
let dir = fs::fixtures().join("quoted_completions");
-let mut dir_str = dir
+let dir_str = dir
.clone()
.into_os_string()
.into_string()
.unwrap_or_default();
-dir_str.push(SEP);
// Create a new engine with default context
let mut engine_state = create_default_context();
@ -113,12 +106,11 @@ pub fn new_quote_engine() -> (PathBuf, String, EngineState, Stack) {
pub fn new_partial_engine() -> (PathBuf, String, EngineState, Stack) {
// Target folder inside assets
let dir = fs::fixtures().join("partial_completions");
-let mut dir_str = dir
+let dir_str = dir
.clone()
.into_os_string()
.into_string()
.unwrap_or_default();
-dir_str.push(SEP);
// Create a new engine with default context
let mut engine_state = create_default_context();
@ -165,7 +157,7 @@ pub fn match_suggestions(expected: Vec<String>, suggestions: Vec<Suggestion>) {
// append the separator to the converted path
pub fn folder(path: PathBuf) -> String {
let mut converted_path = file(path);
-converted_path.push(SEP);
+converted_path.push(MAIN_SEPARATOR);
converted_path
}
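The support helpers lose their local `SEP` constant in favor of `std::path::MAIN_SEPARATOR`, and the engine fixtures replace the two-step `$nu` setup (`create_nu_constant` plus `set_variable_const_val`) with a single call. A sketch of the slimmed-down fixture, using only calls visible in these hunks:

```rust
use nu_protocol::engine::{EngineState, Stack};

// Sketch only: create_default_context() is the helper defined earlier in
// this file (nu-cmd-lang plus the shell command set).
fn engine_with_nu_constant() -> (EngineState, Stack) {
    let mut engine_state = create_default_context();
    // One call now builds the $nu record and registers it as a constant,
    // replacing create_nu_constant() + set_variable_const_val().
    engine_state.generate_nu_constant();
    (engine_state, Stack::new())
}
```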

View File

@ -0,0 +1,2 @@
mod commands;
mod completions;

View File

@ -5,17 +5,17 @@ edition = "2021"
license = "MIT"
name = "nu-cmd-base"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-base"
version = "0.93.0"
version = "0.95.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
-nu-engine = { path = "../nu-engine", version = "0.93.0" }
-nu-parser = { path = "../nu-parser", version = "0.93.0" }
-nu-path = { path = "../nu-path", version = "0.93.0" }
-nu-protocol = { path = "../nu-protocol", version = "0.93.0" }
+nu-engine = { path = "../nu-engine", version = "0.95.0" }
+nu-parser = { path = "../nu-parser", version = "0.95.0" }
+nu-path = { path = "../nu-path", version = "0.95.0" }
+nu-protocol = { path = "../nu-protocol", version = "0.95.0" }
indexmap = { workspace = true }
miette = { workspace = true }
[dev-dependencies]

View File

@ -194,7 +194,7 @@ pub fn eval_hook(
let Some(follow) = val.get("code") else {
return Err(ShellError::CantFindColumn {
col_name: "code".into(),
-span,
+span: Some(span),
src_span: span,
});
};
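The one-line change reflects a field type change on the error variant: `CantFindColumn::span` is now an `Option<Span>`. A hedged sketch of the shape implied by this call site:

```rust
use nu_protocol::{ShellError, Span};

// Sketch: `span` points at the offending value, `src_span` at the record
// it was read from; the variant's `span` field is now Option<Span>.
fn missing_code_column(span: Span) -> ShellError {
    ShellError::CantFindColumn {
        col_name: "code".into(),
        span: Some(span),
        src_span: span,
    }
}
```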

View File

@ -1,6 +1,6 @@
use nu_protocol::{
-engine::{EngineState, Stack, StateWorkingSet},
-report_error, Range, ShellError, Span, Value,
+engine::{EngineState, Stack},
+Range, ShellError, Span, Value,
};
use std::{ops::Bound, path::PathBuf};
@ -13,15 +13,14 @@ pub fn get_init_cwd() -> PathBuf {
}
pub fn get_guaranteed_cwd(engine_state: &EngineState, stack: &Stack) -> PathBuf {
-nu_engine::env::current_dir(engine_state, stack).unwrap_or_else(|e| {
-let working_set = StateWorkingSet::new(engine_state);
-report_error(&working_set, &e);
-crate::util::get_init_cwd()
-})
+engine_state
+.cwd(Some(stack))
+.unwrap_or(crate::util::get_init_cwd())
}
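The rewrite drops the `StateWorkingSet` plus `report_error` dance: the new `EngineState::cwd` accessor already returns a `Result`, so the helper can fall back silently. A sketch of the accessor's use, with its signature inferred from this call site only:

```rust
use std::path::PathBuf;

use nu_protocol::engine::{EngineState, Stack};

// Assumed from the call above:
//   fn cwd(&self, stack: Option<&Stack>) -> Result<PathBuf, ShellError>
// Passing Some(stack) lets a stack-local $env.PWD take precedence.
fn cwd_or(engine_state: &EngineState, stack: &Stack, fallback: PathBuf) -> PathBuf {
    engine_state.cwd(Some(stack)).unwrap_or(fallback)
}
```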
type MakeRangeError = fn(&str, Span) -> ShellError;
/// Returns an inclusive pair of boundaries for the given `range`.
pub fn process_range(range: &Range) -> Result<(isize, isize), MakeRangeError> {
match range {
Range::IntRange(range) => {

View File

@ -1,75 +0,0 @@
[package]
authors = ["The Nushell Project Developers"]
description = "Nushell's dataframe commands based on polars."
edition = "2021"
license = "MIT"
name = "nu-cmd-dataframe"
repository = "https://github.com/nushell/nushell/tree/main/crates/nu-cmd-dataframe"
version = "0.93.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
bench = false
[dependencies]
nu-engine = { path = "../nu-engine", version = "0.93.0" }
nu-parser = { path = "../nu-parser", version = "0.93.0" }
nu-protocol = { path = "../nu-protocol", version = "0.93.0" }
# Potential dependencies for extras
chrono = { workspace = true, features = ["std", "unstable-locales"], default-features = false }
chrono-tz = { workspace = true }
fancy-regex = { workspace = true }
indexmap = { workspace = true }
num = { version = "0.4", optional = true }
serde = { workspace = true, features = ["derive"] }
# keep sqlparser pinned to a version compatible with polars until we can update polars
sqlparser = { version = "0.45", optional = true }
polars-io = { version = "0.39", features = ["avro"], optional = true }
polars-arrow = { version = "0.39", optional = true }
polars-ops = { version = "0.39", optional = true }
polars-plan = { version = "0.39", features = ["regex"], optional = true }
polars-utils = { version = "0.39", optional = true }
[dependencies.polars]
features = [
"arg_where",
"checked_arithmetic",
"concat_str",
"cross_join",
"csv",
"cum_agg",
"dtype-categorical",
"dtype-datetime",
"dtype-struct",
"dtype-i8",
"dtype-i16",
"dtype-u8",
"dtype-u16",
"dynamic_group_by",
"ipc",
"is_in",
"json",
"lazy",
"object",
"parquet",
"random",
"rolling_window",
"rows",
"serde",
"serde-lazy",
"strings",
"temporal",
"to_dummies",
]
default-features = false
optional = true
version = "0.39"
[features]
dataframe = ["num", "polars", "polars-io", "polars-arrow", "polars-ops", "polars-plan", "polars-utils", "sqlparser"]
default = []
[dev-dependencies]
nu-cmd-lang = { path = "../nu-cmd-lang", version = "0.93.0" }

View File

@ -1,12 +0,0 @@
# Dataframe
This dataframe directory holds all of the definitions of the dataframe data structures and commands.
There are three sections of commands:
* [eager](./eager)
* [series](./series)
* [values](./values)
For more details see the
[Nushell book section on dataframes](https://www.nushell.sh/book/dataframes.html)

View File

@ -1,134 +0,0 @@
use crate::dataframe::values::{Axis, Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct AppendDF;
impl Command for AppendDF {
fn name(&self) -> &str {
"dfr append"
}
fn usage(&self) -> &str {
"Appends a new dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("other", SyntaxShape::Any, "dataframe to be appended")
.switch("col", "appends in col orientation", Some('c'))
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Appends a dataframe as new columns",
example: r#"let a = ([[a b]; [1 2] [3 4]] | dfr into-df);
$a | dfr append $a"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
Column::new(
"a_x".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b_x".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Appends a dataframe merging at the end of columns",
example: r#"let a = ([[a b]; [1 2] [3 4]] | dfr into-df);
$a | dfr append $a --col"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![
Value::test_int(1),
Value::test_int(3),
Value::test_int(1),
Value::test_int(3),
],
),
Column::new(
"b".to_string(),
vec![
Value::test_int(2),
Value::test_int(4),
Value::test_int(2),
Value::test_int(4),
],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let other: Value = call.req(engine_state, stack, 0)?;
let axis = if call.has_flag(engine_state, stack, "col")? {
Axis::Column
} else {
Axis::Row
};
let df_other = NuDataFrame::try_from_value(other)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
df.append_df(&df_other, axis, call.head)
.map(|df| PipelineData::Value(NuDataFrame::into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(AppendDF {})])
}
}

View File

@ -1,195 +0,0 @@
use crate::dataframe::values::{str_to_dtype, NuDataFrame, NuExpression, NuLazyFrame};
use nu_engine::command_prelude::*;
use polars::prelude::*;
#[derive(Clone)]
pub struct CastDF;
impl Command for CastDF {
fn name(&self) -> &str {
"dfr cast"
}
fn usage(&self) -> &str {
"Cast a column to a different dtype."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.required(
"dtype",
SyntaxShape::String,
"The dtype to cast the column to",
)
.optional(
"column",
SyntaxShape::String,
"The column to cast. Required when used with a dataframe.",
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Cast a column in a dataframe to a different dtype",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr cast u8 a | dfr schema",
result: Some(Value::record(
record! {
"a" => Value::string("u8", Span::test_data()),
"b" => Value::string("i64", Span::test_data()),
},
Span::test_data(),
)),
},
Example {
description: "Cast a column in a lazy dataframe to a different dtype",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr into-lazy | dfr cast u8 a | dfr schema",
result: Some(Value::record(
record! {
"a" => Value::string("u8", Span::test_data()),
"b" => Value::string("i64", Span::test_data()),
},
Span::test_data(),
)),
},
Example {
description: "Cast a column in a expression to a different dtype",
example: r#"[[a b]; [1 2] [1 4]] | dfr into-df | dfr group-by a | dfr agg [ (dfr col b | dfr cast u8 | dfr min | dfr as "b_min") ] | dfr schema"#,
result: None
}
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuLazyFrame::can_downcast(&value) {
let (dtype, column_nm) = df_args(engine_state, stack, call)?;
let df = NuLazyFrame::try_from_value(value)?;
command_lazy(call, column_nm, dtype, df)
} else if NuDataFrame::can_downcast(&value) {
let (dtype, column_nm) = df_args(engine_state, stack, call)?;
let df = NuDataFrame::try_from_value(value)?;
command_eager(call, column_nm, dtype, df)
} else {
let dtype: String = call.req(engine_state, stack, 0)?;
let dtype = str_to_dtype(&dtype, call.head)?;
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().cast(dtype).into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
}
fn df_args(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<(DataType, String), ShellError> {
let dtype = dtype_arg(engine_state, stack, call)?;
let column_nm: String =
call.opt(engine_state, stack, 1)?
.ok_or(ShellError::MissingParameter {
param_name: "column_name".into(),
span: call.head,
})?;
Ok((dtype, column_nm))
}
fn dtype_arg(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<DataType, ShellError> {
let dtype: String = call.req(engine_state, stack, 0)?;
str_to_dtype(&dtype, call.head)
}
fn command_lazy(
call: &Call,
column_nm: String,
dtype: DataType,
lazy: NuLazyFrame,
) -> Result<PipelineData, ShellError> {
let column = col(&column_nm).cast(dtype);
let lazy = lazy.into_polars().with_columns(&[column]);
let lazy = NuLazyFrame::new(false, lazy);
Ok(PipelineData::Value(
NuLazyFrame::into_value(lazy, call.head)?,
None,
))
}
fn command_eager(
call: &Call,
column_nm: String,
dtype: DataType,
nu_df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let mut df = nu_df.df;
let column = df
.column(&column_nm)
.map_err(|e| ShellError::GenericError {
error: format!("{e}"),
msg: "".into(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
let casted = column.cast(&dtype).map_err(|e| ShellError::GenericError {
error: format!("{e}"),
msg: "".into(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
let _ = df
.with_column(casted)
.map_err(|e| ShellError::GenericError {
error: format!("{e}"),
msg: "".into(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
let df = NuDataFrame::new(false, df);
Ok(PipelineData::Value(df.into_value(call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(CastDF {})])
}
}

View File

@ -1,73 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ColumnsDF;
impl Command for ColumnsDF {
fn name(&self) -> &str {
"dfr columns"
}
fn usage(&self) -> &str {
"Show dataframe columns."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Dataframe columns",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr columns",
result: Some(Value::list(
vec![Value::test_string("a"), Value::test_string("b")],
Span::test_data(),
)),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let names: Vec<Value> = df
.as_ref()
.get_column_names()
.iter()
.map(|v| Value::string(*v, call.head))
.collect();
let names = Value::list(names, call.head);
Ok(PipelineData::Value(names, None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(ColumnsDF {})])
}
}

View File

@ -1,115 +0,0 @@
use crate::dataframe::values::{utils::convert_columns, Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct DropDF;
impl Command for DropDF {
fn name(&self) -> &str {
"dfr drop"
}
fn usage(&self) -> &str {
"Creates a new dataframe by dropping the selected columns."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.rest("rest", SyntaxShape::Any, "column names to be dropped")
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "drop column a",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr drop a",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let columns: Vec<Value> = call.rest(engine_state, stack, 0)?;
let (col_string, col_span) = convert_columns(columns, call.head)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let new_df = col_string
.first()
.ok_or_else(|| ShellError::GenericError {
error: "Empty names list".into(),
msg: "No column names were found".into(),
span: Some(col_span),
help: None,
inner: vec![],
})
.and_then(|col| {
df.as_ref()
.drop(&col.item)
.map_err(|e| ShellError::GenericError {
error: "Error dropping column".into(),
msg: e.to_string(),
span: Some(col.span),
help: None,
inner: vec![],
})
})?;
// If there are more columns in the drop selection list, these
// are also dropped from the resulting dataframe
col_string
.iter()
.skip(1)
.try_fold(new_df, |new_df, col| {
new_df
.drop(&col.item)
.map_err(|e| ShellError::GenericError {
error: "Error dropping column".into(),
msg: e.to_string(),
span: Some(col.span),
help: None,
inner: vec![],
})
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(DropDF {})])
}
}

View File

@ -1,119 +0,0 @@
use crate::dataframe::values::{utils::convert_columns_string, Column, NuDataFrame};
use nu_engine::command_prelude::*;
use polars::prelude::UniqueKeepStrategy;
#[derive(Clone)]
pub struct DropDuplicates;
impl Command for DropDuplicates {
fn name(&self) -> &str {
"dfr drop-duplicates"
}
fn usage(&self) -> &str {
"Drops duplicate values in dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.optional(
"subset",
SyntaxShape::Table(vec![]),
"subset of columns to drop duplicates",
)
.switch("maintain", "maintain order", Some('m'))
.switch(
"last",
"keeps last duplicate value (by default keeps first)",
Some('l'),
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "drop duplicates",
example: "[[a b]; [1 2] [3 4] [1 2]] | dfr into-df | dfr drop-duplicates",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(3), Value::test_int(1)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(4), Value::test_int(2)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let columns: Option<Vec<Value>> = call.opt(engine_state, stack, 0)?;
let (subset, col_span) = match columns {
Some(cols) => {
let (agg_string, col_span) = convert_columns_string(cols, call.head)?;
(Some(agg_string), col_span)
}
None => (None, call.head),
};
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let subset_slice = subset.as_ref().map(|cols| &cols[..]);
let keep_strategy = if call.has_flag(engine_state, stack, "last")? {
UniqueKeepStrategy::Last
} else {
UniqueKeepStrategy::First
};
df.as_ref()
.unique(subset_slice, keep_strategy, None)
.map_err(|e| ShellError::GenericError {
error: "Error dropping duplicates".into(),
msg: e.to_string(),
span: Some(col_span),
help: None,
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(DropDuplicates {})])
}
}

View File

@ -1,137 +0,0 @@
use crate::dataframe::values::{utils::convert_columns_string, Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct DropNulls;
impl Command for DropNulls {
fn name(&self) -> &str {
"dfr drop-nulls"
}
fn usage(&self) -> &str {
"Drops null values in dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.optional(
"subset",
SyntaxShape::Table(vec![]),
"subset of columns to drop nulls",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "drop null values in dataframe",
example: r#"let df = ([[a b]; [1 2] [3 0] [1 2]] | dfr into-df);
let res = ($df.b / $df.b);
let a = ($df | dfr with-column $res --name res);
$a | dfr drop-nulls"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(1)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(2)],
),
Column::new(
"res".to_string(),
vec![Value::test_int(1), Value::test_int(1)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "drop null values in dataframe",
example: r#"let s = ([1 2 0 0 3 4] | dfr into-df);
($s / $s) | dfr drop-nulls"#,
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"div_0_0".to_string(),
vec![
Value::test_int(1),
Value::test_int(1),
Value::test_int(1),
Value::test_int(1),
],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let columns: Option<Vec<Value>> = call.opt(engine_state, stack, 0)?;
let (subset, col_span) = match columns {
Some(cols) => {
let (agg_string, col_span) = convert_columns_string(cols, call.head)?;
(Some(agg_string), col_span)
}
None => (None, call.head),
};
let subset_slice = subset.as_ref().map(|cols| &cols[..]);
df.as_ref()
.drop_nulls(subset_slice)
.map_err(|e| ShellError::GenericError {
error: "Error dropping nulls".into(),
msg: e.to_string(),
span: Some(col_span),
help: None,
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::super::WithColumn;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(DropNulls {}), Box::new(WithColumn {})])
}
}

View File

@ -1,104 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct DataTypes;
impl Command for DataTypes {
fn name(&self) -> &str {
"dfr dtypes"
}
fn usage(&self) -> &str {
"Show dataframe data types."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Dataframe dtypes",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr dtypes",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"column".to_string(),
vec![Value::test_string("a"), Value::test_string("b")],
),
Column::new(
"dtype".to_string(),
vec![Value::test_string("i64"), Value::test_string("i64")],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let mut dtypes: Vec<Value> = Vec::new();
let names: Vec<Value> = df
.as_ref()
.get_column_names()
.iter()
.map(|v| {
let dtype = df
.as_ref()
.column(v)
.expect("using name from list of names from dataframe")
.dtype();
let dtype_str = dtype.to_string();
dtypes.push(Value::string(dtype_str, call.head));
Value::string(*v, call.head)
})
.collect();
let names_col = Column::new("column".to_string(), names);
let dtypes_col = Column::new("dtype".to_string(), dtypes);
NuDataFrame::try_from_columns(vec![names_col, dtypes_col], None)
.map(|df| PipelineData::Value(df.into_value(call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(DataTypes {})])
}
}

View File

@ -1,107 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::{prelude::*, series::Series};
#[derive(Clone)]
pub struct Dummies;
impl Command for Dummies {
fn name(&self) -> &str {
"dfr dummies"
}
fn usage(&self) -> &str {
"Creates a new dataframe with dummy variables."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.switch("drop-first", "Drop first row", Some('d'))
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Create new dataframe with dummy variables from a dataframe",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr dummies",
result: Some(
NuDataFrame::try_from_series(
vec![
Series::new("a_1", &[1_u8, 0]),
Series::new("a_3", &[0_u8, 1]),
Series::new("b_2", &[1_u8, 0]),
Series::new("b_4", &[0_u8, 1]),
],
Span::test_data(),
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Create new dataframe with dummy variables from a series",
example: "[1 2 2 3 3] | dfr into-df | dfr dummies",
result: Some(
NuDataFrame::try_from_series(
vec![
Series::new("0_1", &[1_u8, 0, 0, 0, 0]),
Series::new("0_2", &[0_u8, 1, 1, 0, 0]),
Series::new("0_3", &[0_u8, 0, 0, 1, 1]),
],
Span::test_data(),
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let drop_first: bool = call.has_flag(engine_state, stack, "drop-first")?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
df.as_ref()
.to_dummies(None, drop_first)
.map_err(|e| ShellError::GenericError {
error: "Error calculating dummies".into(),
msg: e.to_string(),
span: Some(call.head),
help: Some("The only allowed column types for dummies are String or Int".into()),
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(Dummies {})])
}
}

View File

@ -1,155 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuLazyFrame};
use nu_engine::command_prelude::*;
use polars::prelude::LazyFrame;
#[derive(Clone)]
pub struct FilterWith;
impl Command for FilterWith {
fn name(&self) -> &str {
"dfr filter-with"
}
fn usage(&self) -> &str {
"Filters dataframe using a mask or expression as reference."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"mask or expression",
SyntaxShape::Any,
"boolean mask used to filter data",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe or lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Filter dataframe using a bool mask",
example: r#"let mask = ([true false] | dfr into-df);
[[a b]; [1 2] [3 4]] | dfr into-df | dfr filter-with $mask"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(1)]),
Column::new("b".to_string(), vec![Value::test_int(2)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Filter dataframe using an expression",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr filter-with ((dfr col a) > 1)",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(3)]),
Column::new("b".to_string(), vec![Value::test_int(4)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuLazyFrame::can_downcast(&value) {
let df = NuLazyFrame::try_from_value(value)?;
command_lazy(engine_state, stack, call, df)
} else {
let df = NuDataFrame::try_from_value(value)?;
command_eager(engine_state, stack, call, df)
}
}
}
fn command_eager(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let mask_value: Value = call.req(engine_state, stack, 0)?;
let mask_span = mask_value.span();
if NuExpression::can_downcast(&mask_value) {
let expression = NuExpression::try_from_value(mask_value)?;
let lazy = NuLazyFrame::new(true, df.lazy());
let lazy = lazy.apply_with_expr(expression, LazyFrame::filter);
Ok(PipelineData::Value(
NuLazyFrame::into_value(lazy, call.head)?,
None,
))
} else {
let mask = NuDataFrame::try_from_value(mask_value)?.as_series(mask_span)?;
let mask = mask.bool().map_err(|e| ShellError::GenericError {
error: "Error casting to bool".into(),
msg: e.to_string(),
span: Some(mask_span),
help: Some("Perhaps you want to use a series with booleans as mask".into()),
inner: vec![],
})?;
df.as_ref()
.filter(mask)
.map_err(|e| ShellError::GenericError {
error: "Error filtering dataframe".into(),
msg: e.to_string(),
span: Some(call.head),
help: Some("The only allowed column types for dummies are String or Int".into()),
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
}
fn command_lazy(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
lazy: NuLazyFrame,
) -> Result<PipelineData, ShellError> {
let expr: Value = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_value(expr)?;
let lazy = lazy.apply_with_expr(expr, LazyFrame::filter);
Ok(PipelineData::Value(
NuLazyFrame::into_value(lazy, call.head)?,
None,
))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::expressions::ExprCol;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(FilterWith {}), Box::new(ExprCol {})])
}
}

View File

@ -1,144 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct FirstDF;
impl Command for FirstDF {
fn name(&self) -> &str {
"dfr first"
}
fn usage(&self) -> &str {
"Show only the first number of rows or create a first expression"
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.optional(
"rows",
SyntaxShape::Int,
"starting from the front, the number of rows to return",
)
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Return the first row of a dataframe",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr first",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(1)]),
Column::new("b".to_string(), vec![Value::test_int(2)]),
],
None,
)
.expect("should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Return the first two rows of a dataframe",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr first 2",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Creates a first expression from a column",
example: "dfr col a | dfr first",
result: None,
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
let df = NuDataFrame::try_from_value(value)?;
command(engine_state, stack, call, df)
} else {
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().first().into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let rows: Option<usize> = call.opt(engine_state, stack, 0)?;
let rows = rows.unwrap_or(1);
let res = df.as_ref().head(Some(rows));
Ok(PipelineData::Value(
NuDataFrame::dataframe_into_value(res, call.head),
None,
))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::{build_test_engine_state, test_dataframe_example};
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples_dataframe() {
let mut engine_state = build_test_engine_state(vec![Box::new(FirstDF {})]);
test_dataframe_example(&mut engine_state, &FirstDF.examples()[0]);
test_dataframe_example(&mut engine_state, &FirstDF.examples()[1]);
}
#[test]
fn test_examples_expression() {
let mut engine_state = build_test_engine_state(vec![
Box::new(FirstDF {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
]);
test_dataframe_example(&mut engine_state, &FirstDF.examples()[2]);
}
}

View File

@ -1,87 +0,0 @@
use crate::dataframe::values::{utils::convert_columns_string, Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct GetDF;
impl Command for GetDF {
fn name(&self) -> &str {
"dfr get"
}
fn usage(&self) -> &str {
"Creates dataframe with the selected columns."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.rest("rest", SyntaxShape::Any, "column names to sort dataframe")
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Returns the selected column",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr get a",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let columns: Vec<Value> = call.rest(engine_state, stack, 0)?;
let (col_string, col_span) = convert_columns_string(columns, call.head)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
df.as_ref()
.select(col_string)
.map_err(|e| ShellError::GenericError {
error: "Error selecting columns".into(),
msg: e.to_string(),
span: Some(col_span),
help: None,
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(GetDF {})])
}
}

View File

@ -1,118 +0,0 @@
use crate::dataframe::values::{utils::DEFAULT_ROWS, Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct LastDF;
impl Command for LastDF {
fn name(&self) -> &str {
"dfr last"
}
fn usage(&self) -> &str {
"Creates new dataframe with tail rows or creates a last expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.optional("rows", SyntaxShape::Int, "Number of rows for tail")
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Create new dataframe with last rows",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr last 1",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(3)]),
Column::new("b".to_string(), vec![Value::test_int(4)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Creates a last expression from a column",
example: "dfr col a | dfr last",
result: None,
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
let df = NuDataFrame::try_from_value(value)?;
command(engine_state, stack, call, df)
} else {
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().last().into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let rows: Option<usize> = call.opt(engine_state, stack, 0)?;
let rows = rows.unwrap_or(DEFAULT_ROWS);
let res = df.as_ref().tail(Some(rows));
Ok(PipelineData::Value(
NuDataFrame::dataframe_into_value(res, call.head),
None,
))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::{build_test_engine_state, test_dataframe_example};
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples_dataframe() {
let mut engine_state = build_test_engine_state(vec![Box::new(LastDF {})]);
test_dataframe_example(&mut engine_state, &LastDF.examples()[0]);
}
#[test]
fn test_examples_expression() {
let mut engine_state = build_test_engine_state(vec![
Box::new(LastDF {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
]);
test_dataframe_example(&mut engine_state, &LastDF.examples()[1]);
}
}

View File

@ -1,68 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ListDF;
impl Command for ListDF {
fn name(&self) -> &str {
"dfr ls"
}
fn usage(&self) -> &str {
"Lists stored dataframes."
}
fn signature(&self) -> Signature {
Signature::build(self.name()).category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Creates a new dataframe and shows it in the dataframe list",
example: r#"let test = ([[a b];[1 2] [3 4]] | dfr into-df);
ls"#,
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let mut vals: Vec<(String, Value)> = vec![];
for overlay_frame in engine_state.active_overlays(&[]) {
for var in &overlay_frame.vars {
if let Ok(value) = stack.get_var(*var.1, call.head) {
let name = String::from_utf8_lossy(var.0).to_string();
vals.push((name, value));
}
}
}
let vals = vals
.into_iter()
.filter_map(|(name, value)| {
NuDataFrame::try_from_value(value).ok().map(|df| (name, df))
})
.map(|(name, df)| {
Value::record(
record! {
"name" => Value::string(name, call.head),
"columns" => Value::int(df.as_ref().width() as i64, call.head),
"rows" => Value::int(df.as_ref().height() as i64, call.head),
},
call.head,
)
})
.collect::<Vec<Value>>();
let list = Value::list(vals, call.head);
Ok(list.into_pipeline_data())
}
}

View File

@ -1,248 +0,0 @@
use crate::dataframe::values::{utils::convert_columns_string, Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct MeltDF;
impl Command for MeltDF {
fn name(&self) -> &str {
"dfr melt"
}
fn usage(&self) -> &str {
"Unpivot a DataFrame from wide to long format."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required_named(
"columns",
SyntaxShape::Table(vec![]),
"column names for melting",
Some('c'),
)
.required_named(
"values",
SyntaxShape::Table(vec![]),
"column names used as value columns",
Some('v'),
)
.named(
"variable-name",
SyntaxShape::String,
"optional name for variable column",
Some('r'),
)
.named(
"value-name",
SyntaxShape::String,
"optional name for value column",
Some('l'),
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "melt dataframe",
example:
"[[a b c d]; [x 1 4 a] [y 2 5 b] [z 3 6 c]] | dfr into-df | dfr melt -c [b c] -v [a d]",
result: Some(
NuDataFrame::try_from_columns(vec![
Column::new(
"b".to_string(),
vec![
Value::test_int(1),
Value::test_int(2),
Value::test_int(3),
Value::test_int(1),
Value::test_int(2),
Value::test_int(3),
],
),
Column::new(
"c".to_string(),
vec![
Value::test_int(4),
Value::test_int(5),
Value::test_int(6),
Value::test_int(4),
Value::test_int(5),
Value::test_int(6),
],
),
Column::new(
"variable".to_string(),
vec![
Value::test_string("a"),
Value::test_string("a"),
Value::test_string("a"),
Value::test_string("d"),
Value::test_string("d"),
Value::test_string("d"),
],
),
Column::new(
"value".to_string(),
vec![
Value::test_string("x"),
Value::test_string("y"),
Value::test_string("z"),
Value::test_string("a"),
Value::test_string("b"),
Value::test_string("c"),
],
),
], None)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let id_col: Vec<Value> = call
.get_flag(engine_state, stack, "columns")?
.expect("required value");
let val_col: Vec<Value> = call
.get_flag(engine_state, stack, "values")?
.expect("required value");
let value_name: Option<Spanned<String>> = call.get_flag(engine_state, stack, "value-name")?;
let variable_name: Option<Spanned<String>> =
call.get_flag(engine_state, stack, "variable-name")?;
let (id_col_string, id_col_span) = convert_columns_string(id_col, call.head)?;
let (val_col_string, val_col_span) = convert_columns_string(val_col, call.head)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
check_column_datatypes(df.as_ref(), &id_col_string, id_col_span)?;
check_column_datatypes(df.as_ref(), &val_col_string, val_col_span)?;
let mut res = df
.as_ref()
.melt(&id_col_string, &val_col_string)
.map_err(|e| ShellError::GenericError {
error: "Error calculating melt".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
if let Some(name) = &variable_name {
res.rename("variable", &name.item)
.map_err(|e| ShellError::GenericError {
error: "Error renaming column".into(),
msg: e.to_string(),
span: Some(name.span),
help: None,
inner: vec![],
})?;
}
if let Some(name) = &value_name {
res.rename("value", &name.item)
.map_err(|e| ShellError::GenericError {
error: "Error renaming column".into(),
msg: e.to_string(),
span: Some(name.span),
help: None,
inner: vec![],
})?;
}
Ok(PipelineData::Value(
NuDataFrame::dataframe_into_value(res, call.head),
None,
))
}
fn check_column_datatypes<T: AsRef<str>>(
df: &polars::prelude::DataFrame,
cols: &[T],
col_span: Span,
) -> Result<(), ShellError> {
if cols.is_empty() {
return Err(ShellError::GenericError {
error: "Merge error".into(),
msg: "empty column list".into(),
span: Some(col_span),
help: None,
inner: vec![],
});
}
// Checking if they are same type
if cols.len() > 1 {
for w in cols.windows(2) {
let l_series = df
.column(w[0].as_ref())
.map_err(|e| ShellError::GenericError {
error: "Error selecting columns".into(),
msg: e.to_string(),
span: Some(col_span),
help: None,
inner: vec![],
})?;
let r_series = df
.column(w[1].as_ref())
.map_err(|e| ShellError::GenericError {
error: "Error selecting columns".into(),
msg: e.to_string(),
span: Some(col_span),
help: None,
inner: vec![],
})?;
if l_series.dtype() != r_series.dtype() {
return Err(ShellError::GenericError {
error: "Merge error".into(),
msg: "found different column types in list".into(),
span: Some(col_span),
help: Some(format!(
"datatypes {} and {} are incompatible",
l_series.dtype(),
r_series.dtype()
)),
inner: vec![],
});
}
}
}
Ok(())
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(MeltDF {})])
}
}

View File

@ -1,114 +0,0 @@
mod append;
mod cast;
mod columns;
mod drop;
mod drop_duplicates;
mod drop_nulls;
mod dtypes;
mod dummies;
mod filter_with;
mod first;
mod get;
mod last;
mod list;
mod melt;
mod open;
mod query_df;
mod rename;
mod sample;
mod schema;
mod shape;
mod slice;
mod sql_context;
mod sql_expr;
mod summary;
mod take;
mod to_arrow;
mod to_avro;
mod to_csv;
mod to_df;
mod to_json_lines;
mod to_nu;
mod to_parquet;
mod with_column;
use nu_protocol::engine::StateWorkingSet;
pub use self::open::OpenDataFrame;
pub use append::AppendDF;
pub use cast::CastDF;
pub use columns::ColumnsDF;
pub use drop::DropDF;
pub use drop_duplicates::DropDuplicates;
pub use drop_nulls::DropNulls;
pub use dtypes::DataTypes;
pub use dummies::Dummies;
pub use filter_with::FilterWith;
pub use first::FirstDF;
pub use get::GetDF;
pub use last::LastDF;
pub use list::ListDF;
pub use melt::MeltDF;
pub use query_df::QueryDf;
pub use rename::RenameDF;
pub use sample::SampleDF;
pub use schema::SchemaDF;
pub use shape::ShapeDF;
pub use slice::SliceDF;
pub use sql_context::SQLContext;
pub use summary::Summary;
pub use take::TakeDF;
pub use to_arrow::ToArrow;
pub use to_avro::ToAvro;
pub use to_csv::ToCSV;
pub use to_df::ToDataFrame;
pub use to_json_lines::ToJsonLines;
pub use to_nu::ToNu;
pub use to_parquet::ToParquet;
pub use with_column::WithColumn;
pub fn add_eager_decls(working_set: &mut StateWorkingSet) {
macro_rules! bind_command {
( $command:expr ) => {
working_set.add_decl(Box::new($command));
};
( $( $command:expr ),* ) => {
$( working_set.add_decl(Box::new($command)); )*
};
}
// Dataframe commands
bind_command!(
AppendDF,
CastDF,
ColumnsDF,
DataTypes,
Summary,
DropDF,
DropDuplicates,
DropNulls,
Dummies,
FilterWith,
FirstDF,
GetDF,
LastDF,
ListDF,
MeltDF,
OpenDataFrame,
QueryDf,
RenameDF,
SampleDF,
SchemaDF,
ShapeDF,
SliceDF,
TakeDF,
ToArrow,
ToAvro,
ToCSV,
ToDataFrame,
ToNu,
ToParquet,
ToJsonLines,
WithColumn
);
}
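For reference, `add_eager_decls` was driven through the usual `StateWorkingSet` delta cycle. A minimal sketch of that registration flow, assuming the standard nu-protocol API (the bare `EngineState::new()` here is a stand-in for whatever base context the caller starts from):

```rust
use nu_protocol::engine::{EngineState, StateWorkingSet};

// Sketch: register the eager dataframe decls into an engine state.
fn register_eager(mut engine_state: EngineState) -> EngineState {
    let mut working_set = StateWorkingSet::new(&engine_state);
    add_eager_decls(&mut working_set);
    // Render the accumulated declarations and merge them back in.
    let delta = working_set.render();
    engine_state
        .merge_delta(delta)
        .expect("merging dataframe decls should not fail");
    engine_state
}
```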

View File

@ -1,518 +0,0 @@
use crate::dataframe::values::{NuDataFrame, NuLazyFrame, NuSchema};
use nu_engine::command_prelude::*;
use polars::prelude::{
CsvEncoding, CsvReader, IpcReader, JsonFormat, JsonReader, LazyCsvReader, LazyFileListReader,
LazyFrame, ParallelStrategy, ParquetReader, ScanArgsIpc, ScanArgsParquet, SerReader,
};
use polars_io::{avro::AvroReader, HiveOptions};
use std::{fs::File, io::BufReader, path::PathBuf};
#[derive(Clone)]
pub struct OpenDataFrame;
impl Command for OpenDataFrame {
fn name(&self) -> &str {
"dfr open"
}
fn usage(&self) -> &str {
"Opens CSV, JSON, JSON lines, arrow, avro, or parquet file to create dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"file",
SyntaxShape::Filepath,
"file path to load values from",
)
.switch("lazy", "creates a lazy dataframe", Some('l'))
.named(
"type",
SyntaxShape::String,
"File type: csv, tsv, json, parquet, arrow, avro. If omitted, derive from file extension",
Some('t'),
)
.named(
"delimiter",
SyntaxShape::String,
"file delimiter character. CSV file",
Some('d'),
)
.switch(
"no-header",
"Indicates if file doesn't have header. CSV file",
None,
)
.named(
"infer-schema",
SyntaxShape::Number,
"Number of rows to infer the schema of the file. CSV file",
None,
)
.named(
"skip-rows",
SyntaxShape::Number,
"Number of rows to skip from file. CSV file",
None,
)
.named(
"columns",
SyntaxShape::List(Box::new(SyntaxShape::String)),
"Columns to be selected from csv file. CSV and Parquet file",
None,
)
.named(
"schema",
SyntaxShape::Record(vec![]),
r#"Polars Schema in format [{name: str}]. CSV, JSON, and JSONL files"#,
Some('s')
)
.input_output_type(Type::Any, Type::Custom("dataframe".into()))
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Takes a file name and creates a dataframe",
example: "dfr open test.csv",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<PipelineData, ShellError> {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let type_option: Option<Spanned<String>> = call.get_flag(engine_state, stack, "type")?;
let type_id = match &type_option {
Some(ref t) => Some((t.item.to_owned(), "Invalid type", t.span)),
None => file.item.extension().map(|e| {
(
e.to_string_lossy().into_owned(),
"Invalid extension",
file.span,
)
}),
};
match type_id {
Some((e, msg, blamed)) => match e.as_str() {
"csv" | "tsv" => from_csv(engine_state, stack, call),
"parquet" | "parq" => from_parquet(engine_state, stack, call),
"ipc" | "arrow" => from_ipc(engine_state, stack, call),
"json" => from_json(engine_state, stack, call),
"jsonl" => from_jsonl(engine_state, stack, call),
"avro" => from_avro(engine_state, stack, call),
_ => Err(ShellError::FileNotFoundCustom {
msg: format!(
"{msg}. Supported values: csv, tsv, parquet, ipc, arrow, json, jsonl, avro"
),
span: blamed,
}),
},
None => Err(ShellError::FileNotFoundCustom {
msg: "File without extension".into(),
span: file.span,
}),
}
.map(|value| PipelineData::Value(value, None))
}
fn from_parquet(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
if call.has_flag(engine_state, stack, "lazy")? {
let file: String = call.req(engine_state, stack, 0)?;
let args = ScanArgsParquet {
n_rows: None,
cache: true,
parallel: ParallelStrategy::Auto,
rechunk: false,
row_index: None,
low_memory: false,
cloud_options: None,
use_statistics: false,
hive_options: HiveOptions::default(),
};
let df: NuLazyFrame = LazyFrame::scan_parquet(file, args)
.map_err(|e| ShellError::GenericError {
error: "Parquet reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
df.into_value(call.head)
} else {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let columns: Option<Vec<String>> = call.get_flag(engine_state, stack, "columns")?;
let r = File::open(&file.item).map_err(|e| ShellError::GenericError {
error: "Error opening file".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?;
let reader = ParquetReader::new(r);
let reader = match columns {
None => reader,
Some(columns) => reader.with_columns(Some(columns)),
};
let df: NuDataFrame = reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "Parquet reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
}
fn from_avro(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let columns: Option<Vec<String>> = call.get_flag(engine_state, stack, "columns")?;
let r = File::open(&file.item).map_err(|e| ShellError::GenericError {
error: "Error opening file".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?;
let reader = AvroReader::new(r);
let reader = match columns {
None => reader,
Some(columns) => reader.with_columns(Some(columns)),
};
let df: NuDataFrame = reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "Avro reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
fn from_ipc(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
if call.has_flag(engine_state, stack, "lazy")? {
let file: String = call.req(engine_state, stack, 0)?;
let args = ScanArgsIpc {
n_rows: None,
cache: true,
rechunk: false,
row_index: None,
memory_map: true,
cloud_options: None,
};
let df: NuLazyFrame = LazyFrame::scan_ipc(file, args)
.map_err(|e| ShellError::GenericError {
error: "IPC reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
df.into_value(call.head)
} else {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let columns: Option<Vec<String>> = call.get_flag(engine_state, stack, "columns")?;
let r = File::open(&file.item).map_err(|e| ShellError::GenericError {
error: "Error opening file".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?;
let reader = IpcReader::new(r);
let reader = match columns {
None => reader,
Some(columns) => reader.with_columns(Some(columns)),
};
let df: NuDataFrame = reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "IPC reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
}
fn from_json(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let file = File::open(&file.item).map_err(|e| ShellError::GenericError {
error: "Error opening file".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?;
let maybe_schema = call
.get_flag(engine_state, stack, "schema")?
.map(|schema| NuSchema::try_from(&schema))
.transpose()?;
let buf_reader = BufReader::new(file);
let reader = JsonReader::new(buf_reader);
let reader = match maybe_schema {
Some(schema) => reader.with_schema(schema.into()),
None => reader,
};
let df: NuDataFrame = reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "Json reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
fn from_jsonl(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
let infer_schema: Option<usize> = call.get_flag(engine_state, stack, "infer-schema")?;
let maybe_schema = call
.get_flag(engine_state, stack, "schema")?
.map(|schema| NuSchema::try_from(&schema))
.transpose()?;
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let file = File::open(&file.item).map_err(|e| ShellError::GenericError {
error: "Error opening file".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?;
let buf_reader = BufReader::new(file);
let reader = JsonReader::new(buf_reader)
.with_json_format(JsonFormat::JsonLines)
.infer_schema_len(infer_schema);
let reader = match maybe_schema {
Some(schema) => reader.with_schema(schema.into()),
None => reader,
};
let df: NuDataFrame = reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "Json lines reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
fn from_csv(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
) -> Result<Value, ShellError> {
let delimiter: Option<Spanned<String>> = call.get_flag(engine_state, stack, "delimiter")?;
let no_header: bool = call.has_flag(engine_state, stack, "no-header")?;
let infer_schema: Option<usize> = call.get_flag(engine_state, stack, "infer-schema")?;
let skip_rows: Option<usize> = call.get_flag(engine_state, stack, "skip-rows")?;
let columns: Option<Vec<String>> = call.get_flag(engine_state, stack, "columns")?;
let maybe_schema = call
.get_flag(engine_state, stack, "schema")?
.map(|schema| NuSchema::try_from(&schema))
.transpose()?;
if call.has_flag(engine_state, stack, "lazy")? {
let file: String = call.req(engine_state, stack, 0)?;
let csv_reader = LazyCsvReader::new(file);
let csv_reader = match delimiter {
None => csv_reader,
Some(d) => {
if d.item.len() != 1 {
return Err(ShellError::GenericError {
error: "Incorrect delimiter".into(),
msg: "Delimiter has to be one character".into(),
span: Some(d.span),
help: None,
inner: vec![],
});
} else {
let delimiter = match d.item.chars().next() {
Some(d) => d as u8,
None => unreachable!(),
};
csv_reader.with_separator(delimiter)
}
}
};
let csv_reader = csv_reader.has_header(!no_header);
let csv_reader = match maybe_schema {
Some(schema) => csv_reader.with_schema(Some(schema.into())),
None => csv_reader,
};
let csv_reader = match infer_schema {
None => csv_reader,
Some(r) => csv_reader.with_infer_schema_length(Some(r)),
};
let csv_reader = match skip_rows {
None => csv_reader,
Some(r) => csv_reader.with_skip_rows(r),
};
let df: NuLazyFrame = csv_reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "CSV reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
df.into_value(call.head)
} else {
let file: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let csv_reader = CsvReader::from_path(&file.item)
.map_err(|e| ShellError::GenericError {
error: "Error creating CSV reader".into(),
msg: e.to_string(),
span: Some(file.span),
help: None,
inner: vec![],
})?
.with_encoding(CsvEncoding::LossyUtf8);
let csv_reader = match delimiter {
None => csv_reader,
Some(d) => {
if d.item.len() != 1 {
return Err(ShellError::GenericError {
error: "Incorrect delimiter".into(),
msg: "Delimiter has to be one character".into(),
span: Some(d.span),
help: None,
inner: vec![],
});
} else {
let delimiter = match d.item.chars().next() {
Some(d) => d as u8,
None => unreachable!(),
};
csv_reader.with_separator(delimiter)
}
}
};
let csv_reader = csv_reader.has_header(!no_header);
let csv_reader = match maybe_schema {
Some(schema) => csv_reader.with_schema(Some(schema.into())),
None => csv_reader,
};
let csv_reader = match infer_schema {
None => csv_reader,
Some(r) => csv_reader.infer_schema(Some(r)),
};
let csv_reader = match skip_rows {
None => csv_reader,
Some(r) => csv_reader.with_skip_rows(r),
};
let csv_reader = match columns {
None => csv_reader,
Some(columns) => csv_reader.with_columns(Some(columns)),
};
let df: NuDataFrame = csv_reader
.finish()
.map_err(|e| ShellError::GenericError {
error: "CSV reader error".into(),
msg: format!("{e:?}"),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(df.into_value(call.head))
}
}
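The dispatch in `command` prefers the explicit `--type` flag and only falls back to the file extension. A minimal, self-contained sketch of that selection logic (simplified, with the error cases omitted):

```rust
use std::path::Path;

// Pick the format string: the explicit flag wins, otherwise the extension.
fn pick_format(flag: Option<&str>, path: &Path) -> Option<String> {
    flag.map(str::to_owned)
        .or_else(|| path.extension().map(|e| e.to_string_lossy().into_owned()))
}

fn main() {
    let p = Path::new("data.parquet");
    assert_eq!(pick_format(None, p).as_deref(), Some("parquet"));
    assert_eq!(pick_format(Some("csv"), p).as_deref(), Some("csv")); // flag overrides
}
```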


@ -1,104 +0,0 @@
use crate::dataframe::{
eager::SQLContext,
values::{Column, NuDataFrame, NuLazyFrame},
};
use nu_engine::command_prelude::*;
// attribution:
// sql_context.rs, and sql_expr.rs were copied from polars-sql. thank you.
// maybe we should just use the crate at some point but it's not published yet.
// https://github.com/pola-rs/polars/tree/master/polars-sql
#[derive(Clone)]
pub struct QueryDf;
impl Command for QueryDf {
fn name(&self) -> &str {
"dfr query"
}
fn usage(&self) -> &str {
"Query dataframe using SQL. Note: The dataframe is always named 'df' in your query's from clause."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("sql", SyntaxShape::String, "sql query")
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn search_terms(&self) -> Vec<&str> {
vec!["dataframe", "sql", "search"]
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Query dataframe using SQL",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr query 'select a from df'",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let sql_query: String = call.req(engine_state, stack, 0)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let mut ctx = SQLContext::new();
ctx.register("df", &df.df);
let df_sql = ctx
.execute(&sql_query)
.map_err(|e| ShellError::GenericError {
error: "Dataframe Error".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
let lazy = NuLazyFrame::new(false, df_sql);
let eager = lazy.collect(call.head)?;
let value = Value::custom(Box::new(eager), call.head);
Ok(PipelineData::Value(value, None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(QueryDf {})])
}
}


@ -1,186 +0,0 @@
use crate::dataframe::{
utils::extract_strings,
values::{Column, NuDataFrame, NuLazyFrame},
};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct RenameDF;
impl Command for RenameDF {
fn name(&self) -> &str {
"dfr rename"
}
fn usage(&self) -> &str {
"Rename a dataframe column."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"columns",
SyntaxShape::Any,
"Column(s) to be renamed. A string or list of strings",
)
.required(
"new names",
SyntaxShape::Any,
"New names for the selected column(s). A string or list of strings",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe or lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Renames a series",
example: "[5 6 7 8] | dfr into-df | dfr rename '0' new_name",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"new_name".to_string(),
vec![
Value::test_int(5),
Value::test_int(6),
Value::test_int(7),
Value::test_int(8),
],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Renames a dataframe column",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr rename a a_new",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a_new".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Renames two dataframe columns",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr rename [a b] [a_new b_new]",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a_new".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b_new".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuLazyFrame::can_downcast(&value) {
let df = NuLazyFrame::try_from_value(value)?;
command_lazy(engine_state, stack, call, df)
} else {
let df = NuDataFrame::try_from_value(value)?;
command_eager(engine_state, stack, call, df)
}
}
}
fn command_eager(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
mut df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let columns: Value = call.req(engine_state, stack, 0)?;
let columns = extract_strings(columns)?;
let new_names: Value = call.req(engine_state, stack, 1)?;
let new_names = extract_strings(new_names)?;
for (from, to) in columns.iter().zip(new_names.iter()) {
df.as_mut()
.rename(from, to)
.map_err(|e| ShellError::GenericError {
error: "Error renaming".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})?;
}
Ok(PipelineData::Value(df.into_value(call.head), None))
}
fn command_lazy(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
lazy: NuLazyFrame,
) -> Result<PipelineData, ShellError> {
let columns: Value = call.req(engine_state, stack, 0)?;
let columns = extract_strings(columns)?;
let new_names: Value = call.req(engine_state, stack, 1)?;
let new_names = extract_strings(new_names)?;
if columns.len() != new_names.len() {
let value: Value = call.req(engine_state, stack, 1)?;
return Err(ShellError::IncompatibleParametersSingle {
msg: "New name list has different size to column list".into(),
span: value.span(),
});
}
let lazy = lazy.into_polars();
let lazy: NuLazyFrame = lazy.rename(&columns, &new_names).into();
Ok(PipelineData::Value(lazy.into_value(call.head)?, None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(RenameDF {})])
}
}
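For reference, a small standalone sketch of the two polars rename paths the command dispatches between; this assumes a polars release with these signatures (`DataFrame::rename` and `LazyFrame::rename`), which vary somewhat between versions:

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let mut df = df!("a" => [1, 3], "b" => [2, 4])?;
    df.rename("a", "a_new")?;                            // eager: renames in place
    let lf = df.clone().lazy().rename(["b"], ["b_new"]); // lazy: rewrites the plan
    println!("{:?}", lf.collect()?);
    Ok(())
}
```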


@ -1,127 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::{prelude::NamedFrom, series::Series};
#[derive(Clone)]
pub struct SampleDF;
impl Command for SampleDF {
fn name(&self) -> &str {
"dfr sample"
}
fn usage(&self) -> &str {
"Create a sample dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.named(
"n-rows",
SyntaxShape::Int,
"number of rows to be taken from dataframe",
Some('n'),
)
.named(
"fraction",
SyntaxShape::Number,
"fraction of dataframe to be taken",
Some('f'),
)
.named(
"seed",
SyntaxShape::Number,
"seed for the selection",
Some('s'),
)
.switch("replace", "sample with replacement", Some('e'))
.switch("shuffle", "shuffle sample", Some('u'))
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Sample rows from dataframe",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr sample --n-rows 1",
result: None, // No expected value because sampling is random
},
Example {
description: "Shows sample row using fraction and replace",
example:
"[[a b]; [1 2] [3 4] [5 6]] | dfr into-df | dfr sample --fraction 0.5 --replace",
result: None, // No expected value because sampling is random
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let rows: Option<Spanned<i64>> = call.get_flag(engine_state, stack, "n-rows")?;
let fraction: Option<Spanned<f64>> = call.get_flag(engine_state, stack, "fraction")?;
let seed: Option<u64> = call
.get_flag::<i64>(engine_state, stack, "seed")?
.map(|val| val as u64);
let replace: bool = call.has_flag(engine_state, stack, "replace")?;
let shuffle: bool = call.has_flag(engine_state, stack, "shuffle")?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
match (rows, fraction) {
(Some(rows), None) => df
.as_ref()
.sample_n(&Series::new("s", &[rows.item]), replace, shuffle, seed)
.map_err(|e| ShellError::GenericError {
error: "Error creating sample".into(),
msg: e.to_string(),
span: Some(rows.span),
help: None,
inner: vec![],
}),
(None, Some(frac)) => df
.as_ref()
.sample_frac(&Series::new("frac", &[frac.item]), replace, shuffle, seed)
.map_err(|e| ShellError::GenericError {
error: "Error creating sample".into(),
msg: e.to_string(),
span: Some(frac.span),
help: None,
inner: vec![],
}),
(Some(_), Some(_)) => Err(ShellError::GenericError {
error: "Incompatible flags".into(),
msg: "Only one selection criterion allowed".into(),
span: Some(call.head),
help: None,
inner: vec![],
}),
(None, None) => Err(ShellError::GenericError {
error: "No selection".into(),
msg: "No selection criterion was found".into(),
span: Some(call.head),
help: Some("Perhaps you want to use the flag -n or -f".into()),
inner: vec![],
}),
}
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}


@ -1,112 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct SchemaDF;
impl Command for SchemaDF {
fn name(&self) -> &str {
"dfr schema"
}
fn usage(&self) -> &str {
"Show schema for a dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.switch("datatype-list", "lists the supported dataframe datatypes", Some('l'))
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Dataframe schema",
example: r#"[[a b]; [1 "foo"] [3 "bar"]] | dfr into-df | dfr schema"#,
result: Some(Value::record(
record! {
"a" => Value::string("i64", Span::test_data()),
"b" => Value::string("str", Span::test_data()),
},
Span::test_data(),
)),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
if call.has_flag(engine_state, stack, "datatype-list")? {
Ok(PipelineData::Value(datatype_list(Span::unknown()), None))
} else {
command(engine_state, stack, call, input)
}
}
}
fn command(
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let schema = df.schema();
let value: Value = schema.into();
Ok(PipelineData::Value(value, None))
}
fn datatype_list(span: Span) -> Value {
let types: Vec<Value> = [
("null", ""),
("bool", ""),
("u8", ""),
("u16", ""),
("u32", ""),
("u64", ""),
("i8", ""),
("i16", ""),
("i32", ""),
("i64", ""),
("f32", ""),
("f64", ""),
("str", ""),
("binary", ""),
("date", ""),
("datetime<time_unit: (ms, us, ns) timezone (optional)>", "Time Unit can be: milliseconds: ms, microseconds: us, nanoseconds: ns. Timezone wildcard is *. Other Timezone examples: UTC, America/Los_Angeles."),
("duration<time_unit: (ms, us, ns)>", "Time Unit can be: milliseconds: ms, microseconds: us, nanoseconds: ns."),
("time", ""),
("object", ""),
("unknown", ""),
("list<dtype>", ""),
]
.iter()
.map(|(dtype, note)| {
Value::record(record! {
"dtype" => Value::string(*dtype, span),
"note" => Value::string(*note, span),
},
span)
})
.collect();
Value::list(types, span)
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(SchemaDF {})])
}
}


@ -1,82 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ShapeDF;
impl Command for ShapeDF {
fn name(&self) -> &str {
"dfr shape"
}
fn usage(&self) -> &str {
"Shows column and row size for a dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Shows row and column shape",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr shape",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("rows".to_string(), vec![Value::test_int(2)]),
Column::new("columns".to_string(), vec![Value::test_int(2)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let rows = Value::int(df.as_ref().height() as i64, call.head);
let cols = Value::int(df.as_ref().width() as i64, call.head);
let rows_col = Column::new("rows".to_string(), vec![rows]);
let cols_col = Column::new("columns".to_string(), vec![cols]);
NuDataFrame::try_from_columns(vec![rows_col, cols_col], None)
.map(|df| PipelineData::Value(df.into_value(call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(ShapeDF {})])
}
}


@ -1,84 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct SliceDF;
impl Command for SliceDF {
fn name(&self) -> &str {
"dfr slice"
}
fn usage(&self) -> &str {
"Creates new dataframe from a slice of rows."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("offset", SyntaxShape::Int, "start of slice")
.required("size", SyntaxShape::Int, "size of slice")
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Create new dataframe from a slice of the rows",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr slice 0 1",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(1)]),
Column::new("b".to_string(), vec![Value::test_int(2)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let offset: i64 = call.req(engine_state, stack, 0)?;
let size: usize = call.req(engine_state, stack, 1)?;
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let res = df.as_ref().slice(offset, size);
Ok(PipelineData::Value(
NuDataFrame::dataframe_into_value(res, call.head),
None,
))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(SliceDF {})])
}
}


@ -1,228 +0,0 @@
use crate::dataframe::eager::sql_expr::parse_sql_expr;
use polars::error::{ErrString, PolarsError};
use polars::prelude::{col, DataFrame, DataType, IntoLazy, LazyFrame};
use sqlparser::ast::{
Expr as SqlExpr, GroupByExpr, Select, SelectItem, SetExpr, Statement, TableFactor,
Value as SQLValue,
};
use sqlparser::dialect::GenericDialect;
use sqlparser::parser::Parser;
use std::collections::HashMap;
#[derive(Default)]
pub struct SQLContext {
table_map: HashMap<String, LazyFrame>,
dialect: GenericDialect,
}
impl SQLContext {
pub fn new() -> Self {
Self {
table_map: HashMap::new(),
dialect: GenericDialect,
}
}
pub fn register(&mut self, name: &str, df: &DataFrame) {
self.table_map.insert(name.to_owned(), df.clone().lazy());
}
fn execute_select(&self, select_stmt: &Select) -> Result<LazyFrame, PolarsError> {
// Determine the involved dataframe.
// Implicit joins require more work in the query parser; explicit joins are preferred for now.
let tbl = select_stmt.from.first().ok_or_else(|| {
PolarsError::ComputeError(ErrString::from("No table found in select statement"))
})?;
let mut alias_map = HashMap::new();
let tbl_name = match &tbl.relation {
TableFactor::Table { name, alias, .. } => {
let tbl_name = name
.0
.first()
.ok_or_else(|| {
PolarsError::ComputeError(ErrString::from(
"No table found in select statement",
))
})?
.value
.to_string();
if self.table_map.contains_key(&tbl_name) {
if let Some(alias) = alias {
alias_map.insert(alias.name.value.clone(), tbl_name.to_owned());
};
tbl_name
} else {
return Err(PolarsError::ComputeError(
format!("Table name {tbl_name} was not found").into(),
));
}
}
// Only bare tables, optionally with an alias, are supported for now
_ => return Err(PolarsError::ComputeError("Not implemented".into())),
};
let df = &self.table_map[&tbl_name];
let mut raw_projection_before_alias: HashMap<String, usize> = HashMap::new();
let mut contain_wildcard = false;
// Filter Expression
let df = match select_stmt.selection.as_ref() {
Some(expr) => {
let filter_expression = parse_sql_expr(expr)?;
df.clone().filter(filter_expression)
}
None => df.clone(),
};
// Column Projections
let projection = select_stmt
.projection
.iter()
.enumerate()
.map(|(i, select_item)| {
Ok(match select_item {
SelectItem::UnnamedExpr(expr) => {
let expr = parse_sql_expr(expr)?;
raw_projection_before_alias.insert(format!("{expr:?}"), i);
expr
}
SelectItem::ExprWithAlias { expr, alias } => {
let expr = parse_sql_expr(expr)?;
raw_projection_before_alias.insert(format!("{expr:?}"), i);
expr.alias(&alias.value)
}
SelectItem::QualifiedWildcard(_, _) | SelectItem::Wildcard(_) => {
contain_wildcard = true;
col("*")
}
})
})
.collect::<Result<Vec<_>, PolarsError>>()?;
// Check for a GROUP BY clause.
// Done after the projection, since the clause may refer to projected columns by position (a number).
let group_by = match &select_stmt.group_by {
GroupByExpr::All =>
Err(
PolarsError::ComputeError("Group-By Error: Only positive number or expression are supported, not all".into())
)?,
GroupByExpr::Expressions(expressions) => expressions
}
.iter()
.map(
|e|match e {
SqlExpr::Value(SQLValue::Number(idx, _)) => {
let idx = match idx.parse::<usize>() {
Ok(0)| Err(_) => Err(
PolarsError::ComputeError(
format!("Group-By Error: Only positive number or expression are supported, got {idx}").into()
)),
Ok(idx) => Ok(idx)
}?;
Ok(projection[idx].clone())
}
SqlExpr::Value(_) => Err(
PolarsError::ComputeError("Group-By Error: Only positive number or expression are supported".into())
),
_ => parse_sql_expr(e)
}
)
.collect::<Result<Vec<_>, PolarsError>>()?;
let df = if group_by.is_empty() {
df.select(projection)
} else {
// Reconcile the group-by with the projection, since SQL and polars differ here.
// Return an error on a wildcard; it can't be processed in a group-by.
if contain_wildcard {
return Err(PolarsError::ComputeError(
"Group-By Error: Can't process wildcard in group-by".into(),
));
}
// By default polars places the group-by columns at the front, so we need a
// container that records each group-by column's position and its position in
// the final aggregate projection; check the schema for the group-by columns
// and their projected columns, keeping the original index.
let (exclude_expr, groupby_pos): (Vec<_>, Vec<_>) = group_by
.iter()
.map(|expr| raw_projection_before_alias.get(&format!("{expr:?}")))
.enumerate()
.filter(|(_, proj_p)| proj_p.is_some())
.map(|(gb_p, proj_p)| (*proj_p.unwrap_or(&0), (*proj_p.unwrap_or(&0), gb_p)))
.unzip();
let (agg_projection, agg_proj_pos): (Vec<_>, Vec<_>) = projection
.iter()
.enumerate()
.filter(|(i, _)| !exclude_expr.contains(i))
.enumerate()
.map(|(agg_pj, (proj_p, expr))| (expr.clone(), (proj_p, agg_pj + group_by.len())))
.unzip();
let agg_df = df.group_by(group_by).agg(agg_projection);
let mut final_proj_pos = groupby_pos
.into_iter()
.chain(agg_proj_pos)
.collect::<Vec<_>>();
final_proj_pos.sort_by(|(proj_pa, _), (proj_pb, _)| proj_pa.cmp(proj_pb));
let final_proj = final_proj_pos
.into_iter()
.map(|(_, shm_p)| {
col(agg_df
.clone()
// FIXME: had to do this mess to get get_at_index to work, not sure why; needs a cleaner approach
.collect()
.unwrap_or_default()
.schema()
.get_at_index(shm_p)
.unwrap_or((&"".into(), &DataType::Null))
.0)
})
.collect::<Vec<_>>();
agg_df.select(final_proj)
};
Ok(df)
}
pub fn execute(&self, query: &str) -> Result<LazyFrame, PolarsError> {
let ast = Parser::parse_sql(&self.dialect, query)
.map_err(|e| PolarsError::ComputeError(format!("{e:?}").into()))?;
if ast.len() != 1 {
Err(PolarsError::ComputeError(
"One and only one statement at a time please".into(),
))
} else {
let ast = ast
.first()
.ok_or_else(|| PolarsError::ComputeError(ErrString::from("No statement found")))?;
Ok(match ast {
Statement::Query(query) => {
let rs = match &*query.body {
SetExpr::Select(select_stmt) => self.execute_select(select_stmt)?,
_ => {
return Err(PolarsError::ComputeError(
"INSERT, UPDATE is not supported for polars".into(),
))
}
};
match &query.limit {
Some(SqlExpr::Value(SQLValue::Number(nrow, _))) => {
let nrow = nrow.parse().map_err(|err| {
PolarsError::ComputeError(
format!("Conversion Error: {err:?}").into(),
)
})?;
rs.limit(nrow)
}
None => rs,
_ => {
return Err(PolarsError::ComputeError(
"Only support number argument to LIMIT clause".into(),
))
}
}
}
_ => {
return Err(PolarsError::ComputeError(
format!("Statement type {ast:?} is not supported").into(),
))
}
})
}
}
}
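Mirroring what `dfr query` does internally, a minimal sketch of driving this `SQLContext` directly (it assumes the surrounding module is in scope and uses only `register` and `execute` as defined above):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let frame = df!("a" => [1, 3], "b" => [2, 4])?;
    let mut ctx = SQLContext::new();
    ctx.register("df", &frame);                // table is addressed as "df"
    let out = ctx.execute("SELECT a FROM df WHERE a > 1")?.collect()?;
    println!("{out:?}");                       // one column, one row: a = 3
    Ok(())
}
```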


@ -1,200 +0,0 @@
use polars::error::PolarsError;
use polars::prelude::{col, lit, DataType, Expr, LiteralValue, PolarsResult as Result, TimeUnit};
use sqlparser::ast::{
ArrayElemTypeDef, BinaryOperator as SQLBinaryOperator, DataType as SQLDataType,
Expr as SqlExpr, Function as SQLFunction, Value as SqlValue, WindowType,
};
fn map_sql_polars_datatype(data_type: &SQLDataType) -> Result<DataType> {
Ok(match data_type {
SQLDataType::Char(_)
| SQLDataType::Varchar(_)
| SQLDataType::Uuid
| SQLDataType::Clob(_)
| SQLDataType::Text
| SQLDataType::String(_) => DataType::String,
SQLDataType::Float(_) => DataType::Float32,
SQLDataType::Real => DataType::Float32,
SQLDataType::Double => DataType::Float64,
SQLDataType::TinyInt(_) => DataType::Int8,
SQLDataType::UnsignedTinyInt(_) => DataType::UInt8,
SQLDataType::SmallInt(_) => DataType::Int16,
SQLDataType::UnsignedSmallInt(_) => DataType::UInt16,
SQLDataType::Int(_) => DataType::Int32,
SQLDataType::UnsignedInt(_) => DataType::UInt32,
SQLDataType::BigInt(_) => DataType::Int64,
SQLDataType::UnsignedBigInt(_) => DataType::UInt64,
SQLDataType::Boolean => DataType::Boolean,
SQLDataType::Date => DataType::Date,
SQLDataType::Time(_, _) => DataType::Time,
SQLDataType::Timestamp(_, _) => DataType::Datetime(TimeUnit::Microseconds, None),
SQLDataType::Interval => DataType::Duration(TimeUnit::Microseconds),
SQLDataType::Array(array_type_def) => match array_type_def {
ArrayElemTypeDef::AngleBracket(inner_type)
| ArrayElemTypeDef::SquareBracket(inner_type) => {
DataType::List(Box::new(map_sql_polars_datatype(inner_type)?))
}
_ => {
return Err(PolarsError::ComputeError(
"SQL Datatype Array(None) was not supported in polars-sql yet!".into(),
))
}
},
_ => {
return Err(PolarsError::ComputeError(
format!("SQL Datatype {data_type:?} was not supported in polars-sql yet!").into(),
))
}
})
}
fn cast_(expr: Expr, data_type: &SQLDataType) -> Result<Expr> {
let polars_type = map_sql_polars_datatype(data_type)?;
Ok(expr.cast(polars_type))
}
fn binary_op_(left: Expr, right: Expr, op: &SQLBinaryOperator) -> Result<Expr> {
Ok(match op {
SQLBinaryOperator::Plus => left + right,
SQLBinaryOperator::Minus => left - right,
SQLBinaryOperator::Multiply => left * right,
SQLBinaryOperator::Divide => left / right,
SQLBinaryOperator::Modulo => left % right,
SQLBinaryOperator::StringConcat => {
left.cast(DataType::String) + right.cast(DataType::String)
}
SQLBinaryOperator::Gt => left.gt(right),
SQLBinaryOperator::Lt => left.lt(right),
SQLBinaryOperator::GtEq => left.gt_eq(right),
SQLBinaryOperator::LtEq => left.lt_eq(right),
SQLBinaryOperator::Eq => left.eq(right),
SQLBinaryOperator::NotEq => left.eq(right).not(),
SQLBinaryOperator::And => left.and(right),
SQLBinaryOperator::Or => left.or(right),
SQLBinaryOperator::Xor => left.xor(right),
_ => {
return Err(PolarsError::ComputeError(
format!("SQL Operator {op:?} was not supported in polars-sql yet!").into(),
))
}
})
}
fn literal_expr(value: &SqlValue) -> Result<Expr> {
Ok(match value {
SqlValue::Number(s, _) => {
// Check for existence of decimal separator dot
if s.contains('.') {
s.parse::<f64>().map(lit).map_err(|_| {
PolarsError::ComputeError(format!("Can't parse literal {s:?}").into())
})
} else {
s.parse::<i64>().map(lit).map_err(|_| {
PolarsError::ComputeError(format!("Can't parse literal {s:?}").into())
})
}?
}
SqlValue::SingleQuotedString(s) => lit(s.clone()),
SqlValue::NationalStringLiteral(s) => lit(s.clone()),
SqlValue::HexStringLiteral(s) => lit(s.clone()),
SqlValue::DoubleQuotedString(s) => lit(s.clone()),
SqlValue::Boolean(b) => lit(*b),
SqlValue::Null => Expr::Literal(LiteralValue::Null),
_ => {
return Err(PolarsError::ComputeError(
format!("Parsing SQL Value {value:?} was not supported in polars-sql yet!").into(),
))
}
})
}
pub fn parse_sql_expr(expr: &SqlExpr) -> Result<Expr> {
Ok(match expr {
SqlExpr::Identifier(e) => col(&e.value),
SqlExpr::BinaryOp { left, op, right } => {
let left = parse_sql_expr(left)?;
let right = parse_sql_expr(right)?;
binary_op_(left, right, op)?
}
SqlExpr::Function(sql_function) => parse_sql_function(sql_function)?,
SqlExpr::Cast {
expr,
data_type,
format: _,
} => cast_(parse_sql_expr(expr)?, data_type)?,
SqlExpr::Nested(expr) => parse_sql_expr(expr)?,
SqlExpr::Value(value) => literal_expr(value)?,
_ => {
return Err(PolarsError::ComputeError(
format!("Expression: {expr:?} was not supported in polars-sql yet!").into(),
))
}
})
}
fn apply_window_spec(expr: Expr, window_type: Option<&WindowType>) -> Result<Expr> {
Ok(match &window_type {
Some(wtype) => match wtype {
WindowType::WindowSpec(window_spec) => {
// Handle a simple window specification: partition by comes first
let partition_by = window_spec
.partition_by
.iter()
.map(parse_sql_expr)
.collect::<Result<Vec<_>>>()?;
expr.over(partition_by)
// ORDER BY and row ranges are not supported at the moment
}
// TODO: make NamedWindow work
WindowType::NamedWindow(_named) => {
return Err(PolarsError::ComputeError(
format!("Expression: {expr:?} was not supported in polars-sql yet!").into(),
))
}
},
None => expr,
})
}
fn parse_sql_function(sql_function: &SQLFunction) -> Result<Expr> {
use sqlparser::ast::{FunctionArg, FunctionArgExpr};
// Function names usually have no namespace, so take the first segment as the name
let function_name = sql_function.name.0[0].value.to_ascii_lowercase();
let args = sql_function
.args
.iter()
.map(|arg| match arg {
FunctionArg::Named { arg, .. } => arg,
FunctionArg::Unnamed(arg) => arg,
})
.collect::<Vec<_>>();
Ok(
match (
function_name.as_str(),
args.as_slice(),
sql_function.distinct,
) {
("sum", [FunctionArgExpr::Expr(expr)], false) => {
apply_window_spec(parse_sql_expr(expr)?, sql_function.over.as_ref())?.sum()
}
("count", [FunctionArgExpr::Expr(expr)], false) => {
apply_window_spec(parse_sql_expr(expr)?, sql_function.over.as_ref())?.count()
}
("count", [FunctionArgExpr::Expr(expr)], true) => {
apply_window_spec(parse_sql_expr(expr)?, sql_function.over.as_ref())?.n_unique()
}
// Special case for wildcard args to count function.
("count", [FunctionArgExpr::Wildcard], false) => lit(1i32).count(),
_ => {
return Err(PolarsError::ComputeError(
format!(
"Function {function_name:?} with args {args:?} was not supported in polars-sql yet!"
)
.into(),
))
}
},
)
}
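A small sketch of feeding `parse_sql_expr` a hand-built AST, equivalent to the SQL fragment `a > 1` (assumes this module is in scope; `Ident` comes from `sqlparser::ast`):

```rust
use polars::prelude::PolarsResult;
use sqlparser::ast::{BinaryOperator, Expr as SqlExpr, Ident, Value as SqlValue};

fn main() -> PolarsResult<()> {
    let ast = SqlExpr::BinaryOp {
        left: Box::new(SqlExpr::Identifier(Ident::new("a"))),
        op: BinaryOperator::Gt,
        right: Box::new(SqlExpr::Value(SqlValue::Number("1".into(), false))),
    };
    let expr = parse_sql_expr(&ast)?; // yields col("a").gt(lit(1i64))
    println!("{expr:?}");
    Ok(())
}
```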


@ -1,279 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame};
use nu_engine::command_prelude::*;
use polars::{
chunked_array::ChunkedArray,
prelude::{
AnyValue, DataFrame, DataType, Float64Type, IntoSeries, NewChunkedArray,
QuantileInterpolOptions, Series, StringType,
},
};
#[derive(Clone)]
pub struct Summary;
impl Command for Summary {
fn name(&self) -> &str {
"dfr summary"
}
fn usage(&self) -> &str {
"For a dataframe, produces descriptive statistics (summary statistics) for its numeric columns."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.category(Category::Custom("dataframe".into()))
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.named(
"quantiles",
SyntaxShape::Table(vec![]),
"provide optional quantiles",
Some('q'),
)
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "list dataframe descriptives",
example: "[[a b]; [1 1] [1 1]] | dfr into-df | dfr summary",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"descriptor".to_string(),
vec![
Value::test_string("count"),
Value::test_string("sum"),
Value::test_string("mean"),
Value::test_string("median"),
Value::test_string("std"),
Value::test_string("min"),
Value::test_string("25%"),
Value::test_string("50%"),
Value::test_string("75%"),
Value::test_string("max"),
],
),
Column::new(
"a (i64)".to_string(),
vec![
Value::test_float(2.0),
Value::test_float(2.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(0.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
],
),
Column::new(
"b (i64)".to_string(),
vec![
Value::test_float(2.0),
Value::test_float(2.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(0.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
Value::test_float(1.0),
],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let quantiles: Option<Vec<Value>> = call.get_flag(engine_state, stack, "quantiles")?;
let quantiles = quantiles.map(|values| {
values
.iter()
.map(|value| {
let span = value.span();
match value {
Value::Float { val, .. } => {
if (0.0..=1.0).contains(val) {
Ok(*val)
} else {
Err(ShellError::GenericError {
error: "Incorrect value for quantile".into(),
msg: "value should be between 0 and 1".into(),
span: Some(span),
help: None,
inner: vec![],
})
}
}
Value::Error { error, .. } => Err(*error.clone()),
_ => Err(ShellError::GenericError {
error: "Incorrect value for quantile".into(),
msg: "value should be a float".into(),
span: Some(span),
help: None,
inner: vec![],
}),
}
})
.collect::<Result<Vec<f64>, ShellError>>()
});
let quantiles = match quantiles {
Some(quantiles) => quantiles?,
None => vec![0.25, 0.50, 0.75],
};
let mut quantiles_labels = quantiles
.iter()
.map(|q| Some(format!("{}%", q * 100.0)))
.collect::<Vec<Option<String>>>();
let mut labels = vec![
Some("count".to_string()),
Some("sum".to_string()),
Some("mean".to_string()),
Some("median".to_string()),
Some("std".to_string()),
Some("min".to_string()),
];
labels.append(&mut quantiles_labels);
labels.push(Some("max".to_string()));
let df = NuDataFrame::try_from_pipeline(input, call.head)?;
let names = ChunkedArray::<StringType>::from_slice_options("descriptor", &labels).into_series();
let head = std::iter::once(names);
let tail = df
.as_ref()
.get_columns()
.iter()
.filter(|col| !matches!(col.dtype(), &DataType::Object("object", _)))
.map(|col| {
let count = col.len() as f64;
let sum = col.sum_as_series().ok().and_then(|series| {
series
.cast(&DataType::Float64)
.ok()
.and_then(|ca| match ca.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
})
});
let mean = match col.mean_as_series().get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
};
let median = match col.median_as_series() {
Ok(v) => match v.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
},
_ => None,
};
let std = match col.std_as_series(0) {
Ok(v) => match v.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
},
_ => None,
};
let min = col.min_as_series().ok().and_then(|series| {
series
.cast(&DataType::Float64)
.ok()
.and_then(|ca| match ca.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
})
});
let mut quantiles = quantiles
.clone()
.into_iter()
.map(|q| {
col.quantile_as_series(q, QuantileInterpolOptions::default())
.ok()
.and_then(|ca| ca.cast(&DataType::Float64).ok())
.and_then(|ca| match ca.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
})
})
.collect::<Vec<Option<f64>>>();
let max = col.max_as_series().ok().and_then(|series| {
series
.cast(&DataType::Float64)
.ok()
.and_then(|ca| match ca.get(0) {
Ok(AnyValue::Float64(v)) => Some(v),
_ => None,
})
});
let mut descriptors = vec![Some(count), sum, mean, median, std, min];
descriptors.append(&mut quantiles);
descriptors.push(max);
let name = format!("{} ({})", col.name(), col.dtype());
ChunkedArray::<Float64Type>::from_slice_options(&name, &descriptors).into_series()
});
let res = head.chain(tail).collect::<Vec<Series>>();
DataFrame::new(res)
.map_err(|e| ShellError::GenericError {
error: "Dataframe Error".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(Summary {})])
}
}
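The output layout built above is one `descriptor` label column followed by one `f64` column of statistics per input column, whose name carries the original dtype. A shape-only sketch (real statistics omitted, values hard-coded for illustration):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let labels = Series::new("descriptor", &["count", "mean", "max"]);
    let a_stats = Series::new("a (i64)", &[2.0f64, 1.0, 1.0]); // name carries the dtype
    let summary = DataFrame::new(vec![labels, a_stats])?;
    println!("{summary:?}");
    Ok(())
}
```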


@ -1,148 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame};
use nu_engine::command_prelude::*;
use polars::prelude::DataType;
#[derive(Clone)]
pub struct TakeDF;
impl Command for TakeDF {
fn name(&self) -> &str {
"dfr take"
}
fn usage(&self) -> &str {
"Creates new dataframe using the given indices."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"indices",
SyntaxShape::Any,
"list of indices used to take data",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Takes selected rows from dataframe",
example: r#"let df = ([[a b]; [4 1] [5 2] [4 3]] | dfr into-df);
let indices = ([0 2] | dfr into-df);
$df | dfr take $indices"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(4), Value::test_int(4)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Takes selected rows from series",
example: r#"let series = ([4 1 5 2 4 3] | dfr into-df);
let indices = ([0 2] | dfr into-df);
$series | dfr take $indices"#,
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"0".to_string(),
vec![Value::test_int(4), Value::test_int(5)],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let index_value: Value = call.req(engine_state, stack, 0)?;
let index_span = index_value.span();
let index = NuDataFrame::try_from_value(index_value)?.as_series(index_span)?;
let casted = match index.dtype() {
DataType::UInt32 | DataType::UInt64 | DataType::Int32 | DataType::Int64 => index
.cast(&DataType::UInt32)
.map_err(|e| ShellError::GenericError {
error: "Error casting index list".into(),
msg: e.to_string(),
span: Some(index_span),
help: None,
inner: vec![],
}),
_ => Err(ShellError::GenericError {
error: "Incorrect type".into(),
msg: "Series with incorrect type".into(),
span: Some(call.head),
help: Some("Consider using a Series with an int type".into()),
inner: vec![],
}),
}?;
let indices = casted.u32().map_err(|e| ShellError::GenericError {
error: "Error casting index list".into(),
msg: e.to_string(),
span: Some(index_span),
help: None,
inner: vec![],
})?;
NuDataFrame::try_from_pipeline(input, call.head).and_then(|df| {
df.as_ref()
.take(indices)
.map_err(|e| ShellError::GenericError {
error: "Error taking values".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})
.map(|df| PipelineData::Value(NuDataFrame::dataframe_into_value(df, call.head), None))
})
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(TakeDF {})])
}
}
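The cast to `UInt32` above exists because `DataFrame::take` wants the native index type. A standalone sketch, assuming polars' default (non-bigidx) build where `IdxCa` is a `UInt32Chunked`:

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let df = df!("a" => [4, 5, 4], "b" => [1, 2, 3])?;
    let idx = UInt32Chunked::from_slice("idx", &[0, 2]); // rows 0 and 2
    let taken = df.take(&idx)?;
    println!("{taken:?}"); // a: [4, 4], b: [1, 3]
    Ok(())
}
```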


@ -1,79 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::prelude::{IpcWriter, SerWriter};
use std::{fs::File, path::PathBuf};
#[derive(Clone)]
pub struct ToArrow;
impl Command for ToArrow {
fn name(&self) -> &str {
"dfr to-arrow"
}
fn usage(&self) -> &str {
"Saves dataframe to arrow file."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("file", SyntaxShape::Filepath, "file path to save dataframe")
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Saves dataframe to arrow file",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-arrow test.arrow",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let file_name: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let mut df = NuDataFrame::try_from_pipeline(input, call.head)?;
let mut file = File::create(&file_name.item).map_err(|e| ShellError::GenericError {
error: "Error with file name".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
IpcWriter::new(&mut file)
.finish(df.as_mut())
.map_err(|e| ShellError::GenericError {
error: "Error saving file".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let file_value = Value::string(format!("saved {:?}", &file_name.item), file_name.span);
Ok(PipelineData::Value(
Value::list(vec![file_value], call.head),
None,
))
}


@ -1,109 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars_io::{
avro::{AvroCompression, AvroWriter},
SerWriter,
};
use std::{fs::File, path::PathBuf};
#[derive(Clone)]
pub struct ToAvro;
impl Command for ToAvro {
fn name(&self) -> &str {
"dfr to-avro"
}
fn usage(&self) -> &str {
"Saves dataframe to avro file."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.named(
"compression",
SyntaxShape::String,
"use compression, supports deflate or snappy",
Some('c'),
)
.required("file", SyntaxShape::Filepath, "file path to save dataframe")
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Saves dataframe to avro file",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-avro test.avro",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn get_compression(call: &Call) -> Result<Option<AvroCompression>, ShellError> {
if let Some((compression, span)) = call
.get_flag_expr("compression")
.and_then(|e| e.as_string().map(|s| (s, e.span)))
{
match compression.as_ref() {
"snappy" => Ok(Some(AvroCompression::Snappy)),
"deflate" => Ok(Some(AvroCompression::Deflate)),
_ => Err(ShellError::IncorrectValue {
msg: "compression must be one of deflate or snappy".to_string(),
val_span: span,
call_span: span,
}),
}
} else {
Ok(None)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let file_name: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let compression = get_compression(call)?;
let mut df = NuDataFrame::try_from_pipeline(input, call.head)?;
let file = File::create(&file_name.item).map_err(|e| ShellError::GenericError {
error: "Error with file name".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
AvroWriter::new(file)
.with_compression(compression)
.finish(df.as_mut())
.map_err(|e| ShellError::GenericError {
error: "Error saving file".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let file_value = Value::string(format!("saved {:?}", &file_name.item), file_name.span);
Ok(PipelineData::Value(
Value::list(vec![file_value], call.head),
None,
))
}


@ -1,125 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::prelude::{CsvWriter, SerWriter};
use std::{fs::File, path::PathBuf};
#[derive(Clone)]
pub struct ToCSV;
impl Command for ToCSV {
fn name(&self) -> &str {
"dfr to-csv"
}
fn usage(&self) -> &str {
"Saves dataframe to CSV file."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("file", SyntaxShape::Filepath, "file path to save dataframe")
.named(
"delimiter",
SyntaxShape::String,
"file delimiter character",
Some('d'),
)
.switch("no-header", "Indicates if file doesn't have header", None)
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Saves dataframe to CSV file",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-csv test.csv",
result: None,
},
Example {
description: "Saves dataframe to CSV file using other delimiter",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-csv test.csv --delimiter '|'",
result: None,
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let file_name: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let delimiter: Option<Spanned<String>> = call.get_flag(engine_state, stack, "delimiter")?;
let no_header: bool = call.has_flag(engine_state, stack, "no-header")?;
let mut df = NuDataFrame::try_from_pipeline(input, call.head)?;
let mut file = File::create(&file_name.item).map_err(|e| ShellError::GenericError {
error: "Error with file name".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let writer = CsvWriter::new(&mut file);
let writer = if no_header {
writer.include_header(false)
} else {
writer.include_header(true)
};
let mut writer = match delimiter {
None => writer,
Some(d) => {
if d.item.len() != 1 {
return Err(ShellError::GenericError {
error: "Incorrect delimiter".into(),
msg: "Delimiter has to be one char".into(),
span: Some(d.span),
help: None,
inner: vec![],
});
} else {
let delimiter = match d.item.chars().next() {
Some(d) => d as u8,
None => unreachable!(),
};
writer.with_separator(delimiter)
}
}
};
writer
.finish(df.as_mut())
.map_err(|e| ShellError::GenericError {
error: "Error writing to file".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let file_value = Value::string(format!("saved {:?}", &file_name.item), file_name.span);
Ok(PipelineData::Value(
Value::list(vec![file_value], call.head),
None,
))
}
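A standalone sketch of the writer chain used above, assuming a polars release where `CsvWriter` exposes `include_header` and `with_separator` (both appear in this diff, though the names have shifted across versions):

```rust
use polars::prelude::*;
use std::fs::File;

fn main() -> PolarsResult<()> {
    let mut df = df!("a" => [1, 3], "b" => [2, 4])?;
    let mut file = File::create("test.csv")?; // io::Error converts into PolarsError
    CsvWriter::new(&mut file)
        .include_header(true)
        .with_separator(b'|')
        .finish(&mut df)
}
```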


@ -1,189 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuSchema};
use nu_engine::command_prelude::*;
use polars::prelude::*;
#[derive(Clone)]
pub struct ToDataFrame;
impl Command for ToDataFrame {
fn name(&self) -> &str {
"dfr into-df"
}
fn usage(&self) -> &str {
"Converts a list, table or record into a dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.named(
"schema",
SyntaxShape::Record(vec![]),
r#"Polars Schema in format [{name: str}]. CSV, JSON, and JSONL files"#,
Some('s'),
)
.input_output_type(Type::Any, Type::Custom("dataframe".into()))
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Takes a dictionary and creates a dataframe",
example: "[[a b];[1 2] [3 4]] | dfr into-df",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Takes a list of tables and creates a dataframe",
example: "[[1 2 a] [3 4 b] [5 6 c]] | dfr into-df",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"0".to_string(),
vec![Value::test_int(1), Value::test_int(3), Value::test_int(5)],
),
Column::new(
"1".to_string(),
vec![Value::test_int(2), Value::test_int(4), Value::test_int(6)],
),
Column::new(
"2".to_string(),
vec![
Value::test_string("a"),
Value::test_string("b"),
Value::test_string("c"),
],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Takes a list and creates a dataframe",
example: "[a b c] | dfr into-df",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"0".to_string(),
vec![
Value::test_string("a"),
Value::test_string("b"),
Value::test_string("c"),
],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Takes a list of booleans and creates a dataframe",
example: "[true true false] | dfr into-df",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"0".to_string(),
vec![
Value::test_bool(true),
Value::test_bool(true),
Value::test_bool(false),
],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Convert to a dataframe and provide a schema",
example: "{a: 1, b: {a: [1 2 3]}, c: [a b c]}| dfr into-df -s {a: u8, b: {a: list<u64>}, c: list<str>}",
result: Some(
NuDataFrame::try_from_series(vec![
Series::new("a", &[1u8]),
{
let dtype = DataType::Struct(vec![Field::new("a", DataType::List(Box::new(DataType::UInt64)))]);
let vals = vec![AnyValue::StructOwned(
Box::new((vec![AnyValue::List(Series::new("a", &[1u64, 2, 3]))], vec![Field::new("a", DataType::String)]))); 1];
Series::from_any_values_and_dtype("b", &vals, &dtype, false)
.expect("Struct series should not fail")
},
{
let dtype = DataType::List(Box::new(DataType::String));
let vals = vec![AnyValue::List(Series::new("c", &["a", "b", "c"]))];
Series::from_any_values_and_dtype("c", &vals, &dtype, false)
.expect("List series should not fail")
}
], Span::test_data())
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Convert to a dataframe and provide a schema that adds a new column",
example: r#"[[a b]; [1 "foo"] [2 "bar"]] | dfr into-df -s {a: u8, b:str, c:i64} | dfr fill-null 3"#,
result: Some(NuDataFrame::try_from_series(vec![
Series::new("a", [1u8, 2]),
Series::new("b", ["foo", "bar"]),
Series::new("c", [3i64, 3]),
], Span::test_data())
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let maybe_schema = call
.get_flag(engine_state, stack, "schema")?
.map(|schema| NuSchema::try_from(&schema))
.transpose()?;
let df = NuDataFrame::try_from_iter(input.into_iter(), maybe_schema.clone())?;
Ok(PipelineData::Value(
NuDataFrame::into_value(df, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(ToDataFrame {})])
}
}
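// A standalone illustration of the Option/Result `transpose` idiom used for
// the --schema flag above: `Option<Result<T, E>>` becomes
// `Result<Option<T>, E>`, so an absent flag stays `None` while a malformed
// value still propagates as an error. Hypothetical, self-contained example:
fn parse_optional(raw: Option<&str>) -> Result<Option<i64>, std::num::ParseIntError> {
    raw.map(str::parse::<i64>).transpose()
}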

@@ -1,80 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::prelude::{JsonWriter, SerWriter};
use std::{fs::File, io::BufWriter, path::PathBuf};
#[derive(Clone)]
pub struct ToJsonLines;
impl Command for ToJsonLines {
fn name(&self) -> &str {
"dfr to-jsonl"
}
fn usage(&self) -> &str {
"Saves dataframe to a JSON lines file."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("file", SyntaxShape::Filepath, "file path to save dataframe")
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Saves dataframe to JSON lines file",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-jsonl test.jsonl",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let file_name: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let mut df = NuDataFrame::try_from_pipeline(input, call.head)?;
let file = File::create(&file_name.item).map_err(|e| ShellError::GenericError {
error: "Error with file name".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let buf_writer = BufWriter::new(file);
JsonWriter::new(buf_writer)
.finish(df.as_mut())
.map_err(|e| ShellError::GenericError {
error: "Error saving file".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let file_value = Value::string(format!("saved {:?}", &file_name.item), file_name.span);
Ok(PipelineData::Value(
Value::list(vec![file_value], call.head),
None,
))
}
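// For reference, the JSON-lines layout written by this command is one JSON
// object per row, so the example above would serialize along the lines of:
//   {"a":1,"b":2}
//   {"a":3,"b":4}
// (assuming polars' JsonWriter defaults to its JSON-lines format, which is
// what the `to-jsonl` name implies).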

@@ -1,136 +0,0 @@
use crate::dataframe::values::{NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ToNu;
impl Command for ToNu {
fn name(&self) -> &str {
"dfr into-nu"
}
fn usage(&self) -> &str {
"Converts a dataframe or an expression into into nushell value for access and exploration."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.named(
"rows",
SyntaxShape::Number,
"number of rows to be shown",
Some('n'),
)
.switch("tail", "shows tail rows", Some('t'))
.input_output_types(vec![
(Type::Custom("expression".into()), Type::Any),
(Type::Custom("dataframe".into()), Type::table()),
])
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
let rec_1 = Value::test_record(record! {
"index" => Value::test_int(0),
"a" => Value::test_int(1),
"b" => Value::test_int(2),
});
let rec_2 = Value::test_record(record! {
"index" => Value::test_int(1),
"a" => Value::test_int(3),
"b" => Value::test_int(4),
});
let rec_3 = Value::test_record(record! {
"index" => Value::test_int(2),
"a" => Value::test_int(3),
"b" => Value::test_int(4),
});
vec![
Example {
description: "Shows head rows from dataframe",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr into-nu",
result: Some(Value::list(vec![rec_1, rec_2], Span::test_data())),
},
Example {
description: "Shows tail rows from dataframe",
example: "[[a b]; [1 2] [5 6] [3 4]] | dfr into-df | dfr into-nu --tail --rows 1",
result: Some(Value::list(vec![rec_3], Span::test_data())),
},
Example {
description: "Convert a col expression into a nushell value",
example: "dfr col a | dfr into-nu",
result: Some(Value::test_record(record! {
"expr" => Value::test_string("column"),
"value" => Value::test_string("a"),
})),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
dataframe_command(engine_state, stack, call, value)
} else {
expression_command(call, value)
}
}
}
fn dataframe_command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: Value,
) -> Result<PipelineData, ShellError> {
let rows: Option<usize> = call.get_flag(engine_state, stack, "rows")?;
let tail: bool = call.has_flag(engine_state, stack, "tail")?;
let df = NuDataFrame::try_from_value(input)?;
let values = if tail {
df.tail(rows, call.head)?
} else {
// if rows is specified, return those rows, otherwise return everything
if rows.is_some() {
df.head(rows, call.head)?
} else {
df.head(Some(df.height()), call.head)?
}
};
let value = Value::list(values, call.head);
Ok(PipelineData::Value(value, None))
}
fn expression_command(call: &Call, input: Value) -> Result<PipelineData, ShellError> {
let expr = NuExpression::try_from_value(input)?;
let value = expr.to_value(call.head)?;
Ok(PipelineData::Value(value, None))
}
#[cfg(test)]
mod test {
use super::super::super::expressions::ExprCol;
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples_dataframe_input() {
test_dataframe(vec![Box::new(ToNu {})])
}
#[test]
fn test_examples_expression_input() {
test_dataframe(vec![Box::new(ToNu {}), Box::new(ExprCol {})])
}
}
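// A behavior-preserving simplification of the rows branch in
// `dataframe_command` above (a sketch only, not applied here): `None` can
// fall back to the full height in one expression,
//
//     let values = if tail {
//         df.tail(rows, call.head)?
//     } else {
//         // no --rows flag means return everything
//         df.head(Some(rows.unwrap_or(df.height())), call.head)?
//     };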

@@ -1,79 +0,0 @@
use crate::dataframe::values::NuDataFrame;
use nu_engine::command_prelude::*;
use polars::prelude::ParquetWriter;
use std::{fs::File, path::PathBuf};
#[derive(Clone)]
pub struct ToParquet;
impl Command for ToParquet {
fn name(&self) -> &str {
"dfr to-parquet"
}
fn usage(&self) -> &str {
"Saves dataframe to parquet file."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("file", SyntaxShape::Filepath, "file path to save dataframe")
.input_output_type(Type::Custom("dataframe".into()), Type::Any)
.category(Category::Custom("dataframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Saves dataframe to parquet file",
example: "[[a b]; [1 2] [3 4]] | dfr into-df | dfr to-parquet test.parquet",
result: None,
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
command(engine_state, stack, call, input)
}
}
fn command(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let file_name: Spanned<PathBuf> = call.req(engine_state, stack, 0)?;
let mut df = NuDataFrame::try_from_pipeline(input, call.head)?;
let file = File::create(&file_name.item).map_err(|e| ShellError::GenericError {
error: "Error with file name".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
ParquetWriter::new(file)
.finish(df.as_mut())
.map_err(|e| ShellError::GenericError {
error: "Error saving file".into(),
msg: e.to_string(),
span: Some(file_name.span),
help: None,
inner: vec![],
})?;
let file_value = Value::string(format!("saved {:?}", &file_name.item), file_name.span);
Ok(PipelineData::Value(
Value::list(vec![file_value], call.head),
None,
))
}

@@ -1,203 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuLazyFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct WithColumn;
impl Command for WithColumn {
fn name(&self) -> &str {
"dfr with-column"
}
fn usage(&self) -> &str {
"Adds a series to the dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.named("name", SyntaxShape::String, "new column name", Some('n'))
.rest(
"series or expressions",
SyntaxShape::Any,
"series to be added or expressions used to define the new columns",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("dataframe or lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Adds a series to the dataframe",
example: r#"[[a b]; [1 2] [3 4]]
| dfr into-df
| dfr with-column ([5 6] | dfr into-df) --name c"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
Column::new(
"c".to_string(),
vec![Value::test_int(5), Value::test_int(6)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Adds a series to the dataframe",
example: r#"[[a b]; [1 2] [3 4]]
| dfr into-lazy
| dfr with-column [
((dfr col a) * 2 | dfr as "c")
((dfr col a) * 3 | dfr as "d")
]
| dfr collect"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
Column::new(
"c".to_string(),
vec![Value::test_int(2), Value::test_int(6)],
),
Column::new(
"d".to_string(),
vec![Value::test_int(3), Value::test_int(9)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuLazyFrame::can_downcast(&value) {
let df = NuLazyFrame::try_from_value(value)?;
command_lazy(engine_state, stack, call, df)
} else if NuDataFrame::can_downcast(&value) {
let df = NuDataFrame::try_from_value(value)?;
command_eager(engine_state, stack, call, df)
} else {
Err(ShellError::CantConvert {
to_type: "lazy or eager dataframe".into(),
from_type: value.get_type().to_string(),
span: value.span(),
help: None,
})
}
}
}
fn command_eager(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
mut df: NuDataFrame,
) -> Result<PipelineData, ShellError> {
let new_column: Value = call.req(engine_state, stack, 0)?;
let column_span = new_column.span();
if NuExpression::can_downcast(&new_column) {
let vals: Vec<Value> = call.rest(engine_state, stack, 0)?;
let value = Value::list(vals, call.head);
let expressions = NuExpression::extract_exprs(value)?;
let lazy = NuLazyFrame::new(true, df.lazy().with_columns(&expressions));
let df = lazy.collect(call.head)?;
Ok(PipelineData::Value(df.into_value(call.head), None))
} else {
let mut other = NuDataFrame::try_from_value(new_column)?.as_series(column_span)?;
let name = match call.get_flag::<String>(engine_state, stack, "name")? {
Some(name) => name,
None => other.name().to_string(),
};
let series = other.rename(&name).clone();
df.as_mut()
.with_column(series)
.map_err(|e| ShellError::GenericError {
error: "Error adding column to dataframe".into(),
msg: e.to_string(),
span: Some(column_span),
help: None,
inner: vec![],
})
.map(|df| {
PipelineData::Value(
NuDataFrame::dataframe_into_value(df.clone(), call.head),
None,
)
})
}
}
fn command_lazy(
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
lazy: NuLazyFrame,
) -> Result<PipelineData, ShellError> {
let vals: Vec<Value> = call.rest(engine_state, stack, 0)?;
let value = Value::list(vals, call.head);
let expressions = NuExpression::extract_exprs(value)?;
let lazy: NuLazyFrame = lazy.into_polars().with_columns(&expressions).into();
Ok(PipelineData::Value(
NuLazyFrame::into_value(lazy, call.head)?,
None,
))
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::expressions::ExprAlias;
use crate::dataframe::expressions::ExprCol;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(WithColumn {}),
Box::new(ExprAlias {}),
Box::new(ExprCol {}),
])
}
}
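// Two reading notes on the code above, sketched from its structure:
// - `run` tries the lazy downcast first; an eager input whose arguments are
//   expressions is round-tripped through a temporary lazy frame in
//   `command_eager` and collected back, so expression arguments work for
//   both eager and lazy inputs.
// - the --name flag is only consulted in the plain-series branch; expression
//   columns take their names from `dfr as` aliases instead.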

@@ -1,86 +0,0 @@
use crate::dataframe::values::NuExpression;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ExprAlias;
impl Command for ExprAlias {
fn name(&self) -> &str {
"dfr as"
}
fn usage(&self) -> &str {
"Creates an alias expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"Alias name",
SyntaxShape::String,
"Alias name for the expression",
)
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Creates and alias expression",
example: "dfr col a | dfr as new_a | dfr into-nu",
result: {
let record = Value::test_record(record! {
"expr" => Value::test_record(record! {
"expr" => Value::test_string("column"),
"value" => Value::test_string("a"),
}),
"alias" => Value::test_string("new_a"),
});
Some(record)
},
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["aka", "abbr", "otherwise"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let alias: String = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_pipeline(input, call.head)?;
let expr: NuExpression = expr.into_polars().alias(alias.as_str()).into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::ToNu;
use crate::dataframe::expressions::ExprCol;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprAlias {}),
Box::new(ExprCol {}),
Box::new(ToNu {}),
])
}
}

@@ -1,78 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
use polars::prelude::arg_where;
#[derive(Clone)]
pub struct ExprArgWhere;
impl Command for ExprArgWhere {
fn name(&self) -> &str {
"dfr arg-where"
}
fn usage(&self) -> &str {
"Creates an expression that returns the arguments where expression is true."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required("column name", SyntaxShape::Any, "Expression to evaluate")
.input_output_type(Type::Any, Type::Custom("expression".into()))
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Return a dataframe where the value match the expression",
example: "let df = ([[a b]; [one 1] [two 2] [three 3]] | dfr into-df);
$df | dfr select (dfr arg-where ((dfr col b) >= 2) | dfr as b_arg)",
result: Some(
NuDataFrame::try_from_columns(
vec![Column::new(
"b_arg".to_string(),
vec![Value::test_int(1), Value::test_int(2)],
)],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["condition", "match", "if"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value: Value = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = arg_where(expr.into_polars()).into();
Ok(PipelineData::Value(expr.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::expressions::ExprAlias;
use crate::dataframe::lazy::LazySelect;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprArgWhere {}),
Box::new(ExprAlias {}),
Box::new(LazySelect {}),
])
}
}
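// Worked trace of the example above: b = [1, 2, 3] and `(dfr col b) >= 2`
// holds at zero-based rows 1 and 2, so arg-where yields the index column
// b_arg = [1, 2].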

@@ -1,68 +0,0 @@
use crate::dataframe::values::NuExpression;
use nu_engine::command_prelude::*;
use polars::prelude::col;
#[derive(Clone)]
pub struct ExprCol;
impl Command for ExprCol {
fn name(&self) -> &str {
"dfr col"
}
fn usage(&self) -> &str {
"Creates a named column expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"column name",
SyntaxShape::String,
"Name of column to be used",
)
.input_output_type(Type::Any, Type::Custom("expression".into()))
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Creates a named column expression and converts it to a nu object",
example: "dfr col a | dfr into-nu",
result: Some(Value::test_record(record! {
"expr" => Value::test_string("column"),
"value" => Value::test_string("a"),
})),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["create"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let name: String = call.req(engine_state, stack, 0)?;
let expr: NuExpression = col(name.as_str()).into();
Ok(PipelineData::Value(expr.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::ToNu;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(ExprCol {}), Box::new(ToNu {})])
}
}

@@ -1,108 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
use polars::prelude::concat_str;
#[derive(Clone)]
pub struct ExprConcatStr;
impl Command for ExprConcatStr {
fn name(&self) -> &str {
"dfr concat-str"
}
fn usage(&self) -> &str {
"Creates a concat string expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"separator",
SyntaxShape::String,
"Separator used during the concatenation",
)
.required(
"concat expressions",
SyntaxShape::List(Box::new(SyntaxShape::Any)),
"Expression(s) that define the string concatenation",
)
.input_output_type(Type::Any, Type::Custom("expression".into()))
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Creates a concat string expression",
example: r#"let df = ([[a b c]; [one two 1] [three four 2]] | dfr into-df);
$df | dfr with-column ((dfr concat-str "-" [(dfr col a) (dfr col b) ((dfr col c) * 2)]) | dfr as concat)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("three")],
),
Column::new(
"b".to_string(),
vec![Value::test_string("two"), Value::test_string("four")],
),
Column::new(
"c".to_string(),
vec![Value::test_int(1), Value::test_int(2)],
),
Column::new(
"concat".to_string(),
vec![
Value::test_string("one-two-2"),
Value::test_string("three-four-4"),
],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["join", "connect", "update"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let separator: String = call.req(engine_state, stack, 0)?;
let value: Value = call.req(engine_state, stack, 1)?;
let expressions = NuExpression::extract_exprs(value)?;
let expr: NuExpression = concat_str(expressions, &separator, false).into();
Ok(PipelineData::Value(expr.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::WithColumn;
use crate::dataframe::expressions::alias::ExprAlias;
use crate::dataframe::expressions::col::ExprCol;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprConcatStr {}),
Box::new(ExprAlias {}),
Box::new(ExprCol {}),
Box::new(WithColumn {}),
])
}
}

@@ -1,170 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use chrono::{DateTime, FixedOffset};
use nu_engine::command_prelude::*;
use polars::{
datatypes::{DataType, TimeUnit},
prelude::NamedFrom,
series::Series,
};
#[derive(Clone)]
pub struct ExprDatePart;
impl Command for ExprDatePart {
fn name(&self) -> &str {
"dfr datepart"
}
fn usage(&self) -> &str {
"Creates an expression for capturing the specified datepart in a column."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"Datepart name",
SyntaxShape::String,
"Part of the date to capture. Possible values are year, quarter, month, week, weekday, day, hour, minute, second, millisecond, microsecond, nanosecond",
)
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
let dt = DateTime::<FixedOffset>::parse_from_str(
"2021-12-30T01:02:03.123456789 +0000",
"%Y-%m-%dT%H:%M:%S.%9f %z",
)
.expect("date calculation should not fail in test");
vec![
Example {
description: "Creates an expression to capture the year date part",
example: r#"[["2021-12-30T01:02:03.123456789"]] | dfr into-df | dfr as-datetime "%Y-%m-%dT%H:%M:%S.%9f" | dfr with-column [(dfr col datetime | dfr datepart year | dfr as datetime_year )]"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("datetime".to_string(), vec![Value::test_date(dt)]),
Column::new("datetime_year".to_string(), vec![Value::test_int(2021)]),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Creates an expression to capture multiple date parts",
example: r#"[["2021-12-30T01:02:03.123456789"]] | dfr into-df | dfr as-datetime "%Y-%m-%dT%H:%M:%S.%9f" |
dfr with-column [ (dfr col datetime | dfr datepart year | dfr as datetime_year ),
(dfr col datetime | dfr datepart month | dfr as datetime_month ),
(dfr col datetime | dfr datepart day | dfr as datetime_day ),
(dfr col datetime | dfr datepart hour | dfr as datetime_hour ),
(dfr col datetime | dfr datepart minute | dfr as datetime_minute ),
(dfr col datetime | dfr datepart second | dfr as datetime_second ),
(dfr col datetime | dfr datepart nanosecond | dfr as datetime_ns ) ]"#,
result: Some(
NuDataFrame::try_from_series(
vec![
Series::new("datetime", &[dt.timestamp_nanos_opt()])
.cast(&DataType::Datetime(TimeUnit::Nanoseconds, None))
.expect("Error casting to datetime type"),
Series::new("datetime_year", &[2021_i64]), // i32 was coerced to i64
Series::new("datetime_month", &[12_i8]),
Series::new("datetime_day", &[30_i8]),
Series::new("datetime_hour", &[1_i8]),
Series::new("datetime_minute", &[2_i8]),
Series::new("datetime_second", &[3_i8]),
Series::new("datetime_ns", &[123456789_i64]), // i32 was coerced to i64
],
Span::test_data(),
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn search_terms(&self) -> Vec<&str> {
vec![
"year",
"month",
"week",
"weekday",
"quarter",
"day",
"hour",
"minute",
"second",
"millisecond",
"microsecond",
"nanosecond",
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let part: Spanned<String> = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_pipeline(input, call.head)?;
let expr_dt = expr.into_polars().dt();
let expr = match part.item.as_str() {
"year" => expr_dt.year(),
"quarter" => expr_dt.quarter(),
"month" => expr_dt.month(),
"week" => expr_dt.week(),
"day" => expr_dt.day(),
"hour" => expr_dt.hour(),
"minute" => expr_dt.minute(),
"second" => expr_dt.second(),
"millisecond" => expr_dt.millisecond(),
"microsecond" => expr_dt.microsecond(),
"nanosecond" => expr_dt.nanosecond(),
_ => {
return Err(ShellError::UnsupportedInput {
msg: format!("{} is not a valid datepart, expected one of year, month, day, hour, minute, second, millisecond, microsecond, nanosecond", part.item),
input: "value originates from here".to_string(),
msg_span: call.head,
input_span: part.span,
});
}
}.into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::ToNu;
use crate::dataframe::eager::WithColumn;
use crate::dataframe::expressions::ExprAlias;
use crate::dataframe::expressions::ExprCol;
use crate::dataframe::series::AsDateTime;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprDatePart {}),
Box::new(ExprCol {}),
Box::new(ToNu {}),
Box::new(AsDateTime {}),
Box::new(WithColumn {}),
Box::new(ExprAlias {}),
])
}
}

@@ -1,736 +0,0 @@
/// Definitions of multiple expression commands using macro rules.
/// All of these commands have an identical body and differ only in
/// their name, description, and the expression function they apply.
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuLazyFrame};
use nu_engine::command_prelude::*;
// The structs defined in this file form part of other commands,
// since they share similar names
macro_rules! expr_command {
($command: ident, $name: expr, $desc: expr, $examples: expr, $func: ident, $test: ident) => {
#[derive(Clone)]
pub struct $command;
impl Command for $command {
fn name(&self) -> &str {
$name
}
fn usage(&self) -> &str {
$desc
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
$examples
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let expr = NuExpression::try_from_pipeline(input, call.head)?;
let expr: NuExpression = expr.into_polars().$func().into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod $test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new($command {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
])
}
}
};
($command: ident, $name: expr, $desc: expr, $examples: expr, $func: ident, $test: ident, $ddof: expr) => {
#[derive(Clone)]
pub struct $command;
impl Command for $command {
fn name(&self) -> &str {
$name
}
fn usage(&self) -> &str {
$desc
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
$examples
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let expr = NuExpression::try_from_pipeline(input, call.head)?;
let expr: NuExpression = expr.into_polars().$func($ddof).into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod $test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new($command {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
])
}
}
};
}
// The structs defined in this file form part of other commands,
// since they share similar names
macro_rules! lazy_expr_command {
($command: ident, $name: expr, $desc: expr, $examples: expr, $func: ident, $test: ident) => {
#[derive(Clone)]
pub struct $command;
impl Command for $command {
fn name(&self) -> &str {
$name
}
fn usage(&self) -> &str {
$desc
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
$examples
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
let lazy = NuLazyFrame::try_from_value(value)?;
let lazy = NuLazyFrame::new(
lazy.from_eager,
lazy.into_polars()
.$func()
.map_err(|e| ShellError::GenericError {
error: "Dataframe Error".into(),
msg: e.to_string(),
help: None,
span: None,
inner: vec![],
})?,
);
Ok(PipelineData::Value(lazy.into_value(call.head)?, None))
} else {
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().$func().into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
}
#[cfg(test)]
mod $test {
use super::super::super::test_dataframe::{
build_test_engine_state, test_dataframe_example,
};
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples_dataframe() {
// the first example should be for the dataframe case
let example = &$command.examples()[0];
let mut engine_state = build_test_engine_state(vec![Box::new($command {})]);
test_dataframe_example(&mut engine_state, &example)
}
#[test]
fn test_examples_expressions() {
// the second example should be for the expression case
let example = &$command.examples()[1];
let mut engine_state = build_test_engine_state(vec![
Box::new($command {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
]);
test_dataframe_example(&mut engine_state, &example)
}
}
};
($command: ident, $name: expr, $desc: expr, $examples: expr, $func: ident, $test: ident, $ddof: expr) => {
#[derive(Clone)]
pub struct $command;
impl Command for $command {
fn name(&self) -> &str {
$name
}
fn usage(&self) -> &str {
$desc
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
$examples
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
let lazy = NuLazyFrame::try_from_value(value)?;
let lazy = NuLazyFrame::new(
lazy.from_eager,
lazy.into_polars()
.$func($ddof)
.map_err(|e| ShellError::GenericError {
error: "Dataframe Error".into(),
msg: e.to_string(),
help: None,
span: None,
inner: vec![],
})?,
);
Ok(PipelineData::Value(lazy.into_value(call.head)?, None))
} else {
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().$func($ddof).into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
}
#[cfg(test)]
mod $test {
use super::super::super::test_dataframe::{
build_test_engine_state, test_dataframe_example,
};
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples_dataframe() {
// the first example should be for the dataframe case
let example = &$command.examples()[0];
let mut engine_state = build_test_engine_state(vec![Box::new($command {})]);
test_dataframe_example(&mut engine_state, &example)
}
#[test]
fn test_examples_expressions() {
// the second example should be for the expression case
let example = &$command.examples()[1];
let mut engine_state = build_test_engine_state(vec![
Box::new($command {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
]);
test_dataframe_example(&mut engine_state, &example)
}
}
};
}
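// For orientation: an invocation such as `expr_command!(ExprCount, "dfr count",
// ...)` below expands to a unit struct plus a Command impl whose `run` body is
// exactly three steps (hand-expanded from the macro body above, with
// $func = count):
//
//     let expr = NuExpression::try_from_pipeline(input, call.head)?;
//     let expr: NuExpression = expr.into_polars().count().into();
//     Ok(PipelineData::Value(
//         NuExpression::into_value(expr, call.head),
//         None,
//     ))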
// ExprList command
// Expands to a command definition for a list expression
expr_command!(
ExprList,
"dfr implode",
"Aggregates a group to a Series.",
vec![Example {
description: "",
example: "",
result: None,
}],
implode,
test_implode
);
// ExprAggGroups command
// Expands to a command definition for a agg groups expression
expr_command!(
ExprAggGroups,
"dfr agg-groups",
"Creates an agg_groups expression.",
vec![Example {
description: "",
example: "",
result: None,
}],
agg_groups,
test_groups
);
// ExprCount command
// Expands to a command definition for a count expression
expr_command!(
ExprCount,
"dfr count",
"Creates a count expression.",
vec![Example {
description: "",
example: "",
result: None,
}],
count,
test_count
);
// ExprNot command
// Expands to a command definition for a not expression
expr_command!(
ExprNot,
"dfr expr-not",
"Creates a not expression.",
vec![Example {
description: "Creates a not expression",
example: "(dfr col a) > 2) | dfr expr-not",
result: None,
},],
not,
test_not
);
// ExprMax command
// Expands to a command definition for max aggregation
lazy_expr_command!(
ExprMax,
"dfr max",
"Creates a max expression or aggregates columns to their max value.",
vec![
Example {
description: "Max value from columns in a dataframe",
example: "[[a b]; [6 2] [1 4] [4 1]] | dfr into-df | dfr max",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(6)],),
Column::new("b".to_string(), vec![Value::test_int(4)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Max aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr max)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_int(4), Value::test_int(1)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
max,
test_max
);
// ExprMin command
// Expands to a command definition for min aggregation
lazy_expr_command!(
ExprMin,
"dfr min",
"Creates a min expression or aggregates columns to their min value.",
vec![
Example {
description: "Min value from columns in a dataframe",
example: "[[a b]; [6 2] [1 4] [4 1]] | dfr into-df | dfr min",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(1)],),
Column::new("b".to_string(), vec![Value::test_int(1)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Min aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr min)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(1)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
min,
test_min
);
// ExprSum command
// Expands to a command definition for sum aggregation
lazy_expr_command!(
ExprSum,
"dfr sum",
"Creates a sum expression for an aggregation or aggregates columns to their sum value.",
vec![
Example {
description: "Sums all columns in a dataframe",
example: "[[a b]; [6 2] [1 4] [4 1]] | dfr into-df | dfr sum",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_int(11)],),
Column::new("b".to_string(), vec![Value::test_int(7)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Sum aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr sum)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_int(6), Value::test_int(1)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
sum,
test_sum
);
// ExprMean command
// Expands to a command definition for mean aggregation
lazy_expr_command!(
ExprMean,
"dfr mean",
"Creates a mean expression for an aggregation or aggregates columns to their mean value.",
vec![
Example {
description: "Mean value from columns in a dataframe",
example: "[[a b]; [6 2] [4 2] [2 2]] | dfr into-df | dfr mean",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_float(4.0)],),
Column::new("b".to_string(), vec![Value::test_float(2.0)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Mean aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr mean)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_float(3.0), Value::test_float(1.0)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
mean,
test_mean
);
// ExprMedian command
// Expands to a command definition for median aggregation
expr_command!(
ExprMedian,
"dfr median",
"Creates a median expression for an aggregation.",
vec![Example {
description: "Median aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr median)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_float(3.0), Value::test_float(1.0)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},],
median,
test_median
);
// ExprStd command
// Expands to a command definition for std aggregation
lazy_expr_command!(
ExprStd,
"dfr std",
"Creates a std expression for an aggregation of std value from columns in a dataframe.",
vec![
Example {
description: "Std value from columns in a dataframe",
example: "[[a b]; [6 2] [4 2] [2 2]] | dfr into-df | dfr std",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_float(2.0)],),
Column::new("b".to_string(), vec![Value::test_float(0.0)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Std aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 2] [two 1] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr std)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_float(0.0), Value::test_float(0.0)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
std,
test_std,
1
);
// ExprVar command
// Expands to a command definition for var aggregation
lazy_expr_command!(
ExprVar,
"dfr var",
"Create a var expression for an aggregation.",
vec![
Example {
description: "Var value from columns in a dataframe",
example: "[[a b]; [6 2] [4 2] [2 2]] | dfr into-df | dfr var",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new("a".to_string(), vec![Value::test_float(4.0)],),
Column::new("b".to_string(), vec![Value::test_float(0.0)],),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Var aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 2] [two 1] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr var)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_float(0.0), Value::test_float(0.0)],
),
],
None
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
],
var,
test_var,
1
);
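// Numeric check of the `dfr std` / `dfr var` expectations above, which use
// sample statistics (ddof = 1, the trailing macro argument). A standalone
// sketch of the computation:
fn sample_var(xs: &[f64]) -> f64 {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    // divide by (n - ddof) with ddof = 1
    xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0)
}
// For column a = [6.0, 4.0, 2.0]: mean 4.0, squared deviations 4 + 0 + 4 = 8,
// so var = 8 / (3 - 1) = 4.0 and std = 2.0, matching the expected columns.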

@@ -1,116 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
use polars::prelude::{lit, DataType};
#[derive(Clone)]
pub struct ExprIsIn;
impl Command for ExprIsIn {
fn name(&self) -> &str {
"dfr is-in"
}
fn usage(&self) -> &str {
"Creates an is-in expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"list",
SyntaxShape::List(Box::new(SyntaxShape::Any)),
"List to check if values are in",
)
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Creates a is-in expression",
example: r#"let df = ([[a b]; [one 1] [two 2] [three 3]] | dfr into-df);
$df | dfr with-column (dfr col a | dfr is-in [one two] | dfr as a_in)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![
Value::test_string("one"),
Value::test_string("two"),
Value::test_string("three"),
],
),
Column::new(
"b".to_string(),
vec![Value::test_int(1), Value::test_int(2), Value::test_int(3)],
),
Column::new(
"a_in".to_string(),
vec![
Value::test_bool(true),
Value::test_bool(true),
Value::test_bool(false),
],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["check", "contained", "is-contain", "match"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let list: Vec<Value> = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_pipeline(input, call.head)?;
let values =
NuDataFrame::try_from_columns(vec![Column::new("list".to_string(), list)], None)?;
let list = values.as_series(call.head)?;
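// A nu list with mixed element types converts to a polars Object series,
// which `is_in` cannot compare against, hence the guard below.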
if matches!(list.dtype(), DataType::Object(..)) {
return Err(ShellError::IncompatibleParametersSingle {
msg: "Cannot use a mixed list as argument".into(),
span: call.head,
});
}
let expr: NuExpression = expr.into_polars().is_in(lit(list)).into();
Ok(PipelineData::Value(expr.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::WithColumn;
use crate::dataframe::expressions::alias::ExprAlias;
use crate::dataframe::expressions::col::ExprCol;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprIsIn {}),
Box::new(ExprAlias {}),
Box::new(ExprCol {}),
Box::new(WithColumn {}),
])
}
}

@@ -1,69 +0,0 @@
use crate::dataframe::values::NuExpression;
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ExprLit;
impl Command for ExprLit {
fn name(&self) -> &str {
"dfr lit"
}
fn usage(&self) -> &str {
"Creates a literal expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"literal",
SyntaxShape::Any,
"literal to construct the expression",
)
.input_output_type(Type::Any, Type::Custom("expression".into()))
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Created a literal expression and converts it to a nu object",
example: "dfr lit 2 | dfr into-nu",
result: Some(Value::test_record(record! {
"expr" => Value::test_string("literal"),
"value" => Value::test_string("2"),
})),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["string", "literal", "expression"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
_input: PipelineData,
) -> Result<PipelineData, ShellError> {
let literal: Value = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_value(literal)?;
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::eager::ToNu;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(ExprLit {}), Box::new(ToNu {})])
}
}

@@ -1,62 +0,0 @@
mod alias;
mod arg_where;
mod col;
mod concat_str;
mod datepart;
mod expressions_macro;
mod is_in;
mod lit;
mod otherwise;
mod quantile;
mod when;
use nu_protocol::engine::StateWorkingSet;
pub(crate) use crate::dataframe::expressions::alias::ExprAlias;
use crate::dataframe::expressions::arg_where::ExprArgWhere;
pub(super) use crate::dataframe::expressions::col::ExprCol;
pub(super) use crate::dataframe::expressions::concat_str::ExprConcatStr;
pub(crate) use crate::dataframe::expressions::datepart::ExprDatePart;
pub(crate) use crate::dataframe::expressions::expressions_macro::*;
pub(super) use crate::dataframe::expressions::is_in::ExprIsIn;
pub(super) use crate::dataframe::expressions::lit::ExprLit;
pub(super) use crate::dataframe::expressions::otherwise::ExprOtherwise;
pub(super) use crate::dataframe::expressions::quantile::ExprQuantile;
pub(super) use crate::dataframe::expressions::when::ExprWhen;
pub fn add_expressions(working_set: &mut StateWorkingSet) {
macro_rules! bind_command {
( $command:expr ) => {
working_set.add_decl(Box::new($command));
};
( $( $command:expr ),* ) => {
$( working_set.add_decl(Box::new($command)); )*
};
}
// Dataframe commands
bind_command!(
ExprAlias,
ExprArgWhere,
ExprCol,
ExprConcatStr,
ExprCount,
ExprLit,
ExprWhen,
ExprOtherwise,
ExprQuantile,
ExprList,
ExprAggGroups,
ExprIsIn,
ExprNot,
ExprMax,
ExprMin,
ExprSum,
ExprMean,
ExprMedian,
ExprStd,
ExprVar,
ExprDatePart
);
}

@@ -1,126 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuWhen};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct ExprOtherwise;
impl Command for ExprOtherwise {
fn name(&self) -> &str {
"dfr otherwise"
}
fn usage(&self) -> &str {
"Completes a when expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"otherwise expression",
SyntaxShape::Any,
"expression to apply when no when predicate matches",
)
.input_output_type(Type::Any, Type::Custom("expression".into()))
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Create a when conditions",
example: "dfr when ((dfr col a) > 2) 4 | dfr otherwise 5",
result: None,
},
Example {
description: "Create a when conditions",
example:
"dfr when ((dfr col a) > 2) 4 | dfr when ((dfr col a) < 0) 6 | dfr otherwise 0",
result: None,
},
Example {
description: "Create a new column for the dataframe",
example: r#"[[a b]; [6 2] [1 4] [4 1]]
| dfr into-lazy
| dfr with-column (
dfr when ((dfr col a) > 2) 4 | dfr otherwise 5 | dfr as c
)
| dfr with-column (
dfr when ((dfr col a) > 5) 10 | dfr when ((dfr col a) < 2) 6 | dfr otherwise 0 | dfr as d
)
| dfr collect"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(6), Value::test_int(1), Value::test_int(4)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4), Value::test_int(1)],
),
Column::new(
"c".to_string(),
vec![Value::test_int(4), Value::test_int(5), Value::test_int(4)],
),
Column::new(
"d".to_string(),
vec![Value::test_int(10), Value::test_int(6), Value::test_int(0)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn search_terms(&self) -> Vec<&str> {
vec!["condition", "else"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let otherwise_predicate: Value = call.req(engine_state, stack, 0)?;
let otherwise_predicate = NuExpression::try_from_value(otherwise_predicate)?;
let value = input.into_value(call.head);
let complete: NuExpression = match NuWhen::try_from_value(value)? {
NuWhen::Then(then) => then.otherwise(otherwise_predicate.into_polars()).into(),
NuWhen::ChainedThen(chained_when) => chained_when
.otherwise(otherwise_predicate.into_polars())
.into(),
};
Ok(PipelineData::Value(complete.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use crate::dataframe::eager::{ToNu, WithColumn};
use crate::dataframe::expressions::when::ExprWhen;
use crate::dataframe::expressions::{ExprAlias, ExprCol};
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(WithColumn {}),
Box::new(ExprCol {}),
Box::new(ExprAlias {}),
Box::new(ExprWhen {}),
Box::new(ExprOtherwise {}),
Box::new(ToNu {}),
])
}
}
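// Worked trace for the d column in the example above (first matching
// predicate wins): a = 6 satisfies a > 5 -> 10; a = 1 satisfies a < 2 -> 6;
// a = 4 matches neither and falls through to the otherwise value -> 0.
// As plain control flow the chain is equivalent to:
//
//     if a > 5 { 10 } else if a < 2 { 6 } else { 0 }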

@@ -1,101 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression};
use nu_engine::command_prelude::*;
use polars::prelude::{lit, QuantileInterpolOptions};
#[derive(Clone)]
pub struct ExprQuantile;
impl Command for ExprQuantile {
fn name(&self) -> &str {
"dfr quantile"
}
fn usage(&self) -> &str {
"Aggregates the columns to the selected quantile."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"quantile",
SyntaxShape::Number,
"quantile value for quantile operation",
)
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Quantile aggregation for a group-by",
example: r#"[[a b]; [one 2] [one 4] [two 1]]
| dfr into-df
| dfr group-by a
| dfr agg (dfr col b | dfr quantile 0.5)"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_string("one"), Value::test_string("two")],
),
Column::new(
"b".to_string(),
vec![Value::test_float(4.0), Value::test_float(1.0)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn search_terms(&self) -> Vec<&str> {
vec!["statistics", "percentile", "distribution"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
let quantile: f64 = call.req(engine_state, stack, 0)?;
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr
.into_polars()
.quantile(lit(quantile), QuantileInterpolOptions::default())
.into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(ExprQuantile {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
])
}
}
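// Note on the expected output above: `QuantileInterpolOptions::default()` is
// polars' nearest-neighbor interpolation, which is consistent with the 0.5
// quantile of b = [2, 4] for group "one" coming out as 4.0 rather than the
// linear midpoint 3.0.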

@@ -1,147 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuWhen};
use nu_engine::command_prelude::*;
use polars::prelude::when;
#[derive(Clone)]
pub struct ExprWhen;
impl Command for ExprWhen {
fn name(&self) -> &str {
"dfr when"
}
fn usage(&self) -> &str {
"Creates and modifies a when expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"when expression",
SyntaxShape::Any,
"when expression used for matching",
)
.required(
"then expression",
SyntaxShape::Any,
"expression that will be applied when predicate is true",
)
.input_output_type(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
)
.category(Category::Custom("expression".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Create a when conditions",
example: "dfr when ((dfr col a) > 2) 4",
result: None,
},
Example {
description: "Create a when conditions",
example: "dfr when ((dfr col a) > 2) 4 | dfr when ((dfr col a) < 0) 6",
result: None,
},
Example {
description: "Create a new column for the dataframe",
example: r#"[[a b]; [6 2] [1 4] [4 1]]
| dfr into-lazy
| dfr with-column (
dfr when ((dfr col a) > 2) 4 | dfr otherwise 5 | dfr as c
)
| dfr with-column (
dfr when ((dfr col a) > 5) 10 | dfr when ((dfr col a) < 2) 6 | dfr otherwise 0 | dfr as d
)
| dfr collect"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(6), Value::test_int(1), Value::test_int(4)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4), Value::test_int(1)],
),
Column::new(
"c".to_string(),
vec![Value::test_int(4), Value::test_int(5), Value::test_int(4)],
),
Column::new(
"d".to_string(),
vec![Value::test_int(10), Value::test_int(6), Value::test_int(0)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn search_terms(&self) -> Vec<&str> {
vec!["condition", "match", "if", "else"]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let when_predicate: Value = call.req(engine_state, stack, 0)?;
let when_predicate = NuExpression::try_from_value(when_predicate)?;
let then_predicate: Value = call.req(engine_state, stack, 1)?;
let then_predicate = NuExpression::try_from_value(then_predicate)?;
let value = input.into_value(call.head);
let when_then: NuWhen = match value {
Value::Nothing { .. } => when(when_predicate.into_polars())
.then(then_predicate.into_polars())
.into(),
v => match NuWhen::try_from_value(v)? {
NuWhen::Then(when_then) => when_then
.when(when_predicate.into_polars())
.then(then_predicate.into_polars())
.into(),
NuWhen::ChainedThen(when_then_then) => when_then_then
.when(when_predicate.into_polars())
.then(then_predicate.into_polars())
.into(),
},
};
Ok(PipelineData::Value(when_then.into_value(call.head), None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use crate::dataframe::eager::{ToNu, WithColumn};
use crate::dataframe::expressions::otherwise::ExprOtherwise;
use crate::dataframe::expressions::{ExprAlias, ExprCol};
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(WithColumn {}),
Box::new(ExprCol {}),
Box::new(ExprAlias {}),
Box::new(ExprWhen {}),
Box::new(ExprOtherwise {}),
Box::new(ToNu {}),
])
}
}
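For reference, the chained when/then/otherwise that the command builds through the `NuWhen` enum corresponds to this plain Polars sketch (illustrative frame; assumes the lazy feature):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let df = df!["a" => [6i64, 1, 4]]?;
    // Same chain as:
    //   dfr when ((dfr col a) > 5) 10 | dfr when ((dfr col a) < 2) 6 | dfr otherwise 0 | dfr as d
    let d = when(col("a").gt(lit(5)))
        .then(lit(10))
        .when(col("a").lt(lit(2)))
        .then(lit(6))
        .otherwise(lit(0))
        .alias("d");
    let out = df.lazy().with_column(d).collect()?;
    println!("{out}");
    Ok(())
}
```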


@@ -1,216 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuLazyFrame, NuLazyGroupBy};
use nu_engine::command_prelude::*;
use polars::{datatypes::DataType, prelude::Expr};
#[derive(Clone)]
pub struct LazyAggregate;
impl Command for LazyAggregate {
fn name(&self) -> &str {
"dfr agg"
}
fn usage(&self) -> &str {
"Performs a series of aggregations from a group-by."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.rest(
"Group-by expressions",
SyntaxShape::Any,
"Expression(s) that define the aggregations to be applied",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Group by and perform an aggregation",
example: r#"[[a b]; [1 2] [1 4] [2 6] [2 4]]
| dfr into-df
| dfr group-by a
| dfr agg [
(dfr col b | dfr min | dfr as "b_min")
(dfr col b | dfr max | dfr as "b_max")
(dfr col b | dfr sum | dfr as "b_sum")
]"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(2)],
),
Column::new(
"b_min".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
Column::new(
"b_max".to_string(),
vec![Value::test_int(4), Value::test_int(6)],
),
Column::new(
"b_sum".to_string(),
vec![Value::test_int(6), Value::test_int(10)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
Example {
description: "Group by and perform an aggregation",
example: r#"[[a b]; [1 2] [1 4] [2 6] [2 4]]
| dfr into-lazy
| dfr group-by a
| dfr agg [
(dfr col b | dfr min | dfr as "b_min")
(dfr col b | dfr max | dfr as "b_max")
(dfr col b | dfr sum | dfr as "b_sum")
]
| dfr collect"#,
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(2)],
),
Column::new(
"b_min".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
Column::new(
"b_max".to_string(),
vec![Value::test_int(4), Value::test_int(6)],
),
Column::new(
"b_sum".to_string(),
vec![Value::test_int(6), Value::test_int(10)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let vals: Vec<Value> = call.rest(engine_state, stack, 0)?;
let value = Value::list(vals, call.head);
let expressions = NuExpression::extract_exprs(value)?;
let group_by = NuLazyGroupBy::try_from_pipeline(input, call.head)?;
if let Some(schema) = &group_by.schema {
for expr in expressions.iter() {
if let Some(name) = get_col_name(expr) {
let dtype = schema.get(name.as_str());
if matches!(dtype, Some(DataType::Object(..))) {
return Err(ShellError::GenericError {
error: "Object type column not supported for aggregation".into(),
msg: format!("Column '{name}' is type Object"),
span: Some(call.head),
help: Some("Aggregations cannot be performed on Object type columns. Use dtype command to check column types".into()),
inner: vec![],
});
}
}
}
}
let lazy = NuLazyFrame {
from_eager: group_by.from_eager,
lazy: Some(group_by.into_polars().agg(&expressions)),
schema: None,
};
let res = lazy.into_value(call.head)?;
Ok(PipelineData::Value(res, None))
}
}
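// Helper below: walks a Polars expression tree down to the column a pending
// aggregation refers to, so the schema check in `run` can reject Object-typed
// columns before the aggregation is built.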
fn get_col_name(expr: &Expr) -> Option<String> {
match expr {
Expr::Column(column) => Some(column.to_string()),
Expr::Agg(agg) => match agg {
polars::prelude::AggExpr::Min { input: e, .. }
| polars::prelude::AggExpr::Max { input: e, .. }
| polars::prelude::AggExpr::Median(e)
| polars::prelude::AggExpr::NUnique(e)
| polars::prelude::AggExpr::First(e)
| polars::prelude::AggExpr::Last(e)
| polars::prelude::AggExpr::Mean(e)
| polars::prelude::AggExpr::Implode(e)
| polars::prelude::AggExpr::Count(e, _)
| polars::prelude::AggExpr::Sum(e)
| polars::prelude::AggExpr::AggGroups(e)
| polars::prelude::AggExpr::Std(e, _)
| polars::prelude::AggExpr::Var(e, _) => get_col_name(e.as_ref()),
polars::prelude::AggExpr::Quantile { expr, .. } => get_col_name(expr.as_ref()),
},
Expr::Filter { input: expr, .. }
| Expr::Slice { input: expr, .. }
| Expr::Cast { expr, .. }
| Expr::Sort { expr, .. }
| Expr::Gather { expr, .. }
| Expr::SortBy { expr, .. }
| Expr::Exclude(expr, _)
| Expr::Alias(expr, _)
| Expr::KeepName(expr)
| Expr::Explode(expr) => get_col_name(expr.as_ref()),
Expr::Ternary { .. }
| Expr::AnonymousFunction { .. }
| Expr::Function { .. }
| Expr::Columns(_)
| Expr::DtypeColumn(_)
| Expr::Literal(_)
| Expr::BinaryExpr { .. }
| Expr::Window { .. }
| Expr::Wildcard
| Expr::RenameAlias { .. }
| Expr::Len
| Expr::Nth(_)
| Expr::SubPlan(_, _)
| Expr::Selector(_) => None,
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
use crate::dataframe::expressions::{ExprAlias, ExprMax, ExprMin, ExprSum};
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples() {
test_dataframe(vec![
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
Box::new(ExprAlias {}),
Box::new(ExprMin {}),
Box::new(ExprMax {}),
Box::new(ExprSum {}),
])
}
}
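For reference, the equivalent group-by and aggregation in plain Polars (a minimal sketch; assumes a polars version exposing the lazy `group_by`/`agg` API):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let df = df!["a" => [1i64, 1, 2, 2], "b" => [2i64, 4, 6, 4]]?;
    let out = df
        .lazy()
        .group_by([col("a")])
        .agg([
            col("b").min().alias("b_min"),
            col("b").max().alias("b_max"),
            col("b").sum().alias("b_sum"),
        ])
        .collect()?;
    // Note: group order is not guaranteed without an explicit sort.
    println!("{out}");
    Ok(())
}
```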


@@ -1,73 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuLazyFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct LazyCollect;
impl Command for LazyCollect {
fn name(&self) -> &str {
"dfr collect"
}
fn usage(&self) -> &str {
"Collect lazy dataframe into eager dataframe."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "drop duplicates",
example: "[[a b]; [1 2] [3 4]] | dfr into-lazy | dfr collect",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(1), Value::test_int(3)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(4)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let lazy = NuLazyFrame::try_from_pipeline(input, call.head)?;
let eager = lazy.collect(call.head)?;
let value = Value::custom(Box::new(eager), call.head);
Ok(PipelineData::Value(value, None))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(LazyCollect {})])
}
}
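For reference, the lazy-to-eager step the command wraps (a minimal sketch; the frame is illustrative):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    // A lazy plan runs only when collected; `dfr collect` exposes
    // exactly this materialization step to the shell.
    let lf = df!["a" => [1i64, 3], "b" => [2i64, 4]]?.lazy();
    let eager: DataFrame = lf.collect()?;
    println!("{eager}");
    Ok(())
}
```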


@@ -1,153 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuExpression, NuLazyFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct LazyExplode;
impl Command for LazyExplode {
fn name(&self) -> &str {
"dfr explode"
}
fn usage(&self) -> &str {
"Explodes a dataframe or creates a explode expression."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.rest(
"columns",
SyntaxShape::String,
"columns to explode, only applicable for dataframes",
)
.input_output_types(vec![
(
Type::Custom("expression".into()),
Type::Custom("expression".into()),
),
(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
),
])
.category(Category::Custom("lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![
Example {
description: "Explode the specified dataframe",
example: "[[id name hobbies]; [1 Mercy [Cycling Knitting]] [2 Bob [Skiing Football]]] | dfr into-df | dfr explode hobbies | dfr collect",
result: Some(
NuDataFrame::try_from_columns(vec![
Column::new(
"id".to_string(),
vec![
Value::test_int(1),
Value::test_int(1),
Value::test_int(2),
Value::test_int(2),
]),
Column::new(
"name".to_string(),
vec![
Value::test_string("Mercy"),
Value::test_string("Mercy"),
Value::test_string("Bob"),
Value::test_string("Bob"),
]),
Column::new(
"hobbies".to_string(),
vec![
Value::test_string("Cycling"),
Value::test_string("Knitting"),
Value::test_string("Skiing"),
Value::test_string("Football"),
]),
], None).expect("simple df for test should not fail")
.into_value(Span::test_data()),
)
},
Example {
description: "Select a column and explode the values",
example: "[[id name hobbies]; [1 Mercy [Cycling Knitting]] [2 Bob [Skiing Football]]] | dfr into-df | dfr select (dfr col hobbies | dfr explode)",
result: Some(
NuDataFrame::try_from_columns(vec![
Column::new(
"hobbies".to_string(),
vec![
Value::test_string("Cycling"),
Value::test_string("Knitting"),
Value::test_string("Skiing"),
Value::test_string("Football"),
]),
], None).expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
},
]
}
fn run(
&self,
_engine_state: &EngineState,
_stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
explode(call, input)
}
}
pub(crate) fn explode(call: &Call, input: PipelineData) -> Result<PipelineData, ShellError> {
let value = input.into_value(call.head);
if NuDataFrame::can_downcast(&value) {
let df = NuLazyFrame::try_from_value(value)?;
let columns: Vec<String> = call
.positional_iter()
.filter_map(|e| e.as_string())
.collect();
let exploded = df
.into_polars()
.explode(columns.iter().map(AsRef::as_ref).collect::<Vec<&str>>());
Ok(PipelineData::Value(
NuLazyFrame::from(exploded).into_value(call.head)?,
None,
))
} else {
let expr = NuExpression::try_from_value(value)?;
let expr: NuExpression = expr.into_polars().explode().into();
Ok(PipelineData::Value(
NuExpression::into_value(expr, call.head),
None,
))
}
}
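For reference, the two branches of `explode` in plain Polars (a sketch; the list-column construction is illustrative):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    // A list column, as produced by `[Cycling Knitting]` in the nu example.
    let hobbies = Series::new("hobbies", &[
        Series::new("", &["Cycling", "Knitting"]),
        Series::new("", &["Skiing", "Football"]),
    ]);
    let df = DataFrame::new(vec![hobbies])?;
    // Dataframe branch: explode the named columns.
    let by_frame = df.clone().lazy().explode(["hobbies"]).collect()?;
    // Expression branch: explode inside a select.
    let by_expr = df.lazy().select([col("hobbies").explode()]).collect()?;
    println!("{by_frame}\n{by_expr}");
    Ok(())
}
```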
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::{build_test_engine_state, test_dataframe_example};
use super::*;
use crate::dataframe::lazy::aggregate::LazyAggregate;
use crate::dataframe::lazy::groupby::ToLazyGroupBy;
#[test]
fn test_examples_dataframe() {
let mut engine_state = build_test_engine_state(vec![Box::new(LazyExplode {})]);
test_dataframe_example(&mut engine_state, &LazyExplode.examples()[0]);
}
#[ignore]
#[test]
fn test_examples_expression() {
let mut engine_state = build_test_engine_state(vec![
Box::new(LazyExplode {}),
Box::new(LazyAggregate {}),
Box::new(ToLazyGroupBy {}),
]);
test_dataframe_example(&mut engine_state, &LazyExplode.examples()[1]);
}
}


@@ -1,92 +0,0 @@
use crate::dataframe::values::{Column, NuDataFrame, NuLazyFrame};
use nu_engine::command_prelude::*;
#[derive(Clone)]
pub struct LazyFetch;
impl Command for LazyFetch {
fn name(&self) -> &str {
"dfr fetch"
}
fn usage(&self) -> &str {
"Collects the lazyframe to the selected rows."
}
fn signature(&self) -> Signature {
Signature::build(self.name())
.required(
"rows",
SyntaxShape::Int,
"number of rows to be fetched from lazyframe",
)
.input_output_type(
Type::Custom("dataframe".into()),
Type::Custom("dataframe".into()),
)
.category(Category::Custom("lazyframe".into()))
}
fn examples(&self) -> Vec<Example> {
vec![Example {
description: "Fetch a rows from the dataframe",
example: "[[a b]; [6 2] [4 2] [2 2]] | dfr into-df | dfr fetch 2",
result: Some(
NuDataFrame::try_from_columns(
vec![
Column::new(
"a".to_string(),
vec![Value::test_int(6), Value::test_int(4)],
),
Column::new(
"b".to_string(),
vec![Value::test_int(2), Value::test_int(2)],
),
],
None,
)
.expect("simple df for test should not fail")
.into_value(Span::test_data()),
),
}]
}
fn run(
&self,
engine_state: &EngineState,
stack: &mut Stack,
call: &Call,
input: PipelineData,
) -> Result<PipelineData, ShellError> {
let rows: i64 = call.req(engine_state, stack, 0)?;
let lazy = NuLazyFrame::try_from_pipeline(input, call.head)?;
let eager: NuDataFrame = lazy
.into_polars()
.fetch(rows as usize)
.map_err(|e| ShellError::GenericError {
error: "Error fetching rows".into(),
msg: e.to_string(),
span: Some(call.head),
help: None,
inner: vec![],
})?
.into();
Ok(PipelineData::Value(
NuDataFrame::into_value(eager, call.head),
None,
))
}
}
#[cfg(test)]
mod test {
use super::super::super::test_dataframe::test_dataframe;
use super::*;
#[test]
fn test_examples() {
test_dataframe(vec![Box::new(LazyFetch {})])
}
}
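For reference, `fetch` in plain Polars (a sketch; the frame is illustrative):

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let lf = df!["a" => [6i64, 4, 2], "b" => [2i64, 2, 2]]?.lazy();
    // `fetch(n)` executes the plan on at most n rows and returns an
    // eager DataFrame, which is what the command hands back to nushell.
    let out = lf.fetch(2)?;
    println!("{out}");
    Ok(())
}
```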

Some files were not shown because too many files have changed in this diff.