# Description
- Plugin signatures are now saved to `plugin.msgpackz`, which is
brotli-compressed MessagePack (see the decoding sketch after this list).
- The file is updated incrementally, rather than writing all plugin
commands in the engine every time.
- The file always contains the result of the `Signature` call to the
plugin, even if commands were removed.
- Invalid data for a particular plugin just causes an error to be
reported; the rest of the plugins can still be parsed.
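
For anyone who wants to inspect the new file by hand, here's a minimal sketch of peeling off both layers, assuming the `brotli` and `rmpv` crates (the real registry types are defined by Nushell's own serde structures):
```rust
use std::fs::File;
use std::io::BufReader;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("plugin.msgpackz")?;
    // Brotli-compressed on the outside...
    let mut decompressed = brotli::Decompressor::new(BufReader::new(file), 4096);
    // ...MessagePack on the inside; dump it as a generic value.
    let value = rmpv::decode::read_value(&mut decompressed)?;
    println!("{value:#?}");
    Ok(())
}
```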
# User-Facing Changes
- The plugin file has a different filename, and it's not a nushell
script.
- The default `plugin.nu` file will be automatically migrated the first
time, but not other plugin config files.
- We don't currently provide any utilities that could help edit this
file, beyond `plugin add` and `plugin rm`
- `from msgpackz`, `to msgpackz` could also help
- New commands: `plugin add`, `plugin rm`
# Tests + Formatting
Tests added for the format and for the invalid handling.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Check for documentation changes
- [ ] Definitely needs release notes
# Description
`Value` describes the types of first-class values that users and scripts
can create, manipulate, pass around, and store. However, `Block`s are
not first-class values in the language, so this PR removes it from
`Value`. This removes some unnecessary code, and this change should be
invisible to the user except for the change to `scope modules` described
below.
# User-Facing Changes
Breaking change: the output of `scope modules` was changed so that
`env_block` is now `has_env_block` which is a boolean value instead of a
`Block`.
# After Submitting
Update the language guide possibly.
# Description
For a long time, I was searching for the `str extract` command to
extract regexes from strings. I often painfully used `str replace -r
'(.*)(pattern_to_find)(.*)' '$2'` for such purposes.
Only this morning did I realize that `parse`, which I had only ever used
for parsing data in tables, is what I needed all those times.
# Description
Fix: #12489
I believe the issue is that the default value of `NU_LIB_DIRS` and
`NU_PLUGIN_DIRS` is a string, but it should be a list.
So if users don't set these values in `env.nu`, we run into a problem.
Closes #12561
# Description
Add version details to `version`'s output. The intended use is for
third-party tools to be able to quickly check version numbers without
having to parse `(version).version` or `$env.NU_VERSION`.
# User-Facing Changes
This adds 4 new values to the record from `version`:
```
...
│ major │ 0 │
│ minor │ 92 │
│ patch │ 3 │
│ pre │ a-value │
...
```
`pre` is optional and won't be present most of the time I think.
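
For context, a minimal sketch of where these numbers can come from in any Cargo build (the real command builds a Nushell record rather than printing, and may source them differently; `pre` maps to Cargo's pre-release component, which is empty for normal releases):
```rust
fn main() {
    // Cargo sets these environment variables at compile time for every crate.
    println!("version = {}", env!("CARGO_PKG_VERSION"));
    println!("major   = {}", env!("CARGO_PKG_VERSION_MAJOR"));
    println!("minor   = {}", env!("CARGO_PKG_VERSION_MINOR"));
    println!("patch   = {}", env!("CARGO_PKG_VERSION_PATCH"));
    println!("pre     = {}", env!("CARGO_PKG_VERSION_PRE"));
}
```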
# Tests + Formatting
I ran the new command using `cargo run -- -c version`:
```
╭────────────────────┬─────────────────────────────────────────────────╮
│ version │ 0.92.3 │
│ major │ 0 │
│ minor │ 92 │
│ patch │ 3 │
│ branch │ │
│ commit_hash │ │
│ build_os │ macos-aarch64 │
│ build_target │ aarch64-apple-darwin │
│ rust_version │ rustc 1.77.2 (25ef9e3d8 2024-04-09) │
│ rust_channel │ 1.77.2-aarch64-apple-darwin │
│ cargo_version │ cargo 1.77.2 (e52e36006 2024-03-26) │
│ build_time │ 2024-04-20 15:09:36 +02:00 │
│ build_rust_channel │ release │
│ allocator │ mimalloc │
│ features │ default, sqlite, system-clipboard, trash, which │
│ installed_plugins │ │
╰────────────────────┴─────────────────────────────────────────────────╯
```
# After Submitting
After this is merged I would like to write docs somewhere advising script
writers to use these members instead of parsing the string values. Where
should I put said docs?
# Description
This PR improves the `nu --lsp` tooltips by using nu code blocks around
the examples and a few other places.
This is what it looks like in Zed.
![Screenshot 2024-04-19 at 8 20 53 PM](https://github.com/nushell/nushell/assets/343840/20d51dcc-f3b2-4f2b-9d43-5817dd3913df)
Here it is in Helix.
![image](https://github.com/nushell/nushell/assets/343840/a9e7d6b9-cd21-4a5a-9c88-9af17a2b2363)
This coloring is far from perfect, but it's what the tree-sitter-nu
queries generate.
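
The gist of the change is just fencing the example text so clients highlight it; roughly this (a hypothetical helper, not the actual LSP code):
```rust
// Wrap a command example in a fenced `nu` code block so LSP clients
// (Zed, Helix, ...) apply their tree-sitter-nu highlighting to the hover.
fn example_hover(description: &str, example: &str) -> String {
    format!("{description}\n```nu\n{example}\n```\n")
}
```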
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
in order to change the style of the _serialized_ NUON data,
`nuon::to_nuon` takes three mutually exclusive arguments, `raw: bool`,
`tabs: Option<usize>` and `indent: Option<usize>` 🤔
this begs for an enumeration with all the possible alternatives, right?
this PR changes the signature of `nuon::to_nuon` to use `nuon::ToStyle`
which has three variants
- `Raw`: no newlines
- `Tabs(n: usize)`: newlines and `n` tabulations as indent
- `Spaces(n: usize)`: newlines and `n` spaces as indent
# User-Facing Changes
the signature of `nuon::to_nuon` changes from
```rust
to_nuon(
input: &Value,
raw: bool,
tabs: Option<usize>,
indent: Option<usize>,
span: Option<Span>,
) -> Result<String, ShellError>
```
to
```rust
to_nuon(
input: &Value,
style: ToStyle,
span: Option<Span>
) -> Result<String, ShellError>
```
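
A hedged usage sketch with the new signature (`ToStyle` variants as listed above; error and value types from `nu_protocol`):
```rust
use nu_protocol::{ShellError, Value};
use nuon::{to_nuon, ToStyle};

fn dump(value: &Value) -> Result<String, ShellError> {
    // Newlines with 4-space indentation; `ToStyle::Raw` would keep everything
    // on a single line, and `ToStyle::Tabs(1)` would indent with tabs.
    to_nuon(value, ToStyle::Spaces(4), None)
}
```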
# Tests + Formatting
# After Submitting
# Description
Close: #12514
# User-Facing Changes
`^ls | skip 1` will raise an error
```nushell
❯ ^ls | skip 1
Error: nu:🐚:only_supports_this_input_type
× Input type not supported.
╭─[entry #1:1:2]
1 │ ^ls | skip 1
· ─┬ ──┬─
· │ ╰── only list, binary or range input data is supported
· ╰── input type: raw data
╰────
```
# Tests + Formatting
Sorry, I can't add a test for this because of the following issue:
https://github.com/nushell/nushell/issues/12558
# After Submitting
N/A
# Description
In conflict with the documentation, the msgpack serializer for plugins
is actually using the compact format, which doesn't name struct fields,
and is instead dependent on their ordering, rendering them as tuples.
This is not a good idea for a robust protocol even if it makes the
serialization and deserialization faster.
I expect this to have some impact on performance, but I think the
robustness is probably worth it.
Deserialization always accepts either format, so this shouldn't cause
too many incompatibilities.
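
To illustrate the difference with `rmp-serde` directly (a toy struct, not the actual protocol types), the default serializer writes structs as positional arrays while `.with_struct_map()` names every field:
```rust
use serde::Serialize;

#[derive(Serialize)]
struct Hello {
    protocol: String,
    version: String,
}

fn main() -> Result<(), rmp_serde::encode::Error> {
    let msg = Hello { protocol: "nu-plugin".into(), version: "0.92.3".into() };

    // Compact/positional: the struct is encoded as a 2-element array.
    let compact = rmp_serde::to_vec(&msg)?;

    // Named: the struct is encoded as a map of field name -> value.
    let mut named = Vec::new();
    msg.serialize(&mut rmp_serde::Serializer::new(&mut named).with_struct_map())?;

    // The named form is larger but robust against field reordering.
    assert!(named.len() > compact.len());
    Ok(())
}
```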
# User-Facing Changes
This does technically change the protocol, but it makes it reflect the
documentation. It shouldn't break deserialization, so plugins shouldn't
necessarily need a recompile.
Performance is likely worse and I should benchmark the difference.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
The `polars dtypes` command is largely redundant since the introduction of
the `polars schema` command. The schema command also has the added benefit
that its output can be used as a parameter to other commands:
```nushell
[[a b]; [5 6] [5 7]] | polars into-df -s ($df | polars schema)
```
# User-Facing Changes
`polars dtypes` has been removed. Users should use `polars schema`
instead.
Co-authored-by: Jack Wright <jack.wright@disqo.com>
# Description
playing with the NUON format in Rust code in some plugins, we agreed
with the team it was a great time to create a standalone NUON format to
allow Rust devs to use this Nushell file format.
> **Note**
> this PR almost copy-pastes the code from
> `nu_commands/src/formats/from/nuon.rs` and
> `nu_commands/src/formats/to/nuon.rs` to `nuon/src/from.rs` and
> `nuon/src/to.rs`, with minor tweaks to make them standalone functions,
> e.g. removing the rest of the command implementations
### TODO
- [x] add tests
- [x] add documentation
# User-Facing Changes
devs will have access to a new crate, `nuon`, and two functions,
`from_nuon` and `to_nuon`
```rust
from_nuon(
input: &str,
span: Option<Span>,
) -> Result<Value, ShellError>
```
```rust
to_nuon(
input: &Value,
raw: bool,
tabs: Option<usize>,
indent: Option<usize>,
span: Option<Span>,
) -> Result<String, ShellError>
```
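
A hedged round-trip sketch using the two functions with the signatures listed above:
```rust
use nu_protocol::ShellError;
use nuon::{from_nuon, to_nuon};

fn roundtrip(text: &str) -> Result<String, ShellError> {
    let value = from_nuon(text, None)?;
    // raw = true, no tabs/indent, no span
    to_nuon(&value, true, None, None, None)
}
```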
# Tests + Formatting
i've basically taken all the tests from
`crates/nu-command/tests/format_conversions/nuon.rs` and converted them
to use `from_nuon` and `to_nuon` instead of Nushell commands
- i've created a `nuon_end_to_end` test to run both conversions with an
optional middle value to check that all is fine
> **Note**
> the `nuon::tests::read_code_should_fail_rather_than_panic` test does
> give different results locally and in the CI...
> i've left it ignored with comments to help future us :)
# After Submitting
mention that in the release notes for sure!!
# Description
As suggested by @fdncred.
It's neat that this is possible, but the particularly useful part of
this is that we can actually
test it because it doesn't have any external dependencies, unlike the
python plugin.
Right now this just implements exactly the same behavior as the python
plugin, but we could have it
exercise a few more things.
Also fixes a couple of bugs:
- `.nu` plugins were not run with `nu --stdin`, so they couldn't take
input.
- `register` couldn't be called if `--no-config-file` was set, because
it would error on trying to
update the plugin file.
# User-Facing Changes
- `nu_plugin_nu_example` plugin added.
- `register` now works in `--no-config-file` mode.
# Tests + Formatting
Tests added for `nu_plugin_nu_example`.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Add the version bump to the release script just like for python
# Description
EngineState now tracks the script currently running, instead of the
parent directory of the script. This also provides an easy way to expose
the current running script to the user (Issue #12195).
Similarly, StateWorkingSet now tracks scripts instead of directories.
`parsed_module_files` and `currently_parsed_pwd` are merged into one
variable, `scripts`, which acts like a stack for tracking the current
running script (which is on the top of the stack).
A circular import check is added for `source` operations, in addition to
module imports. A simple test case is added for circular `source`.
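Conceptually, the bookkeeping looks something like this (hypothetical names, not the actual `StateWorkingSet` fields):
```rust
use std::path::PathBuf;

// The file currently being parsed/run sits on top of the stack; trying to
// enter a file that is already on the stack is a circular import/source.
#[derive(Default)]
struct Scripts {
    stack: Vec<PathBuf>,
}

impl Scripts {
    fn enter(&mut self, script: PathBuf) -> Result<(), String> {
        if self.stack.contains(&script) {
            return Err(format!("circular import of {}", script.display()));
        }
        self.stack.push(script);
        Ok(())
    }

    fn leave(&mut self) {
        self.stack.pop();
    }

    /// The currently running script, which can be exposed to the user.
    fn current(&self) -> Option<&PathBuf> {
        self.stack.last()
    }
}
```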
# User-Facing Changes
It shouldn't have any user-facing changes.
Should close #10833 — though I'd imagine that should have already been
closed.
# Description
Very minor tweak, but it was quite noticeable when using Zellij which
relies on OSC 2 to set pane titles. Before the change:
![image](https://github.com/nushell/nushell/assets/6251883/b944bbce-2040-4886-9955-3c5b57d368e9)
Note that the default `Pane #1` is still showing for the untouched
shell, but running a command like `htop` or `ls` correctly sets the
title during / afterwards.
After this PR:
![image](https://github.com/nushell/nushell/assets/6251883/dd513cfe-923c-450f-b0f2-c66938b0d6f0)
There are now no longer any unset titles — even if the shell hasn't been
touched.
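For reference, the escape sequence involved is tiny; emitting something like this right after startup is enough to set a title (a sketch, not the actual shell-integration code):
```rust
use std::io::{self, Write};

// OSC 2 sets the window/pane title: ESC ] 2 ; <title> BEL
fn set_terminal_title(title: &str) -> io::Result<()> {
    let mut stdout = io::stdout();
    write!(stdout, "\x1b]2;{title}\x07")?;
    stdout.flush()
}
```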
**As an aside:** I feel quite strongly that (at least OSC 2) shell
integration should be enabled by default, as it is for every other Linux
shell I've used, but I'm not sure which issues the default config is
referring to. Which terminals are broken by shell integration, and could
some of the shell integrations be turned on by default after splitting
things into sub-options, as suggested in #11301?
# User-Facing Changes
You'll just have shell integrations working from right after the shell
has been launched, instead of needing to run something first.
# Tests + Formatting
Not quite sure how to test this one? Are there any other tests that
currently exist for shell integration? I couldn't quite track them
down...
# After Submitting
Let me know if you think this needs any user-facing docs changes!
# Description
This PR adds the ability to set metadata. This is especially useful for
activating LS_COLORS when using table literals.
![image](https://github.com/nushell/nushell/assets/343840/feef6433-f592-43ea-890a-38cb2df35686)
You can also set the filepath metadata, although I'm not really sure how
useful this is. We may end up removing this option entirely.
```nushell
❯ "crates" | metadata set --datasource-filepath $'(pwd)/crates' | metadata
╭────────┬───────────────────────────────────╮
│ source │ /Users/fdncred/src/nushell/crates │
╰────────┴───────────────────────────────────╯
```
No file paths are checked. You could also do this.
```nushell
❯ "crates" | metadata set --datasource-filepath $'a/b/c/d/crates' | metadata
╭────────┬────────────────╮
│ source │ a/b/c/d/crates │
╰────────┴────────────────╯
```
The command name and parameter names are still WIP. We could change
them.
There are currently 3 kinds of metadata in nushell.
```rust
pub enum DataSource {
Ls,
HtmlThemes,
FilePath(PathBuf),
}
```
I've skipped adding `HtmlThemes` because it seems to be specific to our
`to html` command only.
# Description
I had previously changed `NuLazyFrame::collect` to set the NuDataFrame's
`from_lazy` field to false to prevent conversion back to a lazy frame. It
appears there are cases where this conversion should happen. Instead, I am
only setting `from_lazy=false` inside the `polars collect` command.
[Related discord message](https://discord.com/channels/601130461678272522/1227612017171501136/1230600465159421993)
Co-authored-by: Jack Wright <jack.wright@disqo.com>
# Description
Adds a `Box` around the `ImportPattern` in `Expr` which decreases the
size of `Expr` from 152 to 64 bytes (and `Expression` from 216 to 128
bytes). This seems to speed up parsing a little bit according to the
benchmarks (main is top, PR is bottom):
```
benchmarks fastest │ slowest │ median │ mean │ samples │ iters
benchmarks fastest │ slowest │ median │ mean │ samples │ iters
├─ parser_benchmarks │ │ │ │ │
├─ parser_benchmarks │ │ │ │ │
│ ├─ parse_default_config_file 2.287 ms │ 4.532 ms │ 2.311 ms │ 2.437 ms │ 100 │ 100
│ ├─ parse_default_config_file 2.255 ms │ 2.781 ms │ 2.281 ms │ 2.312 ms │ 100 │ 100
│ ╰─ parse_default_env_file 421.8 µs │ 824.6 µs │ 494.3 µs │ 527.5 µs │ 100 │ 100
│ ╰─ parse_default_env_file 402 µs │ 486.6 µs │ 414.8 µs │ 416.2 µs │ 100 │ 100
```
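The mechanism is the usual one: an enum is as large as its biggest variant, so boxing the large `ImportPattern` payload shrinks every `Expr`. A minimal illustration with stand-in types:
```rust
// Stand-in for a large AST node; the real ImportPattern is similarly bulky.
struct ImportPattern {
    parts: [u64; 16],
}

enum ExprInline {
    Int(i64),
    ImportPattern(ImportPattern),
}

enum ExprBoxed {
    Int(i64),
    ImportPattern(Box<ImportPattern>),
}

fn main() {
    assert!(std::mem::size_of::<ExprBoxed>() < std::mem::size_of::<ExprInline>());
    println!(
        "inline: {} bytes, boxed: {} bytes",
        std::mem::size_of::<ExprInline>(),
        std::mem::size_of::<ExprBoxed>()
    );
}
```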
# Description
Remove unused/effect-less features, make sure we show all relevant
features in `version`
# User-Facing Changes
- **Remove unused feature `wasi`**
- will cause failure to build should you enable it. Otherwise no effect
- **Include feat `system-clipboard` in `version`**
# Description
This PR adds a `ListItem` enum to our set of AST types. It encodes the
two possible expressions inside a list expression: a singular item or a
spread. This is similar to the existing `RecordItem` enum. Adding
`ListItem` allows us to remove the existing `Expr::Spread` case which
was previously used for list spreads. As a consequence, this guarantees
(via the type system) that spreads can only ever occur inside lists,
records, or as command args.
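Roughly, the new type has this shape (field types simplified; the real definition lives alongside `RecordItem` in the AST crate):
```rust
// Stand-ins so the sketch is self-contained.
struct Expression;
struct Span;

enum ListItem {
    /// A single element of the list literal.
    Item(Expression),
    /// `...$spread` inside a list literal; the span covers the spread operator.
    Spread(Span, Expression),
}
```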
This PR also does a little bit of cleanup in relevant parser code.
# Description
A `Duration` cannot be negative, and an underflow causes a panic.
This should fix #12539, as from what I can tell that bug was caused in
`nu-explore:📟:events` by subtracting durations, but I figured
this might be more widespread, and saturating to zero generally makes
sense.
I also added the relevant clippy lint to try to prevent this from
happening in the future. I can't think of a reason we would ever want to
subtract durations without checking first.
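The pattern this encourages looks like the following (assuming the lint in question is `clippy::unchecked_duration_subtraction`):
```rust
use std::time::{Duration, Instant};

fn elapsed_since(start: Instant) -> Duration {
    // Saturates to Duration::ZERO if `start` is somehow in the future.
    Instant::now().saturating_duration_since(start)
}

fn remaining(total: Duration, used: Duration) -> Duration {
    // `total - used` would panic on underflow; this clamps to zero instead.
    total.saturating_sub(used)
}
```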
cc @fdncred
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
The implementation of this function had a few issues before:
- It didn't check that the `cp` pointer is actually ahead of the `start`
pointer, so `len` could potentially underflow and wrap around, which
would be a violation of memory safety
- It used `Vec::from_raw_parts` even though the buffer is borrowed, not
owned. Although `std::mem::forget` is used later to ensure the
destructor doesn't run, there is a risk that the destructor would run if
a panic happened during `String::from_utf8_unchecked`, which would lead
to a `free()` of a pointer we don't own
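
One way to avoid both problems is to bound-check the pointers and borrow the bytes instead of constructing an owned buffer; a sketch of the idea (not the exact fix):
```rust
/// Safety: both pointers must come from the same allocation and remain valid
/// for the returned lifetime; the caller chooses `'a` accordingly.
unsafe fn str_between<'a>(start: *const u8, cp: *const u8) -> Option<&'a str> {
    // Guard against `cp` being behind `start`, which would underflow the length.
    if (cp as usize) < (start as usize) {
        return None;
    }
    let len = cp as usize - start as usize;
    // Borrow the bytes rather than pretending to own them; nothing here will
    // ever run a destructor on the buffer, even if validation fails.
    let bytes = std::slice::from_raw_parts(start, len);
    std::str::from_utf8(bytes).ok()
}
```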
Bumps [rmp-serde](https://github.com/3Hren/msgpack-rust) from 1.1.2 to
1.2.0.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/3Hren/msgpack-rust/commits/rmp-serde/v1.2.0">compare
view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rmp-serde&package-manager=cargo&previous-version=1.1.2&new-version=1.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This is good practice, as all our iterators never return a value after
reaching `None`.
The benefit should be minimal, as only `Iterator::fuse` is directly
specialized, and it is itself rarely used (sometimes in `itertools`
adaptors).
Thus it is mostly a documentation change.
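Presumably this is about marking iterators as `std::iter::FusedIterator`; a minimal sketch of what such an impl asserts:
```rust
use std::iter::FusedIterator;

struct Countdown {
    remaining: usize,
}

impl Iterator for Countdown {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(self.remaining)
        }
    }
}

// Safe to assert: once `remaining` hits 0 this keeps returning `None`,
// so `Iterator::fuse` can become a no-op wrapper.
impl FusedIterator for Countdown {}
```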
# Description
When a closure is provided to `group-by`, errors that occur in the
closure are currently ignored. That is, `group-by` will fall back and
use the `"error"` key if an error occurs. For example, the code snippet
below will group all `ls` entries under the `"error"` column.
```nushell
ls | group-by { get nope }
```
This PR changes `group-by` to instead bubble up any errors triggered
inside the closure. In addition, this PR also does some refactoring and
cleanup inside `group-by`.
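In plain Rust the behavioral change looks roughly like this (illustrative types, not the command's implementation): propagate the first closure error instead of filing failures under an `"error"` bucket.
```rust
use std::collections::HashMap;

type Value = i64;
type ShellError = String;

fn group_by<F>(items: &[Value], mut key: F) -> Result<HashMap<String, Vec<Value>>, ShellError>
where
    F: FnMut(&Value) -> Result<String, ShellError>,
{
    let mut groups: HashMap<String, Vec<Value>> = HashMap::new();
    for item in items {
        // Previously an Err here was swallowed and the item grouped under
        // "error"; now it bubbles up to the caller.
        let group = key(item)?;
        groups.entry(group).or_default().push(*item);
    }
    Ok(groups)
}
```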
# User-Facing Changes
Errors are now returned from the closure provided to `group-by` instead
of falling back to the `"error"` group/key.
# Description
If a panic happens during a plugin call, it currently just hangs Nushell,
because the panic always happens outside of the main thread and the
plugin stays running without ever producing a response to the call.
This adds a panic handler that calls `exit(1)` after the unwind finishes
to the plugin runner. The panic error is still printed to stderr as
always, and waiting for the unwind to finish helps to ensure that
anything on the stack with `Drop` behavior that needed to run still
runs, at least on that thread.
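Schematically the runner now does something like this (hypothetical `serve_commands` entry point, not the real code):
```rust
use std::panic;
use std::process;

fn run_plugin() {
    // Let the unwind finish so `Drop` impls on this thread still run...
    let result = panic::catch_unwind(|| {
        serve_commands(); // stand-in for the real plugin runner
    });
    // ...then exit non-zero so the engine sees the call fail instead of hanging.
    // The panic message was already printed to stderr by the default hook.
    if result.is_err() {
        process::exit(1);
    }
}

fn serve_commands() { /* ... */ }
```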
# User-Facing Changes
Panics now look like this, which is what they looked like before the
plugin behavior was moved to a separate thread:
```
thread 'plugin runner (primary)' panicked at crates/nu_plugin_example/src/commands/main.rs:45:9:
Test panic
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error: nu:🐚:plugin_failed_to_decode
× Plugin failed to decode: Failed to receive response to plugin call
```
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
A little refactor that uses `working_set.error` rather than
`working_set.parse_errors.push`, addressing an issue reported here:
https://github.com/nushell/nushell/pull/12238
> Inconsistent error reporting. Usage of both working_set.error() and
> working_set.parse_errors.push(). Using ParseError::Expected for an
> invalid variable name when there's ParseError::VariableNotValid (from
> parser.rs:5237). Checking variable names manually when there's
> is_variable() (from parser.rs:2905).
# User-Facing Changes
N/A
# Tests + Formatting
Done
# Description
Remove a couple of legacy fields (`input_type`, `output_type`), as well as
`var_id`, which is optional and not required for deserialization.
I think that until I document this in the plugin protocol reference, most
people will probably be using this example to get started, so it should be
as correct as possible.
# After Submitting
- [ ] TODO: document `Signature` in plugin protocol reference
# Description
This is just some cleanup. I moved `to_pipeline_data` and `to_cache_value`
to the `CustomValueSupport` trait, where I should've put them to begin
with.
Co-authored-by: Jack Wright <jack.wright@disqo.com>
# Description
Work for #7149
- **Error `with-env` given uneven count in list form**
- **Fix `with-env` `CantConvert` to record**
- **Error `with-env` when given protected env vars**
- **Deprecate list/table input of vars to `with-env`**
- **Remove examples for deprecated input**
# User-Facing Changes
## Deprecation of the following forms
```
> with-env [MYENV "my env value"] { $env.MYENV }
my env value
> with-env [X Y W Z] { $env.X }
Y
> with-env [[X W]; [Y Z]] { $env.W }
Z
```
## Recommended standardized form
```
# Set by key-value record
> with-env {X: "Y", W: "Z"} { [$env.X $env.W] }
╭───┬───╮
│ 0 │ Y │
│ 1 │ Z │
╰───┴───╯
```
## (Side effect) Repeated definitions in an env shorthand are now
disallowed
```
> FOO=bar FOO=baz $env
Error: nu:🐚:column_defined_twice
× Record field or table column used twice: FOO
╭─[entry #1:1:1]
1 │ FOO=bar FOO=baz $env
· ─┬─ ─┬─
· │ ╰── field redefined here
· ╰── field first defined here
╰────
```
# Description
@maxim-uvarov discovered the following error:
```
> [[a b]; [6 2] [1 4] [4 1]] | polars into-lazy | polars sort-by a | polars unique --subset [a]
Error: × Error using as series
╭─[entry #1:1:68]
1 │ [[a b]; [6 2] [1 4] [4 1]] | polars into-lazy | polars sort-by a | polars unique --subset [a]
· ──────┬──────
· ╰── dataframe has more than one column
╰────
```
During investigation, I discovered the root cause was that the lazy frame was incorrectly converted back to an eager dataframe. In order to keep this from happening, I explicitly set that the dataframe did not come from an eager frame. This causes the conversion logic to not attempt to convert the dataframe later in the pipeline.
---------
Co-authored-by: Jack Wright <jack.wright@disqo.com>
# Description
Fixes #12520
# User-Facing Changes
Breaking change:
Any operation parsing input with `PWD` to set the environment will now
fail with `ShellError::AutomaticEnvVarSetManually`.
Furthermore, transactions containing the special env vars will be
rejected before executing any modifications. Previously, valid variables
before the violation were changed, while valid variables after the
violation were left untouched.
## `PWD` handling.
Now failing
```
{PWD: "/trolling"} | load-env
```
already failing
```
load-env {PWD: "/trolling"}
```
## Error management
```
> load-env {MY_VAR1: foo, PWD: "/trolling", MY_VAR2: bar}
Error: nu:🐚:automatic_env_var_set_manually
× PWD cannot be set manually.
╭─[entry #1:1:2]
1 │ load-env {MY_VAR1: foo, PWD: "/trolling", MY_VAR2: bar}
· ────┬───
· ╰── cannot set 'PWD' manually
╰────
help: The environment variable 'PWD' is set automatically by Nushell and cannot be set manually.
```
### Before:
```
> $env.MY_VAR1
foo
> $env.MY_VAR2
Error: nu:🐚:name_not_found
....
```
### After:
```
> $env.MY_VAR1
Error: nu:🐚:name_not_found
....
> $env.MY_VAR2
Error: nu:🐚:name_not_found
....
```
# After Submitting
We need to check if any integrations rely on this hack.
# Description
Adds support for running plugins using local socket communication
instead of stdio. This will be an optional thing that not all plugins
have to support.
This frees up stdio so that plugins can use it to create terminal UIs,
cc @amtoine, @fdncred.
This uses the [`interprocess`](https://crates.io/crates/interprocess)
crate (298 stars, MIT license, actively maintained), which seems to be
the best option for cross-platform local socket support in Rust. On
Windows, a local socket name is provided. On Unixes, it's a path. The
socket name is kept to a relatively small size because some operating
systems have pretty strict limits on the whole path (~100 chars), so on
macOS for example we prefer `/tmp/nu.{pid}.{hash64}.sock` where the hash
includes the plugin filename and timestamp to be unique enough.
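The name generation is roughly like the sketch below (a `DefaultHasher`-based illustration; the real implementation may hash differently):
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{SystemTime, UNIX_EPOCH};

// Keep the whole path small: some OSes cap socket paths at ~100 chars.
fn make_local_socket_name(plugin_filename: &str) -> String {
    let mut hasher = DefaultHasher::new();
    plugin_filename.hash(&mut hasher);
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_nanos()
        .hash(&mut hasher);
    let hash64 = hasher.finish();
    let pid = std::process::id();
    // On Unixes this is a filesystem path; on Windows only a name is needed.
    format!("/tmp/nu.{pid}.{hash64:x}.sock")
}
```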
This also adds an API for moving plugins in and out of the foreground
group, which is relevant for Unixes where direct terminal control
depends on that.
TODO:
- [x] Generate local socket path according to OS conventions
- [x] Add support for passing `--local-socket` to the plugin executable
instead of `--stdio`, and communicating over that instead
- [x] Test plugins that were broken, including
[amtoine/nu_plugin_explore](https://github.com/amtoine/nu_plugin_explore)
- [x] Automatically upgrade to using local sockets when supported,
falling back if it doesn't work, transparently to the user without any
visible error messages
- Added protocol feature: `LocalSocket`
- [x] Reset preferred mode to `None` on `register`
- [x] Allow plugins to detect whether they're running on a local socket
and can use stdio freely, so that TUI plugins can just produce an error
message otherwise
- Implemented via `EngineInterface::is_using_stdio()`
- [x] Clean up foreground state when plugin command exits on the engine
side too, not just whole plugin
- [x] Make sure tests for failure cases work as intended
- `nu_plugin_stress_internals` added
# User-Facing Changes
- TUI plugins work
- Non-Rust plugins could optionally choose to use this
- This might behave differently, so will need to test it carefully
across different operating systems
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Document local socket option in plugin contrib docs
- [ ] Document how to do a terminal UI plugin in plugin contrib docs
- [ ] Document: `EnterForeground` engine call
- [ ] Document: `LeaveForeground` engine call
- [ ] Document: `LocalSocket` protocol feature
# Description
In the plugin protocol, I had used `#[serde(untagged)]` on the `Stream`
variant to make it smaller and include all of the stream messages at the
top level, but unfortunately this causes serde to make really unhelpful
errors if anything fails to decode anywhere:
```
Error: nu:🐚:plugin_failed_to_decode
× Plugin failed to decode: data did not match any variant of untagged enum PluginOutput
```
If you are trying to develop something using the plugin protocol
directly, this error is incredibly unhelpful. Even as a user, this
basically just says 'something is wrong'. With this change, the errors
are much better:
```
Error: nu:🐚:plugin_failed_to_decode
× Plugin failed to decode: unknown variant `PipelineDatra`, expected one of `Error`, `Signature`, `Ordering`, `PipelineData` at line 2 column 37
```
The only downside is it means I have to duplicate all of the
`StreamMessage` variants manually, but there's only 4 of them and
they're small.
This doesn't actually change the protocol at all - everything is still
identical on the wire.
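A toy comparison with `serde_json` shows the difference in error quality (made-up enums, not the real protocol types):
```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
#[serde(untagged)]
enum Untagged {
    Ordering(i64),
    Data { id: u32, bytes: Vec<u8> },
}

#[derive(Deserialize, Debug)]
enum Tagged {
    Ordering(i64),
    Data { id: u32, bytes: Vec<u8> },
}

fn main() {
    let bad = r#"{"Datra":{"id":0,"bytes":[]}}"#;
    // Prints: "data did not match any variant of untagged enum Untagged"
    println!("{}", serde_json::from_str::<Untagged>(bad).unwrap_err());
    // Prints: "unknown variant `Datra`, expected `Ordering` or `Data`", plus position
    println!("{}", serde_json::from_str::<Tagged>(bad).unwrap_err());
}
```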
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
When starting the LSP server, the configuration file and environment
file are used to configure the LSP engine unless `--no-config-file` is
provided.
This PR provides an improvement related to #10794.
CC: @fdncred
# Description
This adds a `SharedCow` type as a transparent copy-on-write pointer that
clones to unique on mutate.
As an initial test, the `Record` within `Value::Record` is shared.
There are some pretty big wins for performance. I'll post benchmark
results in a comment. The biggest winner is nested access, as that would
have cloned the records for each cell path follow before and it doesn't
have to anymore.
The reusability of the `SharedCow` type is nice and I think it could be
used to clean up the previous work I did with `Arc` in `EngineState`.
It's meant to be a mostly transparent clone-on-write that just clones on
`.to_mut()` or `.into_owned()` if there are actually multiple
references, but avoids cloning if the reference is unique.
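The core of the type is small; a minimal sketch of the clone-on-write idea (not the exact implementation):
```rust
use std::sync::Arc;

#[derive(Clone)]
struct SharedCow<T: Clone>(Arc<T>);

impl<T: Clone> SharedCow<T> {
    fn new(value: T) -> Self {
        Self(Arc::new(value))
    }

    /// Deep-clones the inner value only if other references exist.
    fn to_mut(&mut self) -> &mut T {
        Arc::make_mut(&mut self.0)
    }

    /// Unwraps without cloning when this is the only reference.
    fn into_owned(self) -> T {
        Arc::try_unwrap(self.0).unwrap_or_else(|shared| (*shared).clone())
    }
}
```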
# User-Facing Changes
- `Value::Record` field is a different type (plugin authors)
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] use for `EngineState`
- [ ] use for `Value::List`
* Fixes #12482
* The initial PR failed due to CI issues at the time; a subsequent rebase
failed, so I'm creating a new PR.
# Description
Use https://github.com/ndjson/ndjson-spec for help links instead of
former spam site
# User-Facing Changes
Link changed for `help to ndjson` and `help from ndjson`.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
Added a method for getting the base value for a PluginCustomValue.
cc: @devyn
---------
Co-authored-by: Jack Wright <jack.wright@disqo.com>
# Description
From @maxim-uvarov's
[post](https://discord.com/channels/601130461678272522/1227612017171501136/1228656319704203375).
When calling `to-lazy` back to back in a pipeline, an error should not
occur, but it did:
```
> [[a b]; [6 2] [1 4] [4 1]] | polars into-lazy | polars into-lazy
Error: nu:🐚:cant_convert
× Can't convert to NuDataFrame.
╭─[entry #1:1:30]
1 │ [[a b]; [6 2] [1 4] [4 1]] | polars into-lazy | polars into-lazy
· ────────┬───────
· ╰── can't convert NuLazyFrameCustomValue to NuDataFrame
╰────
```
This pull request ensures that custom values of `NuLazyFrameCustomValue` are properly converted when passed in.
Co-authored-by: Jack Wright <jack.wright@disqo.com>