# Description
On 64-bit platforms, the current size of `Value` is 56 bytes. The
limiting variants were `Closure` and `Range`. Boxing the two reduces the
size of `Value` to 48 bytes. This is the minimal size possible with our
current 16-byte `Span` and the 24-byte `Vec` containers we use in
several variants. (Note the full extra 8 bytes needed for the
discriminant or other smaller values due to the 8-byte alignment of
`usize`.)
This leads to a size reduction of ~15% for `Value` and should overall
be beneficial as both `Range` and `Closure` are rarely used compared to
the primitive types or even our general container types.
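As a minimal sketch of why boxing helps (with stand-in types, not nushell's actual definitions): an enum is as large as its largest variant plus the discriminant, so shrinking the largest variant to a pointer shrinks every value of the enum.
```rust
#![allow(dead_code)]
use std::mem::size_of;

// Stand-in types for illustration only, not nushell's real definitions.
struct Span {
    start: usize,
    end: usize,
} // 16 bytes

struct Closure {
    data: [usize; 5],
} // a deliberately large payload

enum Unboxed {
    Int(i64, Span),
    Closure(Closure, Span), // the largest variant determines the enum size
}

enum Boxed {
    Int(i64, Span),
    Closure(Box<Closure>, Span), // the payload now lives behind an 8-byte pointer
}

fn main() {
    println!("unboxed: {} bytes", size_of::<Unboxed>());
    println!("boxed:   {} bytes", size_of::<Boxed>());
}
```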
# User-Facing Changes
Less memory used, potential runtime benefits.
(Too late in the evening to run the benchmarks myself right now)
# Description
Refactors `describe` a bit. Namely, I added a `Description` enum to get
rid of `compact_primitive_description` and its awkward `Value` pattern
matching.
# Description
Does some misc changes to `ListStream`:
- Moves it into its own module/file separate from `RawStream`.
- `ListStream`s now have an associated `Span`.
- This required changes to `ListStreamInfo` in `nu-plugin`. Not sure if
this is a breaking change for the plugin protocol.
- Hides the internals of `ListStream` but also adds a few more methods.
- This includes two functions to more easily alter a stream (these take
a `ListStream` and return a `ListStream` instead of having to go through
the whole `into_pipeline_data(..)` route).
- `map`: takes a `FnMut(Value) -> Value`
- `modify`: takes a function to modify the inner stream.
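A rough sketch of the shape of these two helpers over a simplified stream type (illustrative only, not the actual `nu-protocol` definitions):
```rust
#![allow(dead_code)]
// Simplified stand-ins for the real nu-protocol types.
struct Value(i64);
struct Span;

struct ListStream {
    stream: Box<dyn Iterator<Item = Value>>,
    span: Span,
}

impl ListStream {
    // Apply a function to every value while keeping the stream lazy.
    fn map(self, mut f: impl FnMut(Value) -> Value + 'static) -> ListStream {
        ListStream {
            stream: Box::new(self.stream.map(move |v| f(v))),
            span: self.span,
        }
    }

    // Replace the inner iterator wholesale.
    fn modify<I>(self, f: impl FnOnce(Box<dyn Iterator<Item = Value>>) -> I) -> ListStream
    where
        I: Iterator<Item = Value> + 'static,
    {
        ListStream {
            stream: Box::new(f(self.stream)),
            span: self.span,
        }
    }
}

fn main() {
    let stream = ListStream {
        stream: Box::new((1..=3).map(Value)),
        span: Span,
    };
    // Double each value, then keep only the first two.
    let altered = stream.map(|Value(x)| Value(x * 2)).modify(|s| s.take(2));
    for Value(x) in altered.stream {
        println!("{x}");
    }
}
```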
# Description
Removes lazy records from the language, following from the reasons
outlined in #12622. Namely, this should make semantics more clear and
will eliminate concerns regarding maintainability.
# User-Facing Changes
- Breaking change: `lazy make` is removed.
- Breaking change: `describe --collect-lazyrecords` flag is removed.
- `sys` and `debug info` now return regular records.
# After Submitting
- Update nushell book if necessary.
- Explore new `sys` and `debug info` APIs to prevent them from taking
too long (e.g., subcommands or taking an optional column/cell-path
argument).
# Description
Bandaid fix for #12643, where it is not possible to get the exit code of
a failed external command while also having the external command inherit
nushell's stdout and stderr. This changes `try` so that the exit code of
the external command is available in the `catch` block via the usual
`$env.LAST_EXIT_CODE`.
# Tests + Formatting
Added one test.
# After Submitting
Rework I/O redirection and possibly exit codes.
# Description
This PR does miscellaneous cleanup in some of the commands from
`nu-cmd-lang`.
# User-Facing Changes
None.
# After Submitting
Cleanup the other commands in `nu-cmd-lang`.
# Description
In this week's nushell meeting, we decided to go ahead with #12622 and
remove lazy records in 0.94.0. For 0.93.0, we will only deprecate `lazy
make`, and so this PR makes `lazy make` print a deprecation warning.
# User-Facing Changes
None, besides the deprecation warning.
# After Submitting
Remove lazy records.
# Description
Continuing from #12568, this PR further reduces the size of `Expr` from
64 to 40 bytes. It also reduces `Expression` from 128 to 96 bytes and
`Type` from 32 to 24 bytes.
This was accomplished by:
- for `Expr` with multiple fields (e.g., `Expr::Thing(A, B, C)`),
merging the fields into new AST struct types and then boxing this struct
(e.g. `Expr::Thing(Box<ABC>)`).
- replacing `Vec<T>` with `Box<[T]>` in multiple places. `Expr`s and
`Expression`s should rarely be mutated, if at all, so this optimization
makes sense.
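A small sketch of why both changes shrink the types (stand-in definitions, not the real AST):
```rust
#![allow(dead_code)]
use std::mem::size_of;

// Stand-in for a merged-and-boxed multi-field payload.
struct Abc {
    a: u64,
    b: u64,
    c: u64,
}

enum ExprUnboxed {
    Thing(u64, u64, u64), // 24 bytes of payload stored inline in the variant
    Other,
}

enum ExprBoxed {
    Thing(Box<Abc>), // the payload now sits behind a single 8-byte pointer
    Other,
}

fn main() {
    println!("inline fields: {} bytes", size_of::<ExprUnboxed>());
    println!("boxed struct:  {} bytes", size_of::<ExprBoxed>());
    // Box<[T]> stores only a pointer and a length; Vec<T> also stores a capacity.
    println!("Vec<u8>:   {} bytes", size_of::<Vec<u8>>()); // 24 on 64-bit
    println!("Box<[u8]>: {} bytes", size_of::<Box<[u8]>>()); // 16 on 64-bit
}
```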
By reducing the size of these types, I didn't notice a large performance
improvement (at least compared to #12568). But this PR does reduce the
memory usage of nushell. My config is somewhat light so I only noticed a
difference of 1.4MiB (38.9MiB vs 37.5MiB).
---------
Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
# Description
This adds an extension trait to `Result` that wraps errors in `Spanned`,
saving the effort of calling `.map_err(|err| err.into_spanned(span))`
every time. This will hopefully make it even more likely that someone
will want to use a spanned `io::Error` and make it easier to remove the
impl for `From<io::Error> for ShellError` because that doesn't have span
information.
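A minimal sketch of the pattern (names like `ErrSpan`/`err_span` are illustrative, not necessarily the exact nushell API):
```rust
#[derive(Debug, Clone, Copy)]
struct Span {
    start: usize,
    end: usize,
}

#[derive(Debug)]
struct Spanned<T> {
    item: T,
    span: Span,
}

// Extension trait: any Result can have its error wrapped with a span.
trait ErrSpan<T, E> {
    fn err_span(self, span: Span) -> Result<T, Spanned<E>>;
}

impl<T, E> ErrSpan<T, E> for Result<T, E> {
    fn err_span(self, span: Span) -> Result<T, Spanned<E>> {
        self.map_err(|err| Spanned { item: err, span })
    }
}

fn main() {
    let span = Span { start: 0, end: 11 };
    let res = std::fs::read_to_string("missing.txt");
    // Instead of `.map_err(|err| err.into_spanned(span))` at every call site:
    if let Err(err) = res.err_span(span) {
        println!("error at {}..{}: {}", err.span.start, err.span.end, err.item);
    }
}
```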
# Description
- Plugin signatures are now saved to `plugin.msgpackz`, which is
brotli-compressed MessagePack.
- The file is updated incrementally, rather than writing all plugin
commands in the engine every time.
- The file always contains the result of the `Signature` call to the
plugin, even if commands were removed.
- Invalid data for a particular plugin just causes an error to be
reported, but the rest of the plugins can still be parsed.
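A sketch of how such a file could be read, assuming the `brotli` and `rmp_serde` crates and a hypothetical entry type (the real schema nushell uses is not shown here):
```rust
use serde::Deserialize;
use std::fs::File;

// Hypothetical shape of a registry entry, for illustration only.
#[derive(Debug, Deserialize)]
struct PluginEntry {
    name: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("plugin.msgpackz")?;
    // Stream the file through a brotli decompressor, then decode the MessagePack payload.
    let reader = brotli::Decompressor::new(file, 4096);
    let entries: Vec<PluginEntry> = rmp_serde::from_read(reader)?;
    for entry in &entries {
        println!("registered plugin: {}", entry.name);
    }
    Ok(())
}
```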
# User-Facing Changes
- The plugin file has a different filename, and it's not a nushell
script.
- The default `plugin.nu` file will be automatically migrated the first
time, but not other plugin config files.
- We don't currently provide any utilities that could help edit this
file, beyond `plugin add` and `plugin rm`
- `from msgpackz`, `to msgpackz` could also help
- New commands: `plugin add`, `plugin rm`
# Tests + Formatting
Tests added for the format and for the invalid handling.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Check for documentation changes
- [ ] Definitely needs release notes
# Description
`Value` describes the types of first-class values that users and scripts
can create, manipulate, pass around, and store. However, `Block`s are
not first-class values in the language, so this PR removes it from
`Value`. This removes some unnecessary code, and this change should be
invisible to the user except for the change to `scope modules` described
below.
# User-Facing Changes
Breaking change: the output of `scope modules` was changed so that
`env_block` is now `has_env_block` which is a boolean value instead of a
`Block`.
# After Submitting
Update the language guide possibly.
Closes #12561
# Description
Add version details in `version`'s output. The intended use is for
third-party tools to be able to quickly check version numbers without
having to parse `(version).version` or `$env.NU_VERSION`.
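For reference, a sketch of how the numeric fields can be derived at compile time from Cargo's built-in environment variables (illustrative; the actual `version` implementation may differ):
```rust
// Cargo sets these env vars for every build, so no extra parsing crate is needed.
fn version_parts() -> (u64, u64, u64, &'static str) {
    let major: u64 = env!("CARGO_PKG_VERSION_MAJOR").parse().expect("major");
    let minor: u64 = env!("CARGO_PKG_VERSION_MINOR").parse().expect("minor");
    let patch: u64 = env!("CARGO_PKG_VERSION_PATCH").parse().expect("patch");
    let pre = env!("CARGO_PKG_VERSION_PRE"); // empty string when there is no pre-release part
    (major, minor, patch, pre)
}

fn main() {
    let (major, minor, patch, pre) = version_parts();
    println!("major={major} minor={minor} patch={patch} pre={pre:?}");
}
```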
# User-Facing Changes
This adds 4 new values to the record from `version`:
```
...
│ major │ 0 │
│ minor │ 92 │
│ patch │ 3 │
│ pre │ a-value │
...
```
`pre` is optional and won't be present most of the time I think.
# Tests + Formatting
I ran the new command using `cargo run -- -c version`:
```
╭────────────────────┬─────────────────────────────────────────────────╮
│ version │ 0.92.3 │
│ major │ 0 │
│ minor │ 92 │
│ patch │ 3 │
│ branch │ │
│ commit_hash │ │
│ build_os │ macos-aarch64 │
│ build_target │ aarch64-apple-darwin │
│ rust_version │ rustc 1.77.2 (25ef9e3d8 2024-04-09) │
│ rust_channel │ 1.77.2-aarch64-apple-darwin │
│ cargo_version │ cargo 1.77.2 (e52e36006 2024-03-26) │
│ build_time │ 2024-04-20 15:09:36 +02:00 │
│ build_rust_channel │ release │
│ allocator │ mimalloc │
│ features │ default, sqlite, system-clipboard, trash, which │
│ installed_plugins │ │
╰────────────────────┴─────────────────────────────────────────────────╯
```
# After Submitting
After this is merged, I would like to write docs somewhere advising
script writers to use these members instead of the string values. Where
should I put said docs?
# Description
Remove unused/effect-less features and make sure we show all relevant
features in `version`.
# User-Facing Changes
- **Remove unused feature `wasi`**
- will cause failure to build should you enable it. Otherwise no effect
- **Include feat `system-clipboard` in `version`**
# Description
This adds a `SharedCow` type as a transparent copy-on-write pointer that
clones to unique on mutate.
As an initial test, the `Record` within `Value::Record` is shared.
There are some pretty big wins for performance. I'll post benchmark
results in a comment. The biggest winner is nested access, which
previously cloned the record for each cell-path follow and no longer
has to.
The reusability of the `SharedCow` type is nice and I think it could be
used to clean up the previous work I did with `Arc` in `EngineState`.
It's meant to be a mostly transparent clone-on-write that just clones on
`.to_mut()` or `.into_owned()` if there are actually multiple
references, but avoids cloning if the reference is unique.
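A minimal sketch of the clone-on-write idea built on `Arc::make_mut` (the real `SharedCow` has more to it, but the core behavior is this):
```rust
use std::sync::Arc;

// Minimal sketch: clone-on-write wrapper around Arc.
#[derive(Debug, Clone)]
struct SharedCow<T: Clone>(Arc<T>);

impl<T: Clone> SharedCow<T> {
    fn new(value: T) -> Self {
        Self(Arc::new(value))
    }

    // Cheap shared read access.
    fn get(&self) -> &T {
        &self.0
    }

    // Clones the inner value only if other references exist.
    fn to_mut(&mut self) -> &mut T {
        Arc::make_mut(&mut self.0)
    }

    // Unwraps without cloning when the reference is unique.
    fn into_owned(self) -> T {
        Arc::try_unwrap(self.0).unwrap_or_else(|arc| (*arc).clone())
    }
}

fn main() {
    let mut a = SharedCow::new(vec![1, 2, 3]);
    let b = a.clone(); // shares the same allocation
    a.to_mut().push(4); // clones here, because `b` still holds a reference
    assert_eq!(a.get(), &vec![1, 2, 3, 4]);
    assert_eq!(b.get(), &vec![1, 2, 3]);
    let owned = b.into_owned(); // no clone: after the to_mut above, b's allocation is unique again
    println!("{owned:?}");
}
```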
# User-Facing Changes
- `Value::Record` field is a different type (plugin authors)
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] use for `EngineState`
- [ ] use for `Value::List`
# Description
I spent a while trying to come up with a good name for what is currently
`IoStream`. Looking back, this name is not the best, because it:
1. Implies that it is a stream, when all it really does is specify
the output destination for a stream/pipeline.
2. Implies that it handles input and output, when it really only handles
output.
So, this PR renames `IoStream` to `OutDest` instead, which should be
more clear.
# Description
Edits the `echo` help text to mention the `print` command.
---------
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
# Description
Currently, `Range` is a struct with `from`, `to`, and `incr` fields,
which are all of type `Value`. This PR changes `Range` to be an enum over
`IntRange` and `FloatRange` for better type safety / stronger compile
time guarantees.
Fixes: #11778, #11777, #11776, #11775, #11774, #11773, #11769.
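A sketch of what the split looks like conceptually (the field layout here is illustrative, not the real definition):
```rust
#![allow(dead_code)]
use std::ops::Bound;

// Illustrative field layout; the real IntRange/FloatRange may differ.
enum Range {
    IntRange(IntRange),
    FloatRange(FloatRange),
}

struct IntRange {
    start: i64,
    step: i64,
    end: Bound<i64>,
}

struct FloatRange {
    start: f64,
    step: f64,
    end: Bound<f64>,
}

fn main() {
    let r = Range::IntRange(IntRange {
        start: 0,
        step: 2,
        end: Bound::Excluded(10),
    });
    // Working with an int range no longer requires runtime checks on three `Value`s.
    if let Range::IntRange(ir) = r {
        let mut current = ir.start;
        while match ir.end {
            Bound::Excluded(e) => current < e,
            Bound::Included(e) => current <= e,
            Bound::Unbounded => true,
        } {
            print!("{current} ");
            current += ir.step;
        }
        println!();
    }
}
```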
# User-Facing Changes
Hopefully none, besides bug fixes.
Although, the `serde` representation might have changed.
- [x] `cargo hack` feature flag compatibility run
- [x] reedline released and pinned
- [x] `nu-plugin-test-support` added to release script
- [x] dependency tree checked
- [x] release notes
# Description
The second `Value` is redundant and will consume five extra bytes on
each transmission of a custom value to/from a plugin.
# User-Facing Changes
This is a breaking change to the plugin protocol.
The [example in the protocol
reference](https://www.nushell.sh/contributor-book/plugin_protocol_reference.html#value)
becomes
```json
{
"Custom": {
"val": {
"type": "PluginCustomValue",
"name": "database",
"data": [36, 190, 127, 40, 12, 3, 46, 83],
"notify_on_drop": true
},
"span": {
"start": 320,
"end": 340
}
}
}
```
instead of
```json
{
"CustomValue": {
...
}
}
```
# After Submitting
Update plugin protocol reference
# Description
When implementing a `Command`, one must also import all the types
present in the function signatures for `Command`. This makes it so that
we often import the same set of types in each command implementation
file. E.g., something like this:
```rust
use nu_protocol::ast::Call;
use nu_protocol::engine::{Command, EngineState, Stack};
use nu_protocol::{
record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, PipelineData,
ShellError, Signature, Span, Type, Value,
};
```
This PR adds the `nu_engine::command_prelude` module which contains the
necessary and commonly used types to implement a `Command`:
```rust
// command_prelude.rs
pub use crate::CallExt;
pub use nu_protocol::{
ast::{Call, CellPath},
engine::{Command, EngineState, Stack},
record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, IntoSpanned,
PipelineData, Record, ShellError, Signature, Span, Spanned, SyntaxShape, Type, Value,
};
```
This should reduce the boilerplate needed to implement a command and
also gives us a place to track the breadth of the `Command` API. I tried
to be conservative with what went into the prelude modules, since it
might be hard/annoying to remove items from the prelude in the future.
Let me know if something should be included or excluded.
# Description
Boxes `Record` inside `Value` to reduce memory usage; `Value` goes from
72 to 56 bytes after this change.
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
This makes `LabeledError` much more capable of representing close to
everything a `miette::Diagnostic` can, including `ShellError`, and
allows plugins to generate multiple error spans, codes, help, etc.
`LabeledError` is now embeddable within `ShellError` as a transparent
variant.
This could also be used to improve `error make` and `try/catch` to
reflect `LabeledError` exactly in the future.
Also cleaned up some errors in existing plugins.
# User-Facing Changes
Breaking change for plugins. Nicer errors for users.
# Description
This makes many of the larger objects in `EngineState` into `Arc`, and
uses `Arc::make_mut` to do clone-on-write if the reference is not
unique. This is generally very cheap, giving us the best of both worlds:
we can mutate without cloning if we have an exclusive reference, and
clone only if we don't.
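A small standalone demonstration of the `Arc::make_mut` behavior this relies on:
```rust
use std::sync::Arc;

fn main() {
    let mut config = Arc::new(vec!["a".to_string()]);

    // Unique reference: no clone happens, the Vec is mutated in place.
    Arc::make_mut(&mut config).push("b".into());

    // Shared reference: make_mut clones the Vec first, leaving `snapshot` untouched.
    let snapshot = Arc::clone(&config);
    Arc::make_mut(&mut config).push("c".into());

    assert_eq!(*snapshot, ["a", "b"]);
    assert_eq!(*config, ["a", "b", "c"]);
}
```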
This started as more of a curiosity for me after remembering that
`Arc::make_mut` exists and can make using `Arc` for mostly immutable
data that sometimes needs to be changed very convenient, and also after
hearing someone complain about memory usage on Discord - this is a
somewhat significant win for that.
The exact objects that were wrapped in `Arc`:
- `files`, `file_contents` - the strings and byte buffers
- `decls` - the whole `Vec`, but mostly to avoid lots of individual
`malloc()` calls on Clone rather than for memory usage
- `blocks` - the blocks themselves, rather than the outer Vec
- `modules` - the modules themselves, rather than the outer Vec
- `env_vars`, `previous_env_vars` - the entire maps
- `config`
The changes required were relatively minimal, but this is a breaking API
change. In particular, blocks are added as Arcs, to allow the parser
cache functionality to work.
With my normal nu config, running on Linux, this saves me about 15 MiB
of process memory usage when running interactively (65 MiB → 50 MiB).
This also makes quick command executions cheaper, particularly since
every REPL loop now involves a clone of the engine state so that we can
recover from a panic. It also reduces memory usage where engine state
needs to be cloned and sent to another thread or kept within an
iterator.
# User-Facing Changes
Shouldn't be any, since it's all internal stuff, but it does change some
public interfaces, so it's a breaking change.
[Context on
Discord](https://discord.com/channels/601130461678272522/855947301380947968/1219425984990806207)
# Description
- Rename `CustomValue::value_string()` to `type_name()` to reflect its
usage better.
- Change print behavior to always call `to_base_value()` first, to give
the custom value better control over the output.
- Change `describe --detailed` to show the type name as the subtype,
rather than trying to describe the base value.
- Change custom `Type` to use `type_name()` rather than `typetag_name()`
to make things like `PluginCustomValue` more transparent
One question: should `describe --detailed` still include a description
of the base value somewhere? I'm torn on it, it seems possibly useful
for some things (maybe sqlite databases?), but having `describe -d` not
include the custom type name anywhere felt weird. Another option would
be to add another method to `CustomValue` for info to be displayed in
`describe`, so that it can be more type-specific?
# User-Facing Changes
Everything above has implications for printing and `describe` on custom
values
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Fixes #12057 where it was pointed out that `export use` takes an
**optional** `members` positional argument whereas `use` takes a
**rest** `members` argument.
# Description
The PR overhauls how IO redirection is handled, allowing more explicit
and fine-grained control over `stdout` and `stderr` output as well as more
efficient IO and piping.
To summarize the changes in this PR:
- Added a new `IoStream` type to indicate the intended destination for a
pipeline element's `stdout` and `stderr`.
- The `stdout` and `stderr` `IoStream`s are stored in the `Stack` to
avoid adding 6 additional arguments to every eval function and
`Command::run`. The `stdout` and `stderr` streams can be temporarily
overwritten through functions on `Stack`, and these functions return
a guard that restores the original `stdout` and `stderr` when dropped
(see the sketch after this list).
- In the AST, redirections are now directly part of a `PipelineElement`
as an `Option<Redirection>` field instead of having multiple different
`PipelineElement` enum variants for each kind of redirection. This
required changes to the parser, mainly in `lite_parser.rs`.
- `Command`s can also set an `IoStream` override/redirection which will
apply to the previous command in the pipeline. This is used, for
example, in `ignore` to allow the previous external command to have its
stdout redirected to `Stdio::null()` at spawn time. In contrast, the
current implementation has to create an os pipe and manually consume the
output on nushell's side. File and pipe redirections (`o>`, `e>`, `e>|`,
etc.) have precedence over overrides from commands.
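As a generic illustration of that guard pattern (not nushell's actual `Stack` API), here is a sketch where the override is undone automatically when the guard is dropped:
```rust
// Illustrative only: a "stack" whose stdout destination can be temporarily overridden.
struct Stack {
    stdout: &'static str,
}

struct RedirectGuard<'a> {
    stack: &'a mut Stack,
    saved: &'static str,
}

impl Stack {
    // Temporarily point stdout somewhere else; returns a guard that undoes it on drop.
    fn redirect_stdout(&mut self, dest: &'static str) -> RedirectGuard<'_> {
        let saved = std::mem::replace(&mut self.stdout, dest);
        RedirectGuard { stack: self, saved }
    }
}

impl Drop for RedirectGuard<'_> {
    fn drop(&mut self) {
        // Restore the previous destination.
        self.stack.stdout = self.saved;
    }
}

fn main() {
    let mut stack = Stack { stdout: "terminal" };
    {
        let _guard = stack.redirect_stdout("pipe");
        // ... evaluate the pipeline element with stdout going to "pipe" here ...
    } // guard dropped: stdout is restored
    assert_eq!(stack.stdout, "terminal");
}
```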
This PR improves piping and IO speed, partially addressing #10763. Using
the `throughput` command from that issue, this PR gives the following
speedup on my setup for the commands below:
| Command | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| --------------------------- | -------------:| ------------:| -----------:|
| `throughput o> /dev/null` | 1169 | 52938 | 54305 |
| `throughput \| ignore` | 840 | 55438 | N/A |
| `throughput \| null` | Error | 53617 | N/A |
| `throughput \| rg 'x'` | 1165 | 3049 | 3736 |
| `(throughput) \| rg 'x'` | 810 | 3085 | 3815 |
(Numbers above are the median samples for throughput)
This PR also paves the way to refactor our `ExternalStream` handling in
the various commands. For example, this PR already fixes the following
code:
```nushell
^sh -c 'echo -n "hello "; sleep 0; echo "world"' | find "hello world"
```
This returns an empty list on 0.90.1 and returns a highlighted "hello
world" on this PR.
Since the `stdout` and `stderr` `IoStream`s are available to commands
when they are run, this unlocks the potential for more convenient
behavior. E.g., the `find` command can disable its ansi highlighting if
it detects that the output `IoStream` is not the terminal. Knowing the
output streams will also allow background job output to be redirected
more easily and efficiently.
# User-Facing Changes
- External commands returned from closures will be collected (in most
cases):
```nushell
1..2 | each {|_| nu -c "print a" }
```
This gives `["a", "a"]` on this PR, whereas this used to print "a\na\n"
and then return an empty list.
```nushell
1..2 | each {|_| nu -c "print -e a" }
```
This gives `["", ""]` and prints "a\na\n" to stderr, whereas this used
to return an empty list and print "a\na\n" to stderr.
- Trailing newlines are always trimmed for external commands when
piping into internal commands or collecting the output as a value. (Failure to
decode the output as utf-8 will keep the trailing newline for the last
binary value.) In the current nushell version, the following three code
snippets differ only in parenthesis placement, but they all also have
different outputs:
1. `1..2 | each { ^echo a }`
```
a
a
╭────────────╮
│ empty list │
╰────────────╯
```
2. `1..2 | each { (^echo a) }`
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
3. `1..2 | (each { ^echo a })`
```
╭───┬───╮
│ 0 │ a │
│ │ │
│ 1 │ a │
│ │ │
╰───┴───╯
```
But in this PR, the above snippets will all have the same output:
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
- All existing flags on `run-external` are now deprecated.
- File redirections now apply to all commands inside a code block:
```nushell
(nu -c "print -e a"; nu -c "print -e b") e> test.out
```
This gives "a\nb\n" in `test.out` and prints nothing. The same result
would happen when printing to stdout and using an `o>` file redirection.
- External command output will (almost) never be ignored, and ignoring
output must be explicit now:
```nushell
(^echo a; ^echo b)
```
This prints "a\nb\n", whereas this used to print only "b\n". This only
applies to external commands; values and internal commands not in return
position will not print anything (e.g., `(echo a; echo b)` still only
prints "b").
- `complete` now always captures stderr (`do` is not necessary).
# After Submitting
The language guide and other documentation will need to be updated.
# Description
Fixes: #11287, #11318
It's implemented by porting the similar logic from `eval_call`. I've tried
to reduce duplicate code, but it seems hard to do without using
macros.
3ee2fc60f9/crates/nu-engine/src/eval.rs (L60-L130)
It only works for the `do` command.
# User-Facing Changes
## Closure supports optional parameter
```nushell
let code = {|x?| print ($x | default "i'm the default")}
do $code
```
Previously it raised an error; after this change, it prints `i'm the
default`.
## Closure supports type checking
```nushell
let code = {|x: int| echo $x}
do $code "aa"
```
After this change, it will raise an error with a message: `can't convert
string to int`
# Tests + Formatting
Done
# After Submitting
NaN
# Description
The intended effect of the `extra` feature has been undermined by
introducing the full builds on our release pages and having more
activity on some of the extra commands.
To simplify the feature matrix, let's get rid of it and focus our effort
on either truly refining a command to well-specified behavior or
discarding it entirely from the `nu` binary and moving it into plugins.
## Details
- Remove `--features extra` from CI
- Don't explicitly name `extra` in full build wf
- Remove feature extra from build-help scripts
- Update README in `nu-cmd-extra`
- Remove feature `extra`
- Fix previously dead `format pattern` tests
- Relax signature of `to html`
- Fix/ignore `html::test_no_color_flag`
- Remove dead features from `version`
- Refine `to html` type signature
# User-Facing Changes
The commands that were previously only available when building with
`--features extra` will now be available to everyone. This increases the
number of dependencies slightly but has a limited impact on the overall
binary size.
# Tests + Formatting
Some tests that were left in `nu-command` during cratification were dead
because the feature was not passed to `nu-command` and only to
`nu-cmd-lang` for feature-flag mention in `version`.
Those tests have now been either fixed or ignored in one case.
# After Submitting
There may be places in the documentation where we point to `--features
extra` that will now be moot (apart from the generated command help)
# Description
This PR uses the new plugin protocol to intelligently keep plugin
processes running in the background for further plugin calls.
Running plugins can be seen by running the new `plugin list` command,
and stopped by running the new `plugin stop` command.
This is an enhancement for the performance of plugins, as starting new
plugin processes has overhead, especially for plugins in languages that
take a significant amount of time on startup. It also enables plugins
that have persistent state between commands, making the migration of
features like dataframes and `stor` to plugins possible.
Plugins are automatically stopped by the new plugin garbage collector,
configurable with `$env.config.plugin_gc`:
```nushell
$env.config.plugin_gc = {
# Configuration for plugin garbage collection
default: {
enabled: true # true to enable stopping of inactive plugins
stop_after: 10sec # how long to wait after a plugin is inactive to stop it
}
plugins: {
# alternate configuration for specific plugins, by name, for example:
#
# gstat: {
# enabled: false
# }
}
}
```
If garbage collection is enabled, plugins will be stopped after
`stop_after` passes after they were last active. Plugins are counted as
inactive if they have no running plugin calls. Reading the stream from
the response of a plugin call is still considered activity; however, if
a plugin holds on to a stream after the call has ended without an active
streaming response, the plugin is not counted as active even if it is
still reading that stream. Plugins can explicitly disable the GC as appropriate with
`engine.set_gc_disabled(true)`.
The `version` command now lists plugin names rather than plugin
commands. The list of plugin commands is accessible via `plugin list`.
Recommend doing this together with #12029, because it will likely force
plugin developers to do the right thing with mutability and lead to less
unexpected behavior when running plugins nested / in parallel.
# User-Facing Changes
- new command: `plugin list`
- new command: `plugin stop`
- changed command: `version` (now lists plugin names, rather than
commands)
- new config: `$env.config.plugin_gc`
- Plugins will keep running and be reused, at least for the configured
GC period
- Plugins that used mutable state in weird ways like `inc` did might
misbehave until fixed
- Plugins can disable GC if they need to
- Had to change plugin signature to accept `&EngineInterface` so that
the GC disable feature works. #12029 does this anyway, and I'm expecting
(resolvable) conflicts with that
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
Because there is some specific OS behavior required for plugins to not
respond to Ctrl-C directly, I've developed against and tested on both
Linux and Windows to ensure that works properly.
# After Submitting
I think this probably needs to be in the book somewhere
# Description
This PR adds a new evaluator path with callbacks to a mutable trait
object implementing a Debugger trait. The trait object can do anything,
e.g., profiling, code coverage, step debugging. Currently,
entering/leaving a block and a pipeline element is marked with
callbacks, but more callbacks can be added as necessary. Not all
callbacks need to be used by all debuggers; unused ones are simply empty
calls. A simple profiler is implemented as a proof of concept.
The debugging support is implemented by making `eval_xxx()` functions
generic over whether we're debugging or not. This has zero
computational overhead, but makes the binary slightly larger (see
benchmarks below). `eval_xxx()` variants called from commands (like
`eval_block_with_early_return()` in `each`) are chosen with dynamic
dispatch for two reasons: to avoid growing the binary size by duplicating
the code of many commands, and because it isn't possible anyway, as it
would make `Command` trait objects object-unsafe.
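As a condensed illustration of that static-dispatch idea (the names follow the PR's description, but the bodies are made up):
```rust
// The debugger hooks compile away entirely when WithoutDebug is chosen.
trait DebugContext {
    fn enter_block(_name: &str) {}
    fn leave_block(_name: &str) {}
}

struct WithoutDebug;
impl DebugContext for WithoutDebug {}

struct WithProfiler;
impl DebugContext for WithProfiler {
    fn enter_block(name: &str) {
        println!("enter {name}");
    }
    fn leave_block(name: &str) {
        println!("leave {name}");
    }
}

fn eval_block<D: DebugContext>(name: &str) -> i64 {
    D::enter_block(name);
    let result = 42; // ... actual evaluation would happen here ...
    D::leave_block(name);
    result
}

fn main() {
    // Hot path: the callbacks are empty and optimized out.
    let _ = eval_block::<WithoutDebug>("main");
    // Debug path: the same code, now instrumented.
    let _ = eval_block::<WithProfiler>("main");
}
```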
In the future, I hope it will be possible to allow plugin callbacks such
that users would be able to implement their profiler plugins instead of
having to recompile Nushell.
[DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be
interesting to explore.
Try `help debug profile`.
## Screenshots
Basic output:
![profiler_new](https://github.com/nushell/nushell/assets/25571562/418b9df0-b659-4dcb-b023-2d5fcef2c865)
To profile with more granularity, increase the profiler depth (you'll
see that repeated `is-windows` calls take a large chunk of total time,
making it a good candidate for optimizing):
![profiler_new_m3](https://github.com/nushell/nushell/assets/25571562/636d756d-5d56-460c-a372-14716f65f37f)
## Benchmarks
### Binary size
Binary size increase vs. main: **+40360 bytes**. _(Both built with
`--release --features=extra,dataframe`.)_
### Time
```nushell
# bench_debug.nu
use std bench
let test = {
1..100
| each {
ls | each {|row| $row.name | str length }
}
| flatten
| math avg
}
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```
```nushell
# bench_nodebug.nu
use std bench
let test = {
1..100
| each {
ls | each {|row| $row.name | str length }
}
| flatten
| math avg
}
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower
than `cargo run --release -- bench_nodebug.nu` due to the collection
overhead + gathering the report. This is expected. When gathering more
stuff, the overhead is obviously higher.
Comparing `cargo run --release -- bench_nodebug.nu` vs. `nu bench_nodebug.nu`,
I didn't measure any difference. Both benchmarks report times between 97
and 103 ms randomly, without one being consistently higher than the
other. This suggests that at least in this particular case, when not
running any debugger, there is no runtime overhead.
## API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces
you to specify whether you use the debugger. You can resolve it in two
ways:
* Use a provided helper that will figure it out for you. If you wanted
to use `eval_block(&engine_state, ...)`, call `let eval_block =
get_eval_block(&engine_state); eval_block(&engine_state, ...)`
* If you know you're in an evaluation path that doesn't need debugger
support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is
the case of hooks, for example).
I tried to add more explanation in the docstring of `debugger_trait.rs`.
## TODO
- [x] Better profiler output to reduce spam of iterative commands like
`each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all
columns
# User-Facing Changes
Hopefully none.
# Tests + Formatting
# After Submitting
# Description
Change the `ignore` command to use `drain()` instead of collecting a
value.
This saves memory usage when piping a lot of output to `ignore`. There's
no reason to keep the output in memory if it's going to be discarded
anyway.
# User-Facing Changes
Probably none
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Replace panics with errors in thread spawning.
Also adds an `IntoSpanned` trait for easily constructing `Spanned`, and an
implementation of `From<Spanned<std::io::Error>>` for `ShellError`,
which is used to provide context for the error wherever there was a span
conveniently available. In general this should make it more convenient
to do the right thing with `std::io::Error` and always add a span to it
when it's possible to do so.
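A minimal sketch of the `IntoSpanned` idea (illustrative, not the exact nushell definitions):
```rust
#[derive(Debug, Clone, Copy)]
struct Span {
    start: usize,
    end: usize,
}

#[derive(Debug)]
struct Spanned<T> {
    item: T,
    span: Span,
}

// Blanket trait: any sized value can be paired with the span it originated from.
trait IntoSpanned: Sized {
    fn into_spanned(self, span: Span) -> Spanned<Self> {
        Spanned { item: self, span }
    }
}

impl<T> IntoSpanned for T {}

fn main() {
    let err = std::io::Error::new(std::io::ErrorKind::Other, "failed to spawn thread");
    let spanned = err.into_spanned(Span { start: 10, end: 20 });
    // A `From<Spanned<std::io::Error>> for ShellError` impl can then keep this span.
    println!("{:?} at {}..{}", spanned.item, spanned.span.start, spanned.span.end);
}
```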
# User-Facing Changes
Fewer panics!
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
As the title says, currently on latest main, nushell confuses users by
allowing implicit casting between glob and string:
```nushell
let x = "*.txt"
def glob-test [g: glob] { open $g }
glob-test $x
```
It always expands the glob even though `$x` is defined as a string.
This PR implements a solution from @kubouch:
> We could make it really strict and disallow all autocasting between
globs and strings because that's what's causing the "magic" confusion.
Then, modify all builtins that accept globs to accept oneof(glob,
string) and the rules would be that globs always expand and strings
never expand
# User-Facing Changes
After this PR, the user needs to use `into glob` when invoking
`glob-test` with a string variable:
```nushell
let x = "*.txt"
def glob-test [g: glob] { open $g }
glob-test ($x | into glob)
```
Otherwise, nushell will return an error.
```
3 │ glob-test $x
· ─┬
· ╰── can't convert string to glob
```
# Tests + Formatting
Done
# After Submitting
Nan
Avoid unnecessary allocations or larger iterator structs
- Turn static `Vec`s into arrays when possible
- Use `std::iter::once`/`empty` where applicable
- Use `bool::then_some` in `detect column` `.chain`
- Drop in the bucket: de-vec-ing tests
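Small generic illustrations of the patterns above (not the exact call sites):
```rust
fn main() {
    // A static array instead of a heap-allocated Vec:
    let keywords = ["if", "else", "match"];

    // std::iter::once / std::iter::empty avoid allocating just to chain a single element:
    let first = std::iter::once("header");
    let all: Vec<&str> = first.chain(keywords.iter().copied()).collect();

    // bool::then_some turns a condition into an Option without an if/else block:
    let wide = true;
    let extra_column = wide.then_some("extra");

    println!("{all:?} {extra_column:?}");
}
```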
# Description
This is a follow-up to
https://github.com/nushell/nushell/pull/11621#issuecomment-1937484322
Also Fixes: #11838
## About the code change
It applies the same logic we use when passing variables to external commands:
0487e9ffcb/crates/nu-command/src/system/run_external.rs (L162-L170)
That is: if the user inputs dynamic things (like variables, sub-expressions, or
string interpolation), it returns a quoted `NuPath`, so the user input
won't be globbed.
# User-Facing Changes
Given two input files: `a*c.txt`, `abc.txt`
* `let f = "a*c.txt"; rm $f` will remove one file: `a*c.txt`.
~* `let f = "a*c.txt"; rm --glob $f` will remove `a*c.txt` and
`abc.txt`~
* `let f: glob = "a*c.txt"; rm $f` will remove `a*c.txt` and `abc.txt`
## Rules about globbing with *variable*
Given two files: `a*c.txt`, `abc.txt`
| Cmd Type | example | Result |
| ----- | ------------------ | ------ |
| builtin | let f = "a*c.txt"; rm $f | remove `a*c.txt` |
| builtin | let f: glob = "a*c.txt"; rm $f | remove `a*c.txt` and `abc.txt` |
| builtin | let f = "a*c.txt"; rm ($f \| into glob) | remove `a*c.txt` and `abc.txt` |
| custom | def crm [f: glob] { rm $f }; let f = "a*c.txt"; crm $f | remove `a*c.txt` and `abc.txt` |
| custom | def crm [f: glob] { rm ($f \| into string) }; let f = "a*c.txt"; crm $f | remove `a*c.txt` |
| custom | def crm [f: string] { rm $f }; let f = "a*c.txt"; crm $f | remove `a*c.txt` |
| custom | def crm [f: string] { rm $f }; let f = "a*c.txt"; crm ($f \| into glob) | remove `a*c.txt` and `abc.txt` |
In general, if a variable is annotated with the `glob` type, nushell will
expand the glob pattern. Otherwise, we need to use `into glob` to expand
the glob pattern.
# Tests + Formatting
Done
# After Submitting
I think the `str glob-escape` command will no longer be required. We can
remove it.
# Description
This PR removes unused dependencies. The `cargo machete --with-metadata`
tool was used to determine what is unused, and then I recompiled. Putting
this up here to see what happens on macOS and Linux in CI and whether
anything breaks.
# User-Facing Changes
# Tests + Formatting
# After Submitting