# Description
When implementing a `Command`, one must also import all the types
present in the function signatures for `Command`. As a result, we often import
the same set of types in each command implementation file. E.g., something like
this:
```rust
use nu_protocol::ast::Call;
use nu_protocol::engine::{Command, EngineState, Stack};
use nu_protocol::{
    record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, PipelineData,
    ShellError, Signature, Span, Type, Value,
};
```
This PR adds the `nu_engine::command_prelude` module which contains the
necessary and commonly used types to implement a `Command`:
```rust
// command_prelude.rs
pub use crate::CallExt;
pub use nu_protocol::{
    ast::{Call, CellPath},
    engine::{Command, EngineState, Stack},
    record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, IntoSpanned,
    PipelineData, Record, ShellError, Signature, Span, Spanned, SyntaxShape, Type, Value,
};
```
This should reduce the boilerplate needed to implement a command and also give
us a place to track the breadth of the `Command` API. I tried to be
conservative about what went into the prelude module, since it might be hard or
annoying to remove items from the prelude in the future. Let me know if
something should be included or excluded.
# Description
This PR adds a new evaluator path with callbacks to a mutable trait
object implementing a `Debugger` trait. The trait object can do anything,
e.g., profiling, code coverage, step debugging. Currently,
entering/leaving a block and a pipeline element are marked with
callbacks, but more callbacks can be added as necessary. Not all
callbacks need to be used by all debuggers; unused ones are simply empty
calls. A simple profiler is implemented as a proof of concept.
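For orientation, the callback shape described above looks roughly like the sketch below; the method names are illustrative, not the exact trait added by this PR. Empty default methods mean a debugger only pays for the callbacks it actually overrides:
```rust
// Illustrative sketch only; the real trait and its arguments differ.
pub trait Debugger: Send {
    // Called when evaluation enters/leaves a block.
    fn enter_block(&mut self) {}
    fn leave_block(&mut self) {}

    // Called around each pipeline element.
    fn enter_element(&mut self) {}
    fn leave_element(&mut self) {}
}

// A profiler overrides only what it needs; the rest stay empty defaults.
struct BlockCounter {
    depth: usize,
}

impl Debugger for BlockCounter {
    fn enter_block(&mut self) {
        self.depth += 1;
    }
    fn leave_block(&mut self) {
        self.depth = self.depth.saturating_sub(1);
    }
}
```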
The debugging support is implemented by making the `eval_xxx()` functions
generic over whether we're debugging or not. This has zero
computational overhead, but makes the binary slightly larger (see
benchmarks below). `eval_xxx()` variants called from commands (like
`eval_block_with_early_return()` in `each`) are chosen via dynamic
dispatch for two reasons: to avoid growing the binary by duplicating the
code of many commands, and because a generic parameter there isn't possible
anyway, as it would make the `Command` trait not object-safe.
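The zero-overhead claim comes from monomorphization. A hedged sketch of the pattern is below; `WithoutDebug` appears in this PR's API notes, while `WithDebug` and `DebugContext` are illustrative names for the rest:
```rust
// Hedged sketch; not the actual nu-protocol/nu-engine definitions.
trait DebugContext {
    const ENABLED: bool;
}

struct WithoutDebug;
impl DebugContext for WithoutDebug {
    const ENABLED: bool = false;
}

struct WithDebug;
impl DebugContext for WithDebug {
    const ENABLED: bool = true;
}

// Each eval function is compiled once per context, so the non-debug
// instantiation contains no debugger branches at runtime.
fn eval_element<D: DebugContext>(/* engine state, stack, element, ... */) {
    if D::ENABLED {
        // debugger callback: entering a pipeline element
    }
    // ...normal evaluation...
    if D::ENABLED {
        // debugger callback: leaving a pipeline element
    }
}
```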
In the future, I hope it will be possible to allow plugin callbacks, so that
users could implement their own profilers as plugins instead of having to
recompile Nushell.
[DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be
interesting to explore.
Try `help debug profile`.
## Screenshots
Basic output:

To profile with more granularity, increase the profiler depth (you'll
see that repeated `is-windows` calls take a large chunk of total time,
making it a good candidate for optimizing):

## Benchmarks
### Binary size
Binary size increase vs. main: **+40360 bytes**. _(Both built with
`--release --features=extra,dataframe`.)_
### Time
```nushell
# bench_debug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```
```nushell
# bench_nodebug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower
than `cargo run --release -- bench_nodebug.nu` due to the collection
overhead and gathering the report. This is expected; when gathering more
data, the overhead is correspondingly higher.
Comparing `cargo run --release -- bench_nodebug.nu` against `nu bench_nodebug.nu`,
I didn't measure any difference: both report times between 97 and 103 ms,
without either being consistently higher than the other. This suggests that,
at least in this particular case, there is no runtime overhead when no
debugger is attached.
## API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces
you to specify whether you use the debugger. You can resolve it in two
ways:
* Use a provided helper that will figure it out for you. If you wanted
to use `eval_block(&engine_state, ...)`, call `let eval_block =
get_eval_block(&engine_state); eval_block(&engine_state, ...)`
* If you know you're in an evaluation path that doesn't need debugger
support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is
the case of hooks, for example).
I tried to add more explanation in the docstring of `debugger_trait.rs`.
## TODO
- [x] Better profiler output to reduce spam of iterative commands like
`each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all
columns
# User-Facing Changes
Hopefully none.
# Tests + Formatting
# After Submitting
# Description
This splits off `scope` from `$nu`, creating a set of `scope` commands
for the various types of scope you might be interested in.
This also simplifies the `$nu` variable a bit.
# User-Facing Changes
This changes `$nu` to be a bit simpler and introduces a set of `scope`
subcommands.
# Tests + Formatting
# After Submitting
This is an attempt to implement a new `Value::LazyRecord` variant for
performance reasons.
`LazyRecord` is like a regular `Record`, but it's possible to access
individual columns without evaluating other columns. I've implemented
`LazyRecord` for the special `$nu` variable; accessing `$nu` is
relatively slow because of all the information in `scope`, and [`$nu`
accounts for about 2/3 of Nu's startup time on
Linux](https://github.com/nushell/nushell/issues/6677#issuecomment-1364618122).
### Benchmarks
I ran some benchmarks on my desktop (Linux, 12900K) and the results are
very pleasing.
Nu's time to start up and run a command (`cargo build --release;
hyperfine 'target/release/nu -c "echo \"Hello, world!\""' --shell=none
--warmup 10`) goes from **8.8ms to 3.2ms, about 2.8x faster**.
Tests are also much faster! Running `cargo nextest` (with our very slow
`proptest` tests disabled) goes from **7.2s to 4.4s (1.6x faster)**,
because most tests involve launching a new instance of Nu.
### Design (updated)
I've added a new `LazyRecord` trait and added a `Value` variant wrapping
those trait objects, much like `CustomValue`. `LazyRecord`
implementations must implement these 2 functions:
```rust
// All column names
fn column_names(&self) -> Vec<&'static str>;
// Get 1 specific column value
fn get_column_value(&self, column: &str) -> Result<Value, ShellError>;
```
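For illustration, an implementation could look roughly like the following. It is a hedged sketch only: the struct, its columns, and the error handling are illustrative and not the actual `$nu` implementation; it assumes the `LazyRecord` trait above and nu-protocol's `Value`, `Span`, and `ShellError` are in scope.
```rust
// Illustrative only: a lazy record whose columns are computed on demand.
struct LazyExample {
    span: Span,
}

impl LazyRecord for LazyExample {
    fn column_names(&self) -> Vec<&'static str> {
        vec!["pid", "cwd"]
    }

    fn get_column_value(&self, column: &str) -> Result<Value, ShellError> {
        // Only the requested column is computed; the others are never touched.
        match column {
            "pid" => Ok(Value::int(std::process::id() as i64, self.span)),
            "cwd" => Ok(Value::string(
                std::env::current_dir()
                    .map(|p| p.to_string_lossy().into_owned())
                    .unwrap_or_default(),
                self.span,
            )),
            // A real implementation would return a descriptive ShellError here.
            _ => Ok(Value::nothing(self.span)),
        }
    }
}
```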
### Serializability
`Value` variants must implement `Serialize` and `Deserialize`, which poses some problems because I want to use unserializable things like `EngineState` in `LazyRecord`s. To work around this, I basically lie to the type system (sketched after the list below):
1. Add `#[typetag::serde(tag = "type")]` to `LazyRecord` to make it serializable
2. Any unserializable fields in `LazyRecord` implementations get marked with `#[serde(skip)]`
3. At the point where a `LazyRecord` normally would get serialized and sent to a plugin, I instead collect it into a regular `Value::Record` (which can be serialized)
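Roughly, the workaround looks like the sketch below. The `#[typetag::serde(tag = "type")]` and `#[serde(skip)]` attributes are the mechanism from the list above; the type and field names are illustrative, not the actual implementation:
```rust
use nu_protocol::{engine::EngineState, ShellError, Span, Value};
use serde::{Deserialize, Serialize};

// 1. The trait is made (de)serializable through typetag's registry.
#[typetag::serde(tag = "type")]
pub trait LazyRecord {
    fn column_names(&self) -> Vec<&'static str>;
    fn get_column_value(&self, column: &str) -> Result<Value, ShellError>;
}

// 2. Unserializable fields are skipped; after a serde round-trip they simply
//    come back as None and must not be relied upon.
#[derive(Serialize, Deserialize)]
struct NuVariable {
    span: Span,
    #[serde(skip)]
    engine_state: Option<EngineState>,
}

// 3. Before a value actually crosses a serialization boundary (e.g. when sent
//    to a plugin), the lazy record is collected into a plain `Value::Record`.
```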
# Description
This adds `break`, `continue`, `return`, and `loop`.
* `break` - breaks out of a loop
* `continue` - continues a loop at the next iteration
* `return` - early return from a function call
* `loop` - loop forever (until the loop hits a break)
Examples:
```
for i in 1..10 {
    if $i == 5 {
        continue
    }
    print $i
}
```
```
for i in 1..10 {
    if $i == 5 {
        break
    }
    print $i
}
```
```
def foo [x] {
    if true {
        return 2
    }
    $x
}
foo 100
```
```
loop { print "hello, forever" }
```
```
[1, 2, 3, 4, 5] | each {|x|
    if $x > 3 { break }
    $x
}
```
# User-Facing Changes
Adds the above commands.
# Tests + Formatting
# After Submitting
* query command with json, web, xml
* query xml now working
* clippy
* comment out web tests
* Initial work on query web
For now we can query everything except tables
* Support for querying tables
Now we can query multiple tables just like before, now the only thing
missing is the test coverage
* finish off
* comment out web test
Co-authored-by: Luccas Mateus de Medeiros Gomes <luccasmmg@gmail.com>
* Use only $nu.env.PWD for getting current directory
Because setting and reading to/from `std::env` changes the global state,
which is problematic if we call `cd` from multiple threads (e.g., in a
`par-each` block).
With this change, when engine-q starts, it will either inherit existing
PWD env var, or create a new one from `std::env::current_dir()`.
Otherwise, everything that needs the current directory will get it from
`$nu.env.PWD`. Each spawned external command will get its current
directory per-process which should be thread-safe.
One thing left to do is to patch nu-path for this as well since it uses
`std::env::current_dir()` in its expansions.
* Rename nu-path functions
*_with is now *_relative, which should be more descriptive and frees
"with" for use in a followup commit.
* Clone stack every each iter; Fix some commands
Cloning the stack each iteration of `each` makes sure we're not reusing
PWD between iterations.
Some fixes in commands to make them use the new PWD.
* Post-rebase cleanup, fmt, clippy
* Change back _relative to _with in nu-path funcs
Didn't use the idea I had for the new "_with".
* Remove leftover current_dir from rebase
* Add cwd sync at merge_delta()
This makes sure the parser and completer always have up-to-date cwd.
* Always pass absolute path to glob in ls
* Do not allow PWD a relative path; Allow recovery
Makes it possible to recover PWD by proceeding with the REPL cycle.
* Clone stack in each also for byte/string stream
* (WIP) Start moving env variables to engine state
* (WIP) Move env vars to engine state (ugly)
Quick and dirty code.
* (WIP) Remove unused mut and args; Fmt
* (WIP) Fix dataframe tests
* (WIP) Fix missing args after rebase
* (WIP) Clone only env vars, not the whole stack
* (WIP) Add env var clone to `for` loop as well
* Minor edits
* Refactor merge_delta() to include stack merging.
Less error-prone than doing it manually.
* Clone env for each `update` command iteration
* Mark env var hidden only when found in eng. state
* Fix clippy warnings
* Add TODO about env var reading
* Do not clone empty environment in loops
* Remove extra cwd collection
* Split current_dir() into str and path; Fix autocd
* Make completions respect PWD env var
* get_columns is working in the columns command
* the new location of the get_columns method is nu-protocol/src/column.rs
* reference the new location of the get_columns method
* move get_columns to nu-engine
* Proof of concept treating env vars as Values
* Refactor env var collection and method name
* Remove unnecessary pub
* Move env translations into a new file
* Fix LS_COLORS to support any Value
* Fix spans during env var translation
* Add span to env var in cd
* Improve error diagnostics
* Fix non-string env vars failing string conversion
* Make PROMPT_COMMAND a Block instead of String
* Record host env vars to a fake file
This will give spans to env vars that would otherwise be without one.
Makes errors less confusing.
* Add 'env' command to list env vars
It will list also their values translated to strings
* Sort env command by name; Add env var type
* Remove obsolete test
* Allow environment variables to be hidden
This change allows environment variables in Nushell to have a value of
`Nothing`, which can be set by the user by passing `$nothing` to
`let-env` and friends.
Environment variables with a value of Nothing behave as if they are not
set at all. This allows a user to shadow the value of an environment
variable in a parent scope, effectively removing it from their current
scope. This was not possible before, because a scope can not affect its
parent scopes.
This is a workaround for issues like #3920.
Additionally, this allows a user to simultaneously set, change and
remove multiple environment variables via `load-env`. Any environment
variables set to $nothing will be hidden and thus act as if they are
removed. This simplifies working with virtual environments, which rely
on setting multiple environment variables, including PATH, to specific
values, and remove/change them on deactivation.
One surprising behavior is that an environment variable set to $nothing
will act as if it is not set when querying it (via $nu.env.X), but it is
still possible to remove it entirely via `unlet-env`. If the same
environment variable is present in the parent scope, the value in the
parent scope will be visible to the user. This might be surprising
behavior to users who are not familiar with the implementation details.
An additional corner case is that the shorthand form of `with-env` does
not work with this feature. Using `X=$nothing` will set $nu.env.X to the
string "$nothing". The long form works as expected: `with-env [X
$nothing] {...}`.
* Remove unused import
* Allow all primitives to be converted to strings
In Nu we have variables (e.g., `$var-name`) and these contain `Value` types.
This means we can bind any structured data to variables, and column path syntax
(e.g., `$variable.path.to`) gives us flexibility for "querying" said structures.
Here we offer completions for these. For example, in a Nushell session the
variable `$nu` contains environment values among other things. If we wanted to
print some environment variable (say the var `SHELL`) we do:
```
> echo $nu.env.SHELL
```
With completions we can now do `echo $nu.env.S[TAB]` and get suggestions
that start at the column path `$nu.env` with vars starting with the letter `S`;
in this case `SHELL` appears in the suggestions.
* Revert "History, more test coverage improvements, and refactorings. (#3217)"
This reverts commit 8fc8fc89aa.
* Add tests
* Refactor .nu-env
* Change logic of Config write to logic of read()
* Fix reload always appends to old vars
* Fix reload always takes last_modified of global config
* Add reload_config in evaluation context
* Reload config after writing to it in cfg set / cfg set_into
* Add --no-history to cli options
* Use --no-history in tests
* Add comment about maybe_print_errors
* Get ctrl_exit var from context.global_config
* Use context.global_config in command "config"
* Add Readme in engine how env vars are now handled
* Update docs from autoenv command
* Move history_path from engine to nu_data
* Move load history out of if
* No let before return
* Add import for indexmap
Improvements overall to Nu. Among the changes here, we can also be more confident about incorporating `3041`. End-to-end tests for checking that envs are properly exported to externals are not added here (since they're in the other PR)
A few things added in this PR (probably forgetting some too)
* no writes happen to history during test runs.
* environment syncing end to end coverage added.
* clean up / refactorings few areas.
* testing API for finer control (can write tests passing more than one pipeline)
* can pass environment variables in tests that nu will inherit when running.
* No longer needed.
* no longer under a module. No need to use super.
* Move run_script to engine
* Add which dep and feature to engine
* Change unwrap to expect
* Add wasm specification
* Remove which from default, add specification correctly
* Add nu-platform-specifics
* Move is_external_cmd to platform_specifics
* Add is_external_cmd to host and use it instead of nu_platform directly
* Clean up if else logic in is_external_cmd
* Bump nu-platform-specifics version
* Pass context to print_err
* Commit cargo.lock
* Move print functions to own module inside nu-engine
* Hypocratic change to run windows-nightly again
* Add import for Ordering
* Move printing of error to host
* Move platform specific which functionality to basic host
* Allow no use of cmd_name
* Fix windows compile issue
* Split help message into brief and full help
Demonstrate on ansi command
Brief help is printed when running `help commands` so it doesn't clutter
the table. Full help is printed when a normal help message is requested
(e.g., `help ansi`, `ansi --help`, etc.).
* Split long command descriptions
Some are not split, just edited to be shorter.
* Capitalize the usage of all commands
* Make sure every usage ends with dot
* Fix random typo
* move basic_shell_manager to nu-engine
* move basic_evaluation_context to nu-engine
* fix failing test in the `which` feature (commands/classified/external.rs)
We split off the evaluation engine part of nu-cli into its own crate. This helps improve build times for nu-cli by 17% in my tests. It also helps us see a bit better what's the core engine portion vs the part specific to the interactive CLI piece.
There's more that can be done here, but I think it's a good start in the right direction.