Mistakes have been made. I forgot about a bunch of `todo`s in the helper
functions. So, this PR replaces them with proper errors. It also adds
tests for parse-time evaluation, because one `todo` I missed was in a
`run_const` function.
Based on the discussion in #13419.
## Description
Reworks the `decode`/`encode` commands by adding/changing the following
bases:
- `base32`
- `base32hex`
- `hex`
- `new-base64`
The `hex` base is compatible with the previous version of `hex` out of
the box (it only adds more flags). `base64` isn't, so the PR adds a new
version and deprecates the old one.
All commands have `string -> binary` signature for decoding and `string
| binary -> string` signature for encoding. A few `base64` encodings,
which are not a part of the
[RFC4648](https://datatracker.ietf.org/doc/html/rfc4648#section-6), have
been dropped.
## Example usage
```Nushell
~/fork/nushell> "string" | encode base32 | decode base32 | decode
string
```
```Nushell
~/fork/nushell> "ORSXG5A=" | decode base32
# `decode` always returns a binary value
Length: 4 (0x4) bytes | printable whitespace ascii_other non_ascii
00000000: 74 65 73 74 test
```
## User-Facing Changes
- New commands: `encode/decode base32/base32hex`.
- `encode hex` gets a `--lower` flag.
- `encode/decode base64` deprecated in favor of `encode/decode
new-base64`.
# Description
The meaning of the word usage is specific to describing how a command
function is *used* and not a synonym for general description. Usage can
be used to describe the SYNOPSIS or EXAMPLES sections of a man page
where the permitted argument combinations are shown or example *uses*
are given.
Let's not confuse people and call it what it is: a description.
Our `help` command already creates its own *Usage* section based on the
available arguments and doesn't refer to the description with usage.
# User-Facing Changes
`help commands` and `scope commands` will now use `description` or
`extra_description`.
`usage` -> `description`
`extra_usage` -> `extra_description`
Breaking change in the plugin protocol: in the signature record
communicated with the engine,
`usage` -> `description`
`extra_usage` -> `extra_description`
The same rename also takes place for the methods on
`SimplePluginCommand` and `PluginCommand`
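For example, the renamed field now shows up when querying command metadata (a quick sketch):
```nushell
scope commands | where name == "ls" | get 0.description
help commands | select name description | first 3
```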
# Tests + Formatting
- Updated plugin protocol specific changes
# After Submitting
- [ ] update plugin protocol doc
# Description
The previous behaviour of `into record` on lists was to create a new
record with each list index as the key. This was not very useful for
creating meaningful records, though, and most people would end up using
commands like `headers` or `transpose` to turn a list of keys and values
into a record.
This PR changes that to do what I think is the most ergonomic thing:
- A list of records is merged into one record.
- A list of pairs (two element lists) is folded into a record with the
first element of each pair being the key, and the second being the
value.
The former is just generally more useful than having to use `reduce`
with `merge` for such a common operation, and the latter is useful
because it means that `$a | zip $b | into record` *just works* in the
way that seems most obvious.
Example:
```nushell
[[foo bar] [baz quux]] | into record # => {foo: bar, baz: quux}
[{foo: bar} {baz: quux}] | into record # => {foo: bar, baz: quux}
[foo baz] | zip [bar quux] | into record # => {foo: bar, baz: quux}
```
The support for range input has been removed, as it would no longer
reflect the treatment of an equivalent list.
The following is equivalent to the old behavior, in case that's desired:
```
0.. | zip [a b c] | into record # => {0: a, 1: b, 2: c}
```
# User-Facing Changes
- `into record` changed as described above (breaking)
- `into record` no longer supports range input (breaking)
# Tests + Formatting
Examples changed to match, everything works. Some usage in stdlib and
`nu_plugin_nu_example` had to be changed.
# After Submitting
- [ ] release notes (commands, breaking change)
# Description
Previously when nushell failed to parse the content type header, it
would emit an error instead of returning the response. Now it will fall
back to `text/plain` (which, in turn, will trigger type detection based
on file extension).
May fix nushell/nushell#11927.
Refs:
https://discord.com/channels/601130461678272522/614593951969574961/1272895236489613366
Supersedes: #13609
# User-Facing Changes
It's now possible to fetch content even if the server returns an invalid
content type header. Users may need to parse the response manually, but
it's still better than not getting the response at all.
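For example (a sketch; the URL is illustrative), a response served with a broken
`Content-Type` header now comes back as text and can still be parsed by hand:
```nushell
http get https://example.com/endpoint-with-bad-content-type | from json
```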
# Tests + Formatting
Added a test for the new behaviour.
# After Submitting
# Description
Fixes: #13479
# User-Facing Changes
Given the following setup:
```
cd /tmp
touch src_file.txt
ln -s src_file.txt link1
```
### Before
```
ls -lf link1 | get target.0 # It outputs src_file.txt
```
### After
```
ls -lf link1 | get target.0 # It outputs /tmp/src_file.txt
```
# Tests + Formatting
Added a test for the change
# Description
Something I meant to add a long time ago. We currently don't have a
convenient way to print raw binary data intentionally. You can pipe it
through `cat` to turn it into an unknown stream, or write it to a file
and read it again, but we can't really just e.g. generate msgpack and
write it to stdout without this. For example:
```nushell
[abc def] | to msgpack | print --raw
```
This is useful for nushell scripts that will be piped into something
else. It also means that `nu_plugin_nu_example` probably doesn't need to
do this anymore, but I haven't adjusted it yet:
```nushell
def tell_nushell_encoding [] {
    print -n "\u{0004}json"
}
```
This happens to work because 0x04 is a valid UTF-8 character, but it
wouldn't be possible if it were something above 0x80.
`--raw` also formats other things without `table`, I figured the two
things kind of go together. The output is kind of like `to text`.
Debatable whether that should share the same flag, but it was easier
that way and seemed reasonable.
# User-Facing Changes
- `print` new flag: `--raw`
# Tests + Formatting
Added tests.
# After Submitting
- [ ] release notes (command modified)
This PR closes [Issue
#13482](https://github.com/nushell/nushell/issues/13482)
# Description
This PR makes all of the math commands usable in constant expressions.
# User-Facing Changes
The math commands can now be used in constant expressions.
### Some Examples
```
> const MODE = [3 3 9 12 12 15] | math mode
> $MODE
╭───┬────╮
│ 0 │ 3 │
│ 1 │ 12 │
╰───┴────╯
> const LOG = [16 8 4] | math log 2
> $LOG
╭───┬──────╮
│ 0 │ 4.00 │
│ 1 │ 3.00 │
│ 2 │ 2.00 │
╰───┴──────╯
> const VAR = [1 3 5] | math variance
> $VAR
2.6666666666666665
```
# Tests + Formatting
Tests are added for all of the math commands to test their constant
behavior.
I mostly focused on the actual user experience, not the correctness of
the methods and algorithms.
# After Submitting
I don't think this change requires any additional documentation. Feel
free to correct me on this.
# Description
Attempt to guess the content type of a file when opening with --raw and
set it in the pipeline metadata.
<img width="644" alt="Screenshot 2024-08-02 at 11 30 10"
src="https://github.com/user-attachments/assets/071f0967-c4dd-405a-b8c8-f7aa073efa98">
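For example, a quick sketch (assuming the `metadata` command surfaces the new
field; the file name is illustrative):
```nushell
open --raw package.json | metadata
# the returned record should now contain content_type: application/json
```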
# User-Facing Changes
- Content of files can be directly piped into commands like `http post`
with the content type set appropriately when using `--raw`.
# Description
Part 4 of replacing std::path types with nu_path types added in
https://github.com/nushell/nushell/pull/13115. This PR migrates various
tests throughout the code base.
- **Doccomment style fixes**
- **Forgotten stuff in `nu-pretty-hex`**
- **Don't `for` around an `Option`**
- and more
I think the suggestions here are a net positive; some of the suggestions
moved into #13498 feel somewhat arbitrary. I also raised
https://github.com/rust-lang/rust-clippy/issues/13188, as the nightly
`byte_char_slices` lint would require either a global allow, a ton of
granular allows, or possibly confusing bytestring literals.
# Description
Fixes: #13260
When a user runs a command like this:
```nushell
$env.FOO = " New";
$env.BAZ = " New Err";
do -i {nu -n -c 'nu --testbin echo_env FOO; nu --testbin echo_env_stderr BAZ'} | save -a -r save_test_22/log.txt
```
the `save` command sinks the stderr output of the previous command. I
think it should go to stderr instead.
# User-Facing Changes
```nushell
$env.FOO = " New";
$env.BAZ = " New Err";
do -i {nu -n -c 'nu --testbin echo_env FOO; nu --testbin echo_env_stderr BAZ'} | save -a -r save_test_22/log.txt
```
The command will output ` New Err` to stderr.
# Tests + Formatting
Added 2 cases.
- **Suggested default impl for the new `*Stack`s**
- **Change a hashmap to make clippy happy**
- **Clone from fix**
- **Fix conditional unused in test**
- then **Bump rust toolchain**
# Description
This makes assignment operations and `const` behave the same way `let`
and `mut` do, absorbing the rest of the pipeline.
Changes the lexer to be able to recognize assignment operators as a
separate token, and then makes the lite parser continue to push spans
into the same command regardless of any redirections or pipes if an
assignment operator is encountered. Because the pipeline is no longer
split up by the lite parser at this point, it's trivial to just parse
the right hand side as if it were a subexpression not contained within
parentheses.
# User-Facing Changes
Big breaking change. These are all now possible:
```nushell
const path = 'a' | path join 'b'
mut x = 2
$x = random int
$x = [1 2 3] | math sum
$env.FOO = random chars
```
In the past, these would have led to (an attempt at) bare word string
parsing. So while `$env.FOO = bar` would have previously set the
environment variable `FOO` to the string `"bar"`, it now tries to run
the command named `bar`, hence the major breaking change.
However, this is desirable because it is very consistent - if you see
the `=`, you can just assume it absorbs everything else to the right of
it.
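To keep the old literal-string behavior, quote the value (a quick sketch):
```nushell
$env.FOO = "bar"   # assigns the literal string "bar"
$env.FOO = bar     # now tries to run a command named `bar`
```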
# Tests + Formatting
Added tests for the new behaviour. Adjusted some existing tests that
depended on the right hand side of assignments being parsed as
barewords.
# After Submitting
- [ ] release notes (breaking change!)
# Description
Closes: #12083, closes: #12084
# User-Facing Changes
It's a breaking change because we have switched the positions of
`<initial>` and `<closure>`; after the change, the initial value is
optional. So it's possible to do something like this:
```nushell
> let f = {|fib = [0, 1]| {out: $fib.0, next: [$fib.1, ($fib.0 + $fib.1)]} }
> generate $f | first 5
╭───┬───╮
│ 0 │ 0 │
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 2 │
│ 4 │ 3 │
╰───┴───╯
```
It will also raise an error if the user doesn't give an initial value and
the closure doesn't have a default parameter.
```nushell
❯ let f = {|fib| {out: $fib.0, next: [$fib.1, ($fib.0 + $fib.1)]} }
❯ generate $f
Error: × The initial value is missing
╭─[entry #5:1:1]
1 │ generate $f
· ────┬───
· ╰── Missing intial value
╰────
help: Provide <initial> value in generate, or assigning default value to closure parameter
```
# Tests + Formatting
Added some test cases.
---------
Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
# Description
Following from #13377, this PR refactors the related `window` command.
In the case where `window` should behave exactly like `chunks`, it now
reuses the same code that `chunks` does. Otherwise, the other cases have
been rewritten and have resulted in a performance increase:
| window size | stride | old time (ms) | new time (ms) |
| -----------:| ------:| -------------:| -------------:|
| 20 | 1 | 757 | 722 |
| 2 | 1 | 364 | 333 |
| 1 | 1 | 343 | 293 |
| 20 | 20 | 90 | 63 |
| 2 | 2 | 215 | 175 |
| 20 | 30 | 74 | 60 |
| 2 | 4 | 141 | 110 |
# User-Facing Changes
`window` will now error if the window size or stride is 0, which is
technically a breaking change.
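For reference, a quick sketch of `window` with a stride (values chosen for illustration):
```nushell
[1 2 3 4 5] | window 3 --stride 2
# => [[1 2 3] [3 4 5]]
```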
# Description
This grew quite a bit beyond its original scope, but I've tried to make
`$in` a bit more consistent and easier to work with.
Instead of the parser generating calls to `collect` and creating
closures, this adds `Expr::Collect` which just evaluates in the same
scope and doesn't require any closure.
When `$in` is detected in an expression, it is replaced with a new
variable (also called `$in`) and wrapped in `Expr::Collect`. During
eval, this expression is evaluated directly, with the input and with
that new variable set to the collected value.
Other than being faster and less prone to gotchas, it also makes it
possible to typecheck the output of an expression containing `$in`,
which is nice. This is a breaking change though, because of the lack of
the closure and because now typechecking will actually happen. Also, I
haven't attempted to typecheck the input yet.
The IR generated now just looks like this:
```gas
collect %in
clone %tmp, %in
store-variable $in, %tmp
# %out <- ...expression... <- %in
drop-variable $in
```
(where `$in` is the local variable created for this collection, and not
`IN_VARIABLE_ID`)
which is a lot better than having to create a closure and call `collect
--keep-env`, dealing with all of the capture gathering and allocation
that entails. Ideally we can also detect whether that input is actually
needed, so maybe we don't have to clone, but I haven't tried to do that
yet. Theoretically now that the variable is a unique one every time, it
should be possible to give it a type - I just don't know how to
determine that yet.
On top of that, I've also reworked how `$in` works in pipeline-initial
position. Previously, it was a little bit inconsistent. For example,
this worked:
```nushell
> 3 | do { let x = $in; let y = $in; print $x $y }
3
3
```
However, this causes a runtime variable not found error on the second
`$in`:
```nushell
> def foo [] { let x = $in; let y = $in; print $x $y }; 3 | foo
Error: nu:🐚:variable_not_found
× Variable not found
╭─[entry #115:1:35]
1 │ def foo [] { let x = $in; let y = $in; print $x $y }; 3 | foo
· ─┬─
· ╰── variable not found
╰────
```
I've fixed this by making the first element `$in` detection *always*
happen at the block level, so if you use `$in` in pipeline-initial
position anywhere in a block, it will collect with an implicit
subexpression around the whole thing, and you can then use that `$in`
more than once. In doing this I also rewrote `parse_pipeline()` and
hopefully it's a bit more straightforward and possibly more efficient
too now.
Finally, I've tried to make `let` and `mut` a lot more straightforward
with how they handle the rest of the pipeline, and using a redirection
with `let`/`mut` now does what you'd expect if you assume that they
consume the whole pipeline - the redirection is just processed as
normal. These both work now:
```nushell
let x = ^foo err> err.txt
let y = ^foo out+err>| str length
```
It was previously possible to accomplish this with a subexpression, but
it just seemed like a weird gotcha that you couldn't do it. Intuitively,
`let` and `mut` just seem to take the whole line.
- closes #13137
# User-Facing Changes
- `$in` will behave more consistently with blocks and closures, since
the entire block is now just wrapped to handle it if it appears in the
first pipeline element
- `$in` no longer creates a closure, so what can be done within an
expression containing `$in` is less restrictive
- `$in` containing expressions are now type checked, rather than just
resulting in `any`. However, `$in` itself is still `any`, so this isn't
quite perfect yet
- Redirections are now allowed in `let` and `mut` and behave pretty much
how you'd expect
# Tests + Formatting
Added tests to cover the new behaviour.
# After Submitting
- [ ] release notes (definitely breaking change)
# Description
Replaces the `dirs_next` family of crates with `dirs`. `dirs_next` was
born when the `dirs` crates were abandoned three years ago, but they're
being maintained again and most projects depend on `dirs` nowadays.
`dirs_next` has been abandoned since.
This came up while working on
https://github.com/nushell/nushell/pull/13382.
# User-Facing Changes
None.
# Tests + Formatting
Tests and formatter have been run.
# After Submitting
# Description
Before this change `default` would dive into lists and replace `null`
elements with the default value.
Therefore there was no easy way to process `null|list<null|string>`
input and receive `list<null|string>`
(IOW to apply the default on the top level only).
However it's trivially easy to apply default values to list elements
with the `each` command:
```nushell
[null, "a", null] | each { default "b" }
```
So there's no need to have this behavior in the `default` command.
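A quick sketch of the difference:
```nushell
[null, "a", null] | default "b"
# before: [b, a, b]
# after:  [null, a, null]  (the list itself is not null, so it passes through)
```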
# User-Facing Changes
* `default` no longer dives into lists to replace `null` elements with
the default value.
# Tests + Formatting
Added a couple of tests for the new (and old) behavior.
# After Submitting
* Update docs.
# Description
The name of the `group` command is a little unclear/ambiguous.
Everything I look at it, I think of `group-by`. I think `chunks` more
clearly conveys what the `group` command does. Namely, it divides the
input list into chunks of a certain size. For example,
[`slice::chunks`](https://doc.rust-lang.org/std/primitive.slice.html#method.chunks)
has the same name. So, this PR adds a new `chunks` command to replace
the now deprecated `group` command.
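For reference, a quick sketch of the new command:
```nushell
[1 2 3 4 5] | chunks 2
# => [[1 2] [3 4] [5]]
```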
The `chunks` command is a refactored version of `group`. As such, there
is a small performance improvement:
```nushell
# $data is a very large list
> bench { $data | chunks 2 } --rounds 30 | get mean
474ms 921µs 190ns
# deprecation warning was disabled here for fairness
> bench { $data | group 2 } --rounds 30 | get mean
592ms 702µs 440ns
> bench { $data | chunks 200 } --rounds 30 | get mean
374ms 188µs 318ns
> bench { $data | group 200 } --rounds 30 | get mean
481ms 264µs 869ns
> bench { $data | chunks 1 } --rounds 30 | get mean
642ms 574µs 42ns
> bench { $data | group 1 } --rounds 30 | get mean
981ms 602µs 513ns
```
# User-Facing Changes
- `group` command has been deprecated in favor of new `chunks` command.
- `chunks` errors when given a chunk size of `0` whereas `group` returns
chunks with one element.
# Tests + Formatting
Added tests for `chunks`, since `group` did not have any tests.
# After Submitting
Update book if necessary.
# Description
Fixes #13359
In an attempt to generate names for flat columns resulting from nested
accesses, #3016 generated new column names on nested selection that, out
of convenience, composed the cell path as a string (including `.`) and
then simply replaced all `.` with `_`. As we permit `.` in column names
as long as you quote them, this surprisingly altered `select`ed columns.
# User-Facing Changes
New columns generated by selection with nested cell paths will for now
be named with a string containing the keys separated by `.` instead of
`_`. We may want to reconsider the semantics for nested access.
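A small sketch of the difference (the record is illustrative):
```nushell
{a: {b: 1}} | select a.b | columns
# before this PR: [a_b]
# with this PR:   [a.b]
```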
# Tests + Formatting
- Alter test to breaking change on nested `select`
Somehow this logic was missed on my end. (I mean, I was not even
thinking about it in the original patch 😄)
Please recheck
Added a regression test too.
closes #13336
cc: @fdncred
GOOD CATCH.............................................................
SORRY
I've added a test to catch regression just in case.
closes #13319
cc: @fdncred
# Description
Bug fix: `PipelineData::check_external_failed()` was not preserving the
original `type_` and `known_size` attributes of the stream passed in for
streams that come from children, so `external-command | into binary` did
not work properly and always ended up still being unknown type.
# User-Facing Changes
The following test case now works as expected:
```nushell
> head -c 2 /dev/urandom | into binary
# Expected: pretty hex dump of binary
# Previous behavior: just raw binary in the terminal
```
# Tests + Formatting
Added a test to cover this to `into binary`
# Description
Provides the ability to use http commands as part of a pipeline.
Additionally, this pull requests extends the pipeline metadata to add a
content_type field. The content_type metadata field allows commands such
as `to json` to set the metadata in the pipeline allowing the http
commands to use it when making requests.
This pull request also introduces the ability to directly stream http
requests from streaming pipelines.
One other small change is that Content-Type will always be set if it is
passed in to the http commands, either indirectly or through the content
type flag. Previously it was not preserved with requests that were not
of type json or form data.
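For example (a sketch; the URL is illustrative), the content type set by `to json` is reused by `http post`:
```nushell
{name: "foo"} | to json | http post https://example.com/api
```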
# User-Facing Changes
* `http post`, `http put`, `http patch`, `http delete` can be used as
part of a pipeline
* `to text`, `to json`, `from json` all set the content_type metadata
field and the http commands will utilize them when making requests.
fixes #13245
# Description
In addition to addressing #13245, this PR also updated one of the doc
examples for the `find` command to document that non-regex mode is case
insensitive, which may surprise some users.
# User-Facing Changes
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
---------
Co-authored-by: Ben Yang <ben@ya.ng>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
# Description
Fixes #11678
The sub-commands of the `from` command (`from {csv, tsv, ssv}`) name
columns starting from index 1.
This behaviour is inconsistent with other commands such as `detect
columns`.
This PR makes the subcommands 0-based.
# User-Facing Changes
The subcommands (`from {csv, tsv, ssv}`) return a table with the columns
starting at index 0 if no header data is passed.
```
~/Development/nushell> "foo bar baz" | from ssv -n -m 1
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo     │ bar     │ baz     │
╰───┴─────────┴─────────┴─────────╯
~/Development/nushell> "foo,bar,baz" | from csv -n
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo     │ bar     │ baz     │
╰───┴─────────┴─────────┴─────────╯
~/Development/nushell> "foo\tbar\tbaz" | from tsv -n
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo     │ bar     │ baz     │
╰───┴─────────┴─────────┴─────────╯
```
# Tests + Formatting
When I ran tests, `commands::touch::change_file_mtime_to_reference`
failed with the following error.
The error also occurs in the master branch, so it's probably unrelated
to these changes.
(maybe a problem with my dev environment)
```
$ toolkit check pr
~~~~~~~~
failures:
---- commands::touch::change_file_mtime_to_reference stdout ----
=== stderr
thread 'commands::touch::change_file_mtime_to_reference' panicked at crates/nu-command/tests/commands/touch.rs:298:9:
assertion `left == right` failed
left: SystemTime { tv_sec: 1719149697, tv_nsec: 57576929 }
right: SystemTime { tv_sec: 1719149697, tv_nsec: 78219489 }
failures:
commands::touch::change_file_mtime_to_reference
test result: FAILED. 1533 passed; 1 failed; 32 ignored; 0 measured; 0 filtered out; finished in 10.87s
error: test failed, to rerun pass `-p nu-command --test main`
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🔴 `toolkit test`
- ⚫ `toolkit test stdlib`
```
# After Submitting
nothing
# Description
@hustcer reported that slashes were disappearing from external args
since #13089:
```
$> ossutil ls oss://abc/b/c
Error: invalid cloud url: "oss:/abc/b/c", please make sure the url starts with: "oss://"
$> ossutil ls 'oss://abc/b/c'
Error: oss: service returned error: StatusCode=403, ErrorCode=UserDisable, ErrorMessage="UserDisable", RequestId=66791EDEFE87B73537120838, Ec=0003-00000801, Bucket=abc, Object=
```
I narrowed this down to the ndots handling, since that does path parsing
and path reconstruction in every case. I decided to change that so that
it only activates if the string contains at least `...`, since that
would be the minimum trigger for ndots, and also to not activate it if
the string contains `://`, since it's probably undesirable for a URL.
Kind of a hack, but I'm not really sure how else we decide whether
someone wants ndots or not.
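A quick sketch of the resulting behavior:
```nushell
^echo oss://abc/b/c   # contains `://`, so it is left untouched
^echo .../foo         # contains `...`, so ndots still expands it to ../../foo
```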
# User-Facing Changes
- bare strings not containing ndots are not modified
- bare strings containing `://` are not modified
# Tests + Formatting
Added tests to prevent regression.
# Description
* As discussed in the comments in #11954, this suppresses the index
column on `cal` output. It does that by running `table -i false` on the
results by default.
* Added new `--as-table/-t` flag to revert to the old behavior and
output the calendar as structured data
* Updated existing tests to use `--as-table`
* Added new tests against the string output
* Updated `length` test which also used `cal`
* Added new example for `--as-table`, with result
# User-Facing Changes
## Breaking change
The *default* `cal` output has changed from a `list` to a `string`. To
obtain structured data from `cal`, use the new `--as-table/-t` flag.
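In short (a sketch):
```nushell
cal              # renders the calendar as a string, without the index column
cal --as-table   # returns the structured table, as before
```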
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
We've had a lot of different issues and PRs related to arg handling with
externals since the rewrite of `run-external` in #12921:
- #12950
- #12955
- #13000
- #13001
- #13021
- #13027
- #13028
- #13073
Many of these are caused by the argument handling of external calls and
`run-external` being very special and involving the parser handing
quoted strings over to `run-external` so that it knows whether to expand
tildes and globs and so on. This is really unusual and also makes it
harder to use `run-external`, and also harder to understand it (and
probably is part of the reason why it was rewritten in the first place).
This PR moves a lot more of that work over to the parser, so that by the
time `run-external` gets it, it's dealing with much more normal Nushell
values. In particular:
- Unquoted strings are handled as globs with no expand
- The unescaped-but-quoted handling of strings was removed, and the
parser constructs normal looking strings instead, removing internal
quotes so that `run-external` doesn't have to do it
- Bare word interpolation is now supported and expansion is done in this
case
- Expressions typed as `Glob` containing `Expr::StringInterpolation` now
produce `Value::Glob` instead, with the quoted status from the expr
passed through so we know if it was a bare word
- Bare word interpolation for values typed as `glob` now possible, but
not implemented
- Because expansion is now triggered by `Value::Glob(_, false)` instead
of looking at the expr, externals now support glob types
# User-Facing Changes
- Bare word interpolation works for external command options, and
otherwise embedded in other strings:
```nushell
^echo --foo=(2 + 2) # prints --foo=4
^echo -foo=$"(2 + 2)" # prints -foo=4
^echo foo="(2 + 2)" # prints (no interpolation!) foo=(2 + 2)
^echo foo,(2 + 2),bar # prints foo,4,bar
```
- Bare word interpolation expands for external command head/args:
```nushell
let name = "exa"
~/.cargo/bin/($name) # this works, and expands the tilde
^$"~/.cargo/bin/($name)" # this doesn't expand the tilde
^echo ~/($name)/* # this glob is expanded
^echo $"~/($name)/*" # this isn't expanded
```
- Ndots are now supported for the head of an external command
(`^.../foo` works)
- Glob values are now supported for head/args of an external command,
and expanded appropriately:
```nushell
^("~/.cargo/bin/exa" | into glob) # the tilde is expanded
^echo ("*.txt" | into glob) # this glob is expanded
```
- `run-external` now works more like any other command, without
expecting a special call convention
for its args:
```nushell
run-external echo "'foo'"
# before PR: 'foo'
# after PR: foo
run-external echo "*.txt"
# before PR: (glob is expanded)
# after PR: *.txt
```
# Tests + Formatting
Lots of tests added and cleaned up. Some tests that weren't active on
Windows changed to use `nu --testbin cococo` so that they can work.
Added a test for Linux only to make sure tilde expansion of commands
works, because changing `HOME` there causes `~` to reliably change.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] release notes: make sure to mention the new syntaxes that are
supported
# Description
This fixes issues with trying to run the tests with a terminal that is
small enough to cause errors to wrap around, or in cases where the test
environment might produce strings that are reasonably expected to wrap
around anyway. "Fancy" errors are too fancy for tests to work
predictably 😉
cc @abusch
# User-Facing Changes
- Added `--error-style` option for use with `--commands` (like
`--table-mode`)
# Tests + Formatting
Surprisingly, all of the tests pass, including in small windows! I only
had to make one change to a test for `error make` which was looking for
the box drawing characters miette uses to determine whether the span
label was showing up - but the plain error style output is even better
and easier to match on, so this test is actually more specific now.
# Description
Fixes: #13105, fixes: #13077
This PR makes `str substring` and `bytes at` work better with negative
indices.
It also fixes the false range semantics of `detect columns -c` in some
cases.
# User-Facing Changes
For `str substring` and `bytes at`, it will no longer return an error if
the start index is larger than the end index. It makes sense to return an
empty string or empty bytes directly.
### Before
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
Error: nu:🐚:type_mismatch
× Type mismatch.
╭─[entry #23:1:10]
1 │ ("aaa" | str substring 2..-3) == ""
· ──────┬──────
· ╰── End must be greater than or equal to Start
2 │ true
╰────
# bytes at
❯ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
Error: nu:🐚:type_mismatch
× Type mismatch.
╭─[entry #27:1:25]
1 │ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
· ────┬───
· ╰── End must be greater than or equal to Start
╰────
```
### After
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
true
# bytes at
❯ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
true
```
# Tests + Formatting
Added some tests, adjust existing tests
# Description
Removes the `which-support` cargo feature and makes all of its
feature-gated code enabled by default in all builds. I'm not sure why
this one command is gated behind a feature. It seems to be a relic of
older code where we had features for what seems like every command.
# Description
This PR updates the uutils/coreutils crates to the latest released
version.
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
Part of https://github.com/nushell/nushell/issues/12963, step 2.
This PR changes the use of `expression.span` to
`expression.span_id` via a new helper `Expression::span()`. A new
`GetSpan` is added to abstract getting the span from both `EngineState`
and `StateWorkingSet`.
# User-Facing Changes
`format pattern` loses the ability to use variables in the pattern,
e.g., `... | format pattern 'value of {$it.name} is {$it.value}'`. This
is because the command did a custom parse-eval cycle, creating spans
that are not merged into the main engine state. We could clone the
engine state, add Clone trait to StateDelta and merge the cloned delta
to the cloned state, but IMO there is not much value from having this
ability, since we have string interpolation nowadays: `... | $"value of
($in.name) is ($in.value)"`.
# Tests + Formatting
# After Submitting
# Description
Enable the `preserve_order` feature of the `toml` crate to preserve the
ordering of elements when converting from/to toml.
Additionally, use `to_string_pretty()` instead of `to_string()` in `to
toml`. This displays arrays on multiple lines instead of one big single
line. I'm not sure if this one is a good idea or not... Happy to remove
this from this PR if it's not.
# User-Facing Changes
The order of elements will be different when using `from toml`. The
formatting of arrays will also be different when using `to toml`. For
example:
- before
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ bar │ 2 │
│ doo │ 3 │
│ foo │ 1 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = ["a", "b", "c", "d"]
```
- after
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ foo │ 1 │
│ bar │ 2 │
│ doo │ 3 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = [
"a",
"b",
"c",
"d",
]
```
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🔴 `toolkit test`
- ⚫ `toolkit test stdlib`
# After Submitting
# Description
Fixes https://github.com/nushell/nushell/issues/12968. After applying
this patch, strings that include an explicit plus sign can be used with
the `into filesize` command.
# User-Facing Changes
AS-IS (before fixing)
```
$ "+8 KiB" | into filesize
Error: nu:🐚:cant_convert
× Can't convert to int.
╭─[entry #31:1:1]
1 │ "+8 KiB" | into filesize
· ────┬───
· ╰── can't convert string to int
╰────
```
TO-BE (after fixing)
```
$ "+8KiB" | into filesize
8.0 KiB
```
# Tests + Formatting
Added a test
# After Submitting
This PR fixes the `path type` command so that it resolves relative paths
using PWD from the engine state.
As a bonus, it also fixes the issue of `path type` returning an empty
string instead of an error when it fails.
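A minimal sketch of the PWD-relative resolution (paths are illustrative):
```nushell
cd /tmp
mkdir foo
'foo' | path type   # => dir, resolved against $env.PWD
```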
# Description
Makes the `from json --objects` command produce a stream, and read
lazily from an input stream to produce its output.
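For example (a sketch; the file name is illustrative), newline-delimited JSON can now be consumed lazily:
```nushell
open --raw events.ndjson | from json --objects | first 10
```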
Also added a helper, `PipelineData::get_type()`, to make it easier to
construct a wrong type error message when matching on `PipelineData`. I
expect checking `PipelineData` for either a string value or an `Unknown`
or `String` typed `ByteStream` will be very, very common. I would have
liked to have a helper that just returns a readable stream from either,
but that would either be a bespoke enum or a `Box<dyn BufRead>`, which
feels like it wouldn't be so great for performance. So instead, taking
the approach I did here is probably better - having a function that
accepts the `impl BufRead` and matching to use it.
# User-Facing Changes
- `from json --objects` no longer collects its input, and can be used
for large datasets or streams that produce values over time.
# Tests + Formatting
All passing.
# After Submitting
- [ ] release notes
---------
Co-authored-by: Ian Manske <ian.manske@pm.me>
# Description
Fixes: https://github.com/nushell/nushell/issues/7761
It's still unclear if we want to change the range semantics themselves,
but it's good to keep range semantics consistent between nushell
commands.
# User-Facing Changes
### Before
```nushell
❯ "abc" | str substring 1..=2
b
```
### After
```nushell
❯ "abc" | str substring 1..=2
bc
```
# Tests + Formatting
Adjust tests to fit new behavior
# Description
Removes the old `nu-cmd-dataframe` crate in favor of the polars plugin.
As such, this PR also removes the `dataframe` feature, related CI, and
full releases of nushell.
# Description
This PR allows byte streams to optionally be colored as being
specifically binary or string data, which guarantees that they'll be
converted to `Binary` or `String` appropriately on `into_value()`,
making them compatible with `Type` guarantees. This makes them
significantly more broadly usable for command input and output.
There is still an `Unknown` type for byte streams coming from external
commands, which uses the same behavior as we previously did where it's a
string if it's UTF-8.
A small number of commands were updated to take advantage of this, just
to prove the point. I will be adding more after this merges.
# User-Facing Changes
- New types in `describe`: `string (stream)`, `binary (stream)`
- These commands now return a stream if their input was a stream:
- `into binary`
- `into string`
- `bytes collect`
- `str join`
- `first` (binary)
- `last` (binary)
- `take` (binary)
- `skip` (binary)
- Streams that are explicitly binary colored will print as a streaming
hexdump
- example:
```nushell
1.. | each { into binary } | bytes collect
```
# Tests + Formatting
I've added some tests to cover it at a basic level, and it doesn't break
anything existing, but I do think more would be nice. Some of those will
come when I modify more commands to stream.
# After Submitting
There are a few things I'm not quite satisfied with:
- **String trimming behavior.** We automatically trim newlines from
streams from external commands, but I don't think we should do this with
internal commands. If I call a command that happens to turn my string
into a stream, I don't want the newline to suddenly disappear. I changed
this to specifically do it only on `Child` and `File`, but I don't know
if this is quite right, and maybe we should bring back the old flag for
`trim_end_newline`
- **Known binary always resulting in a hexdump.** It would be nice to
have a `print --raw`, so that we can put binary data on stdout
explicitly if we want to. This PR doesn't change how external commands
work though - they still dump straight to stdout.
Otherwise, here's the normal checklist:
- [ ] release notes
- [ ] docs update for plugin protocol changes (added `type` field)
---------
Co-authored-by: Ian Manske <ian.manske@pm.me>
# Description
This changes the `collect` command so that it doesn't require a closure.
Still allowed, optionally.
Before:
```nushell
open foo.json | insert foo bar | collect { save -f foo.json }
```
After:
```nushell
open foo.json | insert foo bar | collect | save -f foo.json
```
The closure argument isn't really necessary, as collect values are also
supported as `PipelineData`.
# User-Facing Changes
- `collect` command changed
# Tests + Formatting
Example changed to reflect.
# After Submitting
- [ ] release notes
- [ ] we may want to deprecate the closure arg?
# Description
Fixes: #12690
The issue happened after
https://github.com/nushell/nushell/pull/12056 was merged. It raises an
error if the user doesn't supply a required parameter when running a
closure with `do`, and the parser adds a `$it` parameter when parsing a
closure or block expression.
I believe the previous behavior existed because we allowed such syntax in
a previous version (0.44):
```nushell
let x = { print $it }
```
But it's no longer allowed after 0.60. So I think they can be removed.
# User-Facing Changes
```nushell
let tmp = {
    let it = 42
    print $it
}
do -c $tmp
```
should be possible again.
# Tests + Formatting
Added 1 test
# Description
This PR introduces a `ByteStream` type which is a `Read`-able stream of
bytes. Internally, it has an enum over three different byte stream
sources:
```rust
pub enum ByteStreamSource {
Read(Box<dyn Read + Send + 'static>),
File(File),
Child(ChildProcess),
}
```
This is in comparison to the current `RawStream` type, which is an
`Iterator<Item = Vec<u8>>` and has to allocate for each read chunk.
Currently, `PipelineData::ExternalStream` serves a weird dual role where
it is either external command output or a wrapper around `RawStream`.
`ByteStream` makes this distinction more clear (via `ByteStreamSource`)
and replaces `PipelineData::ExternalStream` in this PR:
```rust
pub enum PipelineData {
Empty,
Value(Value, Option<PipelineMetadata>),
ListStream(ListStream, Option<PipelineMetadata>),
ByteStream(ByteStream, Option<PipelineMetadata>),
}
```
The PR is relatively large, but a decent amount of it is just repetitive
changes.
This PR fixes #7017, fixes #10763, and fixes #12369.
This PR also improves performance when piping external commands. Nushell
should, in most cases, have competitive pipeline throughput compared to,
e.g., bash.
| Command | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| -------------------------------------------------- | -------------:| ------------:| -----------:|
| `throughput \| rg 'x'` | 3059 | 3744 | 3739 |
| `throughput \| nu --testbin relay o> /dev/null` | 3508 | 8087 | 8136 |
# User-Facing Changes
- This is a breaking change for the plugin communication protocol,
because the `ExternalStreamInfo` was replaced with `ByteStreamInfo`.
Plugins now only have to deal with a single input stream, as opposed to
the previous three streams: stdout, stderr, and exit code.
- The output of `describe` has been changed for external/byte streams.
- Temporary breaking change: `bytes starts-with` no longer works with
byte streams. This is to keep the PR smaller, and `bytes ends-with`
already does not work on byte streams.
- If a process core dumped, then instead of having a `Value::Error` in
the `exit_code` column of the output returned from `complete`, it now is
a `Value::Int` with the negation of the signal number.
# After Submitting
- Update docs and book as necessary
- Release notes (e.g., plugin protocol changes)
- Adapt/convert commands to work with byte streams (high priority is
`str length`, `bytes starts-with`, and maybe `bytes ends-with`).
- Refactor the `tee` code, Devyn has already done some work on this.
---------
Co-authored-by: Devyn Cairns <devyn.cairns@gmail.com>
# Description
Fixes #12796 where a combined out and err pipe redirection (`o+e>|`)
into `complete` still provides separate `stdout` and `stderr` columns in
the record. Now, the combined output will be in the `stdout` column.
This PR also fixes a similar error with the `e>|` pipe redirection.
# Tests + Formatting
Added two tests.
This PR has two parts. The first part is the addition of the
`Stack::set_pwd()` API. It strips trailing slashes from paths for
convenience, but will reject otherwise bad paths, leaving PWD in a good
state. This should reduce the impact of faulty code incorrectly trying
to set PWD.
(https://github.com/nushell/nushell/pull/12760#issuecomment-2095393012)
The second part is implementing a PWD recovery mechanism. PWD can become
bad even when we did nothing wrong. For example, Unix allows you to
remove any directory when another process might still be using it, which
means PWD can just "disappear" under our nose. This PR makes it possible
to use `cd` to reset PWD into a good state. Here's a demonstration:
```sh
mkdir /tmp/foo
cd /tmp/foo
# delete "/tmp/foo" in a subshell, because Nushell is smart and refuse to delete PWD
nu -c 'cd /; rm -r /tmp/foo'
ls # Error: × $env.PWD points to a non-existent directory
# help: Use `cd` to reset $env.PWD into a good state
cd /
pwd # prints /
```
Also, auto-cd should be working again.
# Description
Adds subcommands to `sys` corresponding to each column of the record
returned by `sys`. This is to alleviate the fact that `sys` now returns
a regular record, meaning that it must compute every column which might
take a noticeable amount of time. The subcommands, on the other hand,
only need to compute and return a subset of the data which should be
much faster. In fact, it should be as fast as before, since this is how
the lazy record worked (it would compute only each column as necessary).
I chose to add subcommands instead of having an optional cell-path
parameter on `sys`, since the cell-path parameter would:
- increase the code complexity (can access any value at any row or
nested column)
- prevent discovery with tab-completion
- hinder type checking and allow users to pass potentially invalid
columns
# User-Facing Changes
Deprecates `sys` in favor of the new `sys` subcommands.
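For example (a sketch), only the requested section is computed:
```nushell
sys host   # host information only
sys mem    # memory information only
```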
# Description
Fixes: #12429
To fix the issue, we need to pass the input pattern itself to the
`glob_from` function, but currently on latest main, nushell passes the
expanded path of the input pattern to `glob_from`.
This causes globbing to fail if the expanded path includes `[]` brackets.
It's a pity that I have to duplicate the `nu_engine::glob_from` function
into `ls`, because `ls` might convert from `NuGlob::NotExpand` to
`NuGlob::Expand`; in that case, `nu_engine::glob_from` won't work if the
user wants to `ls` a directory whose name includes a tilde:
```
mkdir "~abc"
ls "~abc"
```
So I need to duplicate the `glob_from` function and pass the original
`expand_tilde` information.
# User-Facing Changes
None
# Tests + Formatting
Done
# After Submitting
None
# Description
Make typos config more strict: ignore false positives where they occur.
1. Ignore only files with typos
2. Add regexp-s with context
3. Ignore variable names only in Rust code
4. Ignore only 1 "identifier"
5. Check dot files
🎁 Extra bonus: fix typos!!
# Description
Bumps `base64` to 0.22.1 which fixes the alphabet used for binhex
encoding and decoding. This required updating some test expected output.
Related to PR #12469 where `base64` was also bumped and ran into the
failing tests.
# User-Facing Changes
Bug fix, but still changes binhex encoding and decoding output.
# Tests + Formatting
Updated test expected output.
# Description
Fixes #12758.
#12662 introduced a bug where calling `cd` with a path with a trailing
slash would cause `PWD` to be set to a path including a trailing slash,
which is not allowed. This adds a helper to `nu_path` to remove this,
and uses it in the `cd` command to clean it up before setting `PWD`.
# Tests + Formatting
I added some tests to make sure we don't regress on this in the future.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Judiciously try to avoid allocations/clone by changing the signature of
functions
- **Don't pass str by value unnecessarily if only read**
- **Don't require a vec in `Sandbox::with_files`**
- **Remove unnecessary string clone**
- **Fixup unnecessary borrow**
- **Use `&str` in shape color instead**
- **Vec -> Slice**
- **Elide string clone**
- **Elide `Path` clone**
- **Take &str to elide clone in tests**
# User-Facing Changes
None
# Tests + Formatting
This touches many tests purely in changing from owned to borrowed/static
data
This is the first PR towards migrating to a new `$env.PWD` API that
returns potentially un-canonicalized paths. Refer to PR #12515 for
motivations.
## New API: `EngineState::cwd()`
The goal of the new API is to cover both parse-time and runtime use
case, and avoid unintentional misuse. It takes an `Option<Stack>` as
argument, which if supplied, will search for `$env.PWD` on the stack in
additional to the engine state. I think with this design, there's less
confusion over parse-time and runtime environments. If you have access
to a stack, just supply it; otherwise supply `None`.
## Deprecation of other PWD-related APIs
Other APIs are re-implemented using `EngineState::cwd()` and properly
documented. They're marked deprecated, but their behavior is unchanged.
Unused APIs are deleted, and code that accesses `$env.PWD` directly
without using an API is rewritten.
Deprecated APIs:
* `EngineState::current_work_dir()`
* `StateWorkingSet::get_cwd()`
* `env::current_dir()`
* `env::current_dir_str()`
* `env::current_dir_const()`
* `env::current_dir_str_const()`
Other changes:
* `EngineState::get_cwd()` (deleted)
* `StateWorkingSet::list_env()` (deleted)
* `repl::do_run_cmd()` (rewritten with `env::current_dir_str()`)
## `cd` and `pwd` now use logical paths by default
This pulls the changes from PR #12515. It's currently somewhat broken
because using non-canonicalized paths exposed a bug in our path
normalization logic (Issue #12602). Once that is fixed, this should
work.
## Future plans
This PR needs some tests. Which test helpers should I use, and where
should I put those tests?
I noticed that unquoted paths are expanded within `eval_filepath()` and
`eval_directory()` before they even reach the `cd` command. This means
every path is expanded twice. Is this intended?
Once this PR lands, the plan is to review all usages of the deprecated
APIs and migrate them to `EngineState::cwd()`. In the meantime, these
usages are annotated with `#[allow(deprecated)]` to avoid breaking CI.
---------
Co-authored-by: Jakub Žádník <kubouch@gmail.com>
# Description
Removes lazy records from the language, following from the reasons
outlined in #12622. Namely, this should make semantics more clear and
will eliminate concerns regarding maintainability.
# User-Facing Changes
- Breaking change: `lazy make` is removed.
- Breaking change: `describe --collect-lazyrecords` flag is removed.
- `sys` and `debug info` now return regular records.
# After Submitting
- Update nushell book if necessary.
- Explore new `sys` and `debug info` APIs to prevent them from taking
too long (e.g., subcommands or taking an optional column/cell-path
argument).
# Description
This PR adds raw string support by using `r#` at the beginning of single
quoted strings and `#` at the end.
Notice that escapes are not processed, even within single quotes;
parentheses don't mean anything, $variables don't mean anything. It's
just a string.
```nushell
❯ echo r#'one\ntwo (blah) ($var)'#
one\ntwo (blah) ($var)
```
Notice how they work without `echo` or `print` and how they work without
carriage returns.
```nushell
❯ r#'adsfa'#
adsfa
❯ r##"asdfa'@qpejq'##
asdfa'@qpejq
❯ r#'asdfasdfasf
∙ foqwejfqo@'23rfjqf'#
```
They also have a special configurable color in the repl. (use single
quotes though)
![image](https://github.com/nushell/nushell/assets/343840/8780e21d-de4c-45b3-9880-2425f5fe10ef)
They should work like rust raw literals and allow `r##`, `r###`,
`r####`, etc, to help with having one or many `#`'s in the middle of
your raw-string.
They should work with `let` as well.
```nushell
r#'some\nraw\nstring'# | str upcase
```
closes https://github.com/nushell/nushell/issues/5091
# User-Facing Changes
# Tests + Formatting
# After Submitting
---------
Co-authored-by: WindSoilder <WindSoilder@outlook.com>
Co-authored-by: Ian Manske <ian.manske@pm.me>
# Description
Prior, it seemed that nested errors would not get detected and shown.
This PR fixes that.
Resolves #10176:
```
~/CodingProjects/nushell> [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } } 05/04/2024 21:34:08
Error: nu:🐚:eval_block_with_input
× Eval block failed with pipeline input
╭─[entry #1:1:3]
1 │ [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } }
· ┬
· ╰── source value
╰────
Error: × oh noes
╭─[entry #1:1:36]
1 │ [[1,2]] | each {|x| $x | each {|y| error make {msg: "oh noes"} } }
· ─────┬────
· ╰── originates from here
╰────
```
Resolves #11224:
```
~/CodingProjects/nushell> [0] | each { |_| 05/04/2024 21:35:40
::: [0] | each { |_|
::: non-existent-command
::: }
::: }
Error: nu:🐚:eval_block_with_input
× Eval block failed with pipeline input
╭─[entry #1:2:6]
1 │ [0] | each { |_|
2 │ [0] | each { |_|
· ┬
· ╰── source value
3 │ non-existent-command
╰────
Error: nu:🐚:external_command
× External command failed
╭─[entry #1:3:9]
2 │ [0] | each { |_|
3 │ non-existent-command
· ──────────┬─────────
· ╰── executable was not found
4 │ }
╰────
help: No such file or directory (os error 2)
```
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
Bandaid fix for #12643, where it is not possible to get the exit code of
a failed external command while also having the external command inherit
nushell's stdout and stderr. This changes `try` so that the exit code of the
external command is available in the `catch` block via the usual
`$env.LAST_EXIT_CODE`.
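A hedged sketch of the behavior this enables (the failing external command is illustrative):
```nushell
# the external command inherits stdout/stderr, yet the catch block can
# still read its exit code through $env.LAST_EXIT_CODE
try { ^false } catch { print $"external failed with code ($env.LAST_EXIT_CODE)" }
```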
# Tests + Formatting
Added one test.
# After Submitting
Rework I/O redirection and possibly exit codes.
# Description
Resolves #12654.
# User-Facing Changes
`grid` can now throw an error.
# Tests + Formatting
Added relevant test.
# Description
I thought about bringing `nu_plugin_msgpack` in, but that is MPL with a
clause that prevents other licenses, so rather than adapt that code I
decided to take a crack at just doing it straight from `rmp` to `Value`
without any `rmpv` in the middle. It seems like it's probably faster,
though I can't say for sure how much with the plugin overhead.
@IanManske I started on a `Read` implementation for `RawStream` but just
specialized to `from msgpack` here, but I'm thinking after release maybe
we can polish it up and make it a real one. It works!
# User-Facing Changes
New commands:
- `from msgpack`
- `from msgpackz`
- `to msgpack`
- `to msgpackz`
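As a rough usage sketch (the values are illustrative), a roundtrip through the new commands might look like this:
```nushell
# serialize a record to MessagePack and read it straight back
{foo: 1, bar: [1 2 3]} | to msgpack | from msgpack
# the compressed variant works the same way
{foo: 1} | to msgpackz | from msgpackz
```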
# Tests + Formatting
Pretty thorough tests added for the format deserialization, with a
roundtrip for serialization. Some example tests too for both `from
msgpack` and `to msgpack`.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] update release notes
# Description
Adds a new keyword, `plugin use`. Unlike `register`, this merely loads
the signatures from the plugin cache file. The file is configurable with
the `--plugin-config` option either to `nu` or to `plugin use` itself,
just like the other `plugin` family of commands. At the REPL, one might
do this to replace `register`:
```nushell
> plugin add ~/.cargo/bin/nu_plugin_foo
> plugin use foo
```
This will not work in a script, because `plugin use` is a keyword and
`plugin add` does not evaluate at parse time (intentionally). This means
we no longer run random binaries during parse.
The `--plugins` option has been added to allow running `nu` with certain
plugins in one step. This is used especially for the `nu_with_plugins!`
test macro, but I'd imagine is generally useful. The only weird quirk is
that it has to be a list, and we don't really do this for any of our
other CLI args at the moment.
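A hedged sketch of the new option (the plugin path, the command name, and the exact quoting of the list are assumptions on my part):
```nushell
# start nu with a specific plugin's signatures loaded, without `register`
nu --plugins '[~/.cargo/bin/nu_plugin_foo]' -c "foo"
```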
`register` now prints a deprecation parse warning.
This should fix #11923, as we now have a complete alternative to
`register`.
# User-Facing Changes
- Add `plugin use` command
- Deprecate `register`
- Add `--plugins` option to `nu` to replace a common use of `register`
# Tests + Formatting
I think I've tested it thoroughly enough and every existing test passes.
Testing nu CLI options and alternate config files is a little hairy and
I wish there were some more generic helpers for this, so this will go on
my TODO list for refactoring.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Update plugins sections of book
- [ ] Release notes
# Description
When saving to a file, we currently check whether the data source in
the pipeline metadata is the same as the file we are saving to. If so,
we create an error, since reading and writing to a file at the same time
is currently not supported/handled gracefully. However, there are still
a few instances where this error is not properly triggered, and so this
PR attempts to reduce these cases. Inspired by #12599.
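A hedged illustration of the kind of pipeline this is meant to catch (the file name is illustrative):
```nushell
# reading and writing the same file in one pipeline should now raise an
# error instead of silently clobbering the data
open data.json | save --force data.json
```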
# Tests + Formatting
Added a few tests.
# After Submitting
Some commands still do not properly preserve metadata (e.g., `str trim`)
and so prevent us from detecting this error.
# Description
In order to change the style of the _serialized_ NUON data,
`nuon::to_nuon` takes three mutually exclusive arguments: `raw: bool`,
`tabs: Option<usize>`, and `indent: Option<usize>` 🤔
This begs for an enumeration with all the possible alternatives, right?
this PR changes the signature of `nuon::to_nuon` to use `nuon::ToStyle`
which has three variants
- `Raw`: no newlines
- `Tabs(n: usize)`: newlines and `n` tabulations as indent
- `Spaces(n: usize)`: newlines and `n` spaces as indent
# User-Facing Changes
the signature of `nuon::to_nuon` changes from
```rust
to_nuon(
input: &Value,
raw: bool,
tabs: Option<usize>,
indent: Option<usize>,
span: Option<Span>,
) -> Result<String, ShellError>
```
to
```rust
to_nuon(
input: &Value,
style: ToStyle,
span: Option<Span>
) -> Result<String, ShellError>
```
# Tests + Formatting
# After Submitting
# Description
Close: #12514
# User-Facing Changes
`^ls | skip 1` will raise an error
```nushell
❯ ^ls | skip 1
Error: nu:🐚:only_supports_this_input_type
× Input type not supported.
╭─[entry #1:1:2]
1 │ ^ls | skip 1
· ─┬ ──┬─
· │ ╰── only list, binary or range input data is supported
· ╰── input type: raw data
╰────
```
# Tests + Formatting
Sorry, I can't add a test because of this issue:
https://github.com/nushell/nushell/issues/12558
# After Submitting
N/A
# Description
While playing with the NUON format in Rust code in some plugins, we agreed
with the team that it was a great time to create a standalone NUON crate to
allow Rust devs to use this Nushell file format.
> **Note**
> this PR almost copy-pastes the code from
`nu-command/src/formats/from/nuon.rs` and
`nu-command/src/formats/to/nuon.rs` to `nuon/src/from.rs` and
`nuon/src/to.rs`, with minor tweaks to make them standalone functions,
e.g. removing the rest of the command implementations
### TODO
- [x] add tests
- [x] add documentation
# User-Facing Changes
Devs will have access to a new crate, `nuon`, and two functions,
`from_nuon` and `to_nuon`:
```rust
from_nuon(
input: &str,
span: Option<Span>,
) -> Result<Value, ShellError>
```
```rust
to_nuon(
input: &Value,
raw: bool,
tabs: Option<usize>,
indent: Option<usize>,
span: Option<Span>,
) -> Result<String, ShellError>
```
# Tests + Formatting
I've basically taken all the tests from
`crates/nu-command/tests/format_conversions/nuon.rs` and converted them
to use `from_nuon` and `to_nuon` instead of Nushell commands.
- I've created a `nuon_end_to_end` test to run both conversions with an
optional middle value to check that all is fine.
> **Note**
> the `nuon::tests::read_code_should_fail_rather_than_panic` test gives
> different results locally and in the CI...
> I've left it ignored with comments to help future us :)
# After Submitting
mention that in the release notes for sure!!
# Description
When a closure is provided to `group-by`, errors that occur in the
closure are currently ignored. That is, `group-by` will fall back and
use the `"error"` key if an error occurs. For example, the code snippet
below will group all `ls` entries under the `"error"` column.
```nushell
ls | group-by { get nope }
```
This PR changes `group-by` to instead bubble up any errors triggered
inside the closure. In addition, this PR also does some refactoring and
cleanup inside `group-by`.
# User-Facing Changes
Errors are now returned from the closure provided to `group-by` instead
of falling back to the `"error"` group/key.
# Description
Work for #7149
- **Error `with-env` given uneven count in list form**
- **Fix `with-env` `CantConvert` to record**
- **Error `with-env` when given protected env vars**
- **Deprecate list/table input of vars to `with-env`**
- **Remove examples for deprecated input**
# User-Facing Changes
## Deprecation of the following forms
```
> with-env [MYENV "my env value"] { $env.MYENV }
my env value
> with-env [X Y W Z] { $env.X }
Y
> with-env [[X W]; [Y Z]] { $env.W }
Z
```
## recommended standardized form
```
# Set by key-value record
> with-env {X: "Y", W: "Z"} { [$env.X $env.W] }
╭───┬───╮
│ 0 │ Y │
│ 1 │ Z │
╰───┴───╯
```
## (Side effect) Repeated definitions in an env shorthand are now
disallowed
```
> FOO=bar FOO=baz $env
Error: nu:🐚:column_defined_twice
× Record field or table column used twice: FOO
╭─[entry #1:1:1]
1 │ FOO=bar FOO=baz $env
· ─┬─ ─┬─
· │ ╰── field redefined here
· ╰── field first defined here
╰────
```
# Description
Close: #12147, Close: #11796
About the change: it moves pattern handling into a dedicated function per
command: `ls_for_one_pattern` (for `ls`) and `du_for_one_pattern` (for `du`).
It then iterates over the user-supplied patterns, calls these core functions,
and chains the resulting iterators into one `PipelineData`.
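A hedged sketch of what this enables, assuming the reworked commands accept several patterns as rest arguments (the patterns are illustrative):
```nushell
# each pattern is handled by the per-pattern helper and the results are
# chained into a single pipeline
ls *.toml *.lock
du crates/nu-command crates/nu-parser
```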
# Description
- Refactors `first` and `last` using `Vec::truncate` and `Vec::drain`.
- `std::mem::take` was also used to eliminate a few `Value` clones.
- The `NeedsPositiveValue` error now uses the span of the `rows`
argument instead of the call head span.
- `last` now errors on an empty stream to match `first`, which already
errors.
- Made metadata preservation more consistent.
# User-Facing Changes
Breaking change: `last` now errors on an empty stream to match `first`,
which already errors.
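A hedged sketch of the change:
```nushell
[] | first   # errors, as before
[] | last    # now also errors, matching `first`
```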
# Description
Fixes: #11996
After this change, `let t = timeit ^ls` will list the current directory to
stdout.
```
❯ let t = timeit ^ls
CODE_OF_CONDUCT.md Cargo.lock Cross.toml README.md aaa benches devdocs here11 scripts target toolkit.nu wix
CONTRIBUTING.md Cargo.toml LICENSE a.txt assets crates docker rust-toolchain.toml src tests typos.toml
```
If users don't want this behavior, they can easily redirect stdout to the
null device from `std`:
```
> use std
> let t = timeit { ^ls o> (std null-device) }
```
# User-Facing Changes
N/A
# Tests + Formatting
Done
# After Submitting
N/A
---------
Co-authored-by: Ian Manske <ian.manske@pm.me>
# Description
This is an attempt to isolate the unit tests from whatever might be in
the user's config. If the
user's config is broken in some way or incompatible with this version
(for example, especially if
there are plugins that aren't built for this version), tests can
spuriously fail.
This makes tests pass more reliably, the same way they would on CI, even
if the user has a config, and it should also make them run faster.
I think this is _good enough_, but I still think we should have a
specific config dir env variable for nushell specifically (rather than
having to use `XDG_CONFIG_HOME`, which would mess with other things) and
then we can just have `nu-test-support` set that to a temporary dir
containing the shipped default config files.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Currently, `Range` is a struct with `from`, `to`, and `incr` fields,
which are all of type `Value`. This PR changes `Range` to be an enum over
`IntRange` and `FloatRange` for better type safety / stronger compile
time guarantees.
Fixes: #11778, #11777, #11776, #11775, #11774, #11773, #11769.
# User-Facing Changes
Hopefully none, besides bug fixes.
Although the `serde` representation might have changed.
# Description
Resolves #11756.
Resolves #12346.
As per the description, the shell no longer hangs:
```
~/CodingProjects/nushell> [1 2 3] | select (-2)
Error: nu:🐚:cant_convert
× Can't convert to cell path.
╭─[entry #1:1:18]
1 │ [1 2 3] | select (-2)
· ──┬─
· ╰── can't convert negative number to cell path
╰────
```
# User-Facing Changes
# Tests + Formatting
Added relevant test 🚀
# After Submitting
Possibly support `get`ting negative numbers with `get`, as per the #12346
discussion. Alternatively, we could consider adding a cell-path form for
negative indexing.
# Description
This fixes #12391.
nushell/nushell@87c5f6e455 accidentally introduced a bug where the path
was not being properly
expanded according to the cwd. This makes both `touch` and `mkdir` use
globs just like the rest of the commands, to preserve tilde behavior
while still expanding the paths properly.
This doesn't actually expand the globs. Should it?
# User-Facing Changes
- Restore behavior of `mkdir`, `touch`
- Help text now says they can take globs, but they won't actually expand
them; maybe this should be changed.
# Tests + Formatting
Regression tests added.
# After Submitting
This is severe enough and should be included in the point release.
# Description
Fixes how the directory permissions are calculated in `mkdir`. Instead
of subtraction, the umask is actually used as a mask: it is negated and
combined with the default mode via a bitwise AND. This matches how [uucore
calculates](cac7155fba/src/uu/mkdir/src/mkdir.rs (L61))
the mode.
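A hedged sketch of the mask arithmetic only, not of the implementation itself (the values are illustrative):
```nushell
# mode = default & !umask: strip exactly the bits the umask masks out,
# instead of doing a plain subtraction
let default_mode = 0o777
let umask = 0o022
$default_mode - ($default_mode | bits and $umask)   # => 493, i.e. 0o755
```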
# Description
This PR addresses feedback from
https://github.com/nushell/nushell/pull/12277#issuecomment-2027246752
I think it's fine to replace the `--legacy` flag with a `--guess` flag, and
only use the `guess_width` algorithm when `--guess` is provided.
# User-Facing Changes
This way it won't be a breaking change relative to the previous version.
# Description
In #10232, the allowed input types were changed to be stricter, only
allowing records with types that can easily map onto sqlite equivalents.
Unfortunately, null was left out of the accepted input types, which
makes inserting rows with null values impossible.
This change fixes that by accepting null values as input.
One caveat of this is that when the command is creating a new table, it
uses the first row to infer an appropriate sqlite schema. If the first
row contains a null value, then it is impossible to tell which type this
column is supposed to have.
Throwing a hard error seems undesirable from a UX perspective, but
guessing can lead to a potentially useless database if we guess wrong.
So as a compromise, for null columns, we will assume the sqlite type is
TEXT and print a warning so the user knows. For the time being, if users
can't avoid a first row with null values but also want the right
schema, they are advised to create their table before running `into
sqlite`.
A future PR can add the ability to explicitly specify a schema.
Fixes #12225
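A hedged example of the caveat described above (file, column names, and values are illustrative):
```nushell
# the first row's null column cannot be typed from the data, so it is
# created as TEXT and a warning is printed
[{name: null, score: 1} {name: "foo", score: 2}] | into sqlite scores.db
```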
# Tests + Formatting
* Tests added to cover expected behavior around insertion of null values
# Description
Closes https://github.com/nushell/nushell/issues/12257
This was down to the use of `eval_block` instead of
`eval_block_with_early_return`. We may want to reconsider how we
differentiate between these behaviors. We still need to check all the
remaining commands that can invoke a closure block to confirm they properly
handle `ShellError::Return` as passing a `Value`.
- **Add test for `return` in `filter` closure**
- **Fix use of `return` in `filter` closure**
# User-Facing Changes
You can now return a value from a `filter` closure.
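A hedged sketch of the now-working pattern (the predicate is illustrative):
```nushell
# `return` inside the closure now yields the predicate value instead of
# escaping as an unhandled early return
[1 2 3] | filter {|x| if $x == 2 { return true }; false }   # => [2]
```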
# Tests + Formatting
Regression test
# Description
@fdncred found another histogram-based algorithm to detect columns and
rewrote it in Rust: https://github.com/fdncred/guess-width
I have tested it manually, and it works well with `df`, `docker ps`, and
`^ps`. This PR uses the algorithm in `detect columns`.
Fix: #4183
The pitfalls of the new algorithm:
1. it may not work well if there aren't many rows of input
2. it may not work well if values are much shorter than their headers,
e.g.:
```
c1 c2 c3 c4 c5
a b c d e
g h i j k
g a a q d
a v c q q | detect columns
```
In this case, users might need to use ~~`--old`~~ `--legacy` to make it
work well.
# User-Facing Changes
Users might need to add ~~`--old`~~ `--legacy` to their scripts if they find
`detect columns` broken.
# Tests + Formatting
Done
# After Submitting
N/A
Hi,
This PR aims to implement the first iteration of `uname` using
`uutils`. A couple of things:
* Currently my [PR](https://github.com/uutils/coreutils/pull/5921) with
the required changes is pending in the `uutils` repo.
* The number of flags probably has to be investigated; still, the tests
cover all of them.
# Description
# User-Facing Changes
# Tests + Formatting
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- [X] `cargo fmt --all -- --check` to check standard code formatting
(`cargo fmt --all` applies these changes)
- [X] `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used`
to check that you're using the standard code style
- [X] `cargo test --workspace` to check that all tests pass (on Windows
make sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- [X] `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library
> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it automatically
> toolkit check pr
> ```
# After Submitting
---------
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
# Description
Fixes: #11887, Fixes: #11626
This PR unifies the tilde-expansion behavior across several filesystem-related
commands. It follows the same rules as glob expansion:
| command | result |
| ----------- | ------ |
| `ls ~/aaa` | expand tilde |
| `ls "~/aaa"` | don't expand tilde |
| `let f = "~/aaa"; ls $f` | don't expand tilde; if you want to, use `ls ($f \| path expand)` |
| `let f: glob = "~/aaa"; ls $f` | expand tilde (it is not expanded for the `mkdir` and `touch` commands) |
Actually, I'm not sure about the 4th item; it currently expands only because
it follows the same rule as glob expansion.
### About the change
It changes `expand_path_with` to accept a new argument called
`expand_tilde`. If it's true, the tilde is expanded; if not, it's kept as a
literal `~`.
# User-Facing Changes
After this change, `ls "~/aaa"` won't expand tilde.
# Tests + Formatting
Done
# Description
This PR adds a `--params` parameter to `query db`. This closes #11643.
You can't combine named and positional parameters; I think this
might be a limitation of rusqlite itself. I tried using named
parameters with indices like `{ ':named': 123, '1': "positional" }` but
that always failed with a rusqlite error. On the flip side, the other
way around works: for something like `VALUES (:named, ?)`, you can treat
both as positional: `-p [hello 123]`.
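A couple of hedged usage sketches (database, table, and values are illustrative):
```nushell
# named parameters: note the quoted ':name' key, as discussed below
open my.db | query db "SELECT * FROM person WHERE name = :name" --params { ':name': 'Albert' }
# positional parameters passed as a list
open my.db | query db "SELECT * FROM person WHERE age > ?" --params [30]
```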
This PR introduces some very gnarly code repetition in
`prepared_statement_to_nu_list`. I tried, I swear; the compiler wasn't
having any of it, it kept telling me to box my closures and then it said
that the reference lifetimes were incompatible in the match arms. I gave
up and put the mapping code in the match itself, but I'm still not
happy.
Another thing I'm unhappy about: I don't like how you have to put the
`:colon` in named parameters. I think nushell should insert it if it's
[missing](https://www.sqlite.org/lang_expr.html#parameters). But this is
the way [rusqlite
works](https://docs.rs/rusqlite/latest/rusqlite/trait.Params.html#example-named),
so for now, I'll let it be consistent. Just know that it's not really a
blocker, and it isn't a compatibility change to later make `{ colon: 123
}` work, without the quotes and `:`. This would require allocating and
turning our pretty little `&str` into a `String`, though.
# User-Facing Changes
Less incentive to leave yourself open to SQL injection with statements
like `query db $"INSERT INTO x VALUES \($unsafe_user_input)"`.
Additionally, the `$""` syntax being annoying with parentheses plays in
our favor, making users even more likely to use `?` with `--params`.
# Tests + Formatting
Hehe
fixes #11900
# Description
Use `serde_json` instead.
# User-Facing Changes
The problem described in the issue no longer persists:
- no whitespace in the output of `to json --raw`
- unicode escapes are now output consistently as `\uffff`
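A hedged illustration of the new output:
```nushell
# compact output from serde_json contains no extra whitespace
{a: 1, b: [2 3]} | to json --raw   # => {"a":1,"b":[2,3]}
```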
# Tests + Formatting
I corrected all tests that were affected by this change.
closes #12115
# Description
This fix addresses a bug where the `--tabs` flag couldn't be used due
to improper handling of the tab quantity provided by the user.
Previously, the code mistakenly attempted to convert the tab quantity to
a boolean value, leading to a conversion error. The fix adjusts
the condition clauses to properly validate the presence of the
flag's value: the code now checks whether `get_flag()` returns a value
or `None` for the `--tabs` flag. This adjustment
enables the `--tabs` flag to function correctly, triggering the
appropriate condition and allowing the conversion to proceed as
expected. The same fix applies to the `--indent` flag. Additionally,
a default case was added, and the conversion now works properly without
flags. Two tests were added to validate the corrected behavior of these
flags.
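Hedged examples of the flags that now work (the values are illustrative):
```nushell
{a: {b: 1}} | to json --tabs 1     # indent using tabs
{a: {b: 1}} | to json --indent 4   # indent using 4 spaces
```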
# User-Facing Changes
Now the conversion should work properly instead of displaying an error.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
To run the added tests:
- `cargo test --package nu-command --test main -- format_conversions::json::test_tabs_indent_flag`
- `cargo test --package nu-command --test main -- format_conversions::json::test_indent_flag`
# Description
With this change, `mkdir` mirrors how coreutils works. Closes #12161.
I referred to the implementation of `mkdir` in uutils/coreutils. I added
`uucore`, which is required for the implementation, to the dependencies.
Since `uucore` is already included in the dependencies of `uu_mkdir`, I
don't think there will be any additional dependencies.
# User-Facing Changes
- Directories are created according to `umask` except for Windows.
# Tests + Formatting
I added a `mkdir` test that takes permissions into account. The test assumes
that the default `umask` is `022`.
# After Submitting
# Description
Fixes #12193, where the `$in` value may be null for closures provided to
`insert`.
# User-Facing Changes
The `$in` value will now always be the same as the closure parameter for
`insert`.
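A hedged sketch (the column names are illustrative):
```nushell
# inside the closure, $in now always equals the closure parameter (the row)
[{a: 1} {a: 2}] | insert b {|row| $in.a * 10 }   # => [{a: 1, b: 10}, {a: 2, b: 20}]
```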
# Description
The PR overhauls how IO redirection is handled, allowing more explicit
and fine-grain control over `stdout` and `stderr` output as well as more
efficient IO and piping.
To summarize the changes in this PR:
- Added a new `IoStream` type to indicate the intended destination for a
pipeline element's `stdout` and `stderr`.
- The `stdout` and `stderr` `IoStream`s are stored in the `Stack` to
avoid adding 6 additional arguments to every eval function and
`Command::run`. The `stdout` and `stderr` streams can be temporarily
overwritten through functions on `Stack`, and these functions return
a guard that restores the original `stdout` and `stderr` when dropped.
- In the AST, redirections are now directly part of a `PipelineElement`
as an `Option<Redirection>` field instead of having multiple different
`PipelineElement` enum variants for each kind of redirection. This
required changes to the parser, mainly in `lite_parser.rs`.
- `Command`s can also set an `IoStream` override/redirection which will
apply to the previous command in the pipeline. This is used, for
example, in `ignore` to allow the previous external command to have its
stdout redirected to `Stdio::null()` at spawn time. In contrast, the
current implementation has to create an os pipe and manually consume the
output on nushell's side. File and pipe redirections (`o>`, `e>`, `e>|`,
etc.) have precedence over overrides from commands.
This PR improves piping and IO speed, partially addressing #10763. Using
the `throughput` command from that issue, this PR gives the following
speedup on my setup for the commands below:
| Command | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| --------------------------- | -------------:| ------------:| -----------:|
| `throughput o> /dev/null` | 1169 | 52938 | 54305 |
| `throughput \| ignore` | 840 | 55438 | N/A |
| `throughput \| null` | Error | 53617 | N/A |
| `throughput \| rg 'x'` | 1165 | 3049 | 3736 |
| `(throughput) \| rg 'x'` | 810 | 3085 | 3815 |
(Numbers above are the median samples for throughput)
This PR also paves the way to refactor our `ExternalStream` handling in
the various commands. For example, this PR already fixes the following
code:
```nushell
^sh -c 'echo -n "hello "; sleep 0; echo "world"' | find "hello world"
```
This returns an empty list on 0.90.1 and returns a highlighted "hello
world" on this PR.
Since the `stdout` and `stderr` `IoStream`s are available to commands
when they are run, this unlocks the potential for more convenient
behavior. E.g., the `find` command can disable its ansi highlighting if
it detects that the output `IoStream` is not the terminal. Knowing the
output streams will also allow background job output to be redirected
more easily and efficiently.
# User-Facing Changes
- External commands returned from closures will be collected (in most
cases):
```nushell
1..2 | each {|_| nu -c "print a" }
```
This gives `["a", "a"]` on this PR, whereas this used to print "a\na\n"
and then return an empty list.
```nushell
1..2 | each {|_| nu -c "print -e a" }
```
This gives `["", ""]` and prints "a\na\n" to stderr, whereas this used
to return an empty list and print "a\na\n" to stderr.
- Trailing new lines are always trimmed for external commands when
piping into internal commands or collecting it as a value. (Failure to
decode the output as utf-8 will keep the trailing newline for the last
binary value.) In the current nushell version, the following three code
snippets differ only in parenthesis placement, but they all also have
different outputs:
1. `1..2 | each { ^echo a }`
```
a
a
╭────────────╮
│ empty list │
╰────────────╯
```
2. `1..2 | each { (^echo a) }`
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
3. `1..2 | (each { ^echo a })`
```
╭───┬───╮
│ 0 │ a │
│ │ │
│ 1 │ a │
│ │ │
╰───┴───╯
```
But in this PR, the above snippets will all have the same output:
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
- All existing flags on `run-external` are now deprecated.
- File redirections now apply to all commands inside a code block:
```nushell
(nu -c "print -e a"; nu -c "print -e b") e> test.out
```
This gives "a\nb\n" in `test.out` and prints nothing. The same result
would happen when printing to stdout and using a `o>` file redirection.
- External command output will (almost) never be ignored, and ignoring
output must be explicit now:
```nushell
(^echo a; ^echo b)
```
This prints "a\nb\n", whereas this used to print only "b\n". This only
applies to external commands; values and internal commands not in return
position will not print anything (e.g., `(echo a; echo b)` still only
prints "b").
- `complete` now always captures stderr (`do` is not necessary).
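A hedged sketch of that last point (the external command is illustrative):
```nushell
# no `do { ... }` wrapper is needed anymore; stderr is captured too
^sh -c 'echo out; echo err 1>&2' | complete
# => a record with stdout, stderr, and exit_code fields
```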
# After Submitting
The language guide and other documentation will need to be updated.
# Description
There are lots of duplicate tests for `cp` because we once had an
`old-cp` command.
Now that `old-cp` has been removed, there is no need to keep these tests.
# Description
Fixes some ignored clippy lints.
# User-Facing Changes
Changes some signatures and return types to `&dyn Command` instead of
`&Box<dyn Command>`, but I believe this is only an internal change.
# Description
The intended effect of the `extra` feature has been undermined by
introducing the full builds on our release pages and having more
activity on some of the extra commands.
To simplify the feature matrix let's get rid of it and focus our effort
on truly either refining a command to well-specified behavior or
discarding it entirely from the `nu` binary and moving it into plugins.
## Details
- Remove `--features extra` from CI
- Don't explicitly name `extra` in full build wf
- Remove feature extra from build-help scripts
- Update README in `nu-cmd-extra`
- Remove feature `extra`
- Fix previously dead `format pattern` tests
- Relax signature of `to html`
- Fix/ignore `html::test_no_color_flag`
- Remove dead features from `version`
- Refine `to html` type signature
# User-Facing Changes
The commands that were previously only available when building with
`--features extra` will now be available to everyone. This increases the
number of dependencies slightly but has a limited impact on the overall
binary size.
# Tests + Formatting
Some tests that were left in `nu-command` during cratification were dead
because the feature was not passed to `nu-command` and only to
`nu-cmd-lang` for feature-flag mention in `version`.
Those tests have now been either fixed or ignored in one case.
# After Submitting
There may be places in the documentation where we point to `--features
extra` that will now be moot (apart from the generated command help).