Closes #13687, closes #13686
# Description
Light refactoring of `send_request` in `client.rs`. In the end there are
more lines, but the logic is now more concise and makes it easier to add
new conditions in the future. Unit tests ran fine and I tested a few
cases manually.
Cool project btw, I'll be using nushell from now on.
# Description
This changes the behavior of `tee` to be more transparent when given a
value that isn't a list or range. Previously, anything that wasn't a
byte stream would be converted to a list stream using the iterator
implementation, which led to some surprising results. Instead, now, if
the value is a string or binary, it will be treated the same way a byte
stream is, and the output of `tee` is a byte stream instead of the
original value. This is done so that we can synchronize with the other
thread on collect, and potentially capture any error produced by the
closure.
For values that can't be converted to streams, the closure is just run
on another thread with a clone of the value. Because we can't
wait for the other thread, there is no way to send an error back to the
original thread, so instead it's just written to stderr using
`report_error_new()`.
There are a couple of follow-up edge cases I see where byte streams
aren't necessarily treated exactly the same way strings are, but this
should mostly be a good experience.
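For illustration, a minimal sketch of the new behavior (the values and closures are just examples):
```nushell
# string input: the closure sees the same bytes, and `tee` now outputs a byte stream
"hello" | tee { print }
# non-streamable value: the closure runs on another thread with a clone of the value
42 | tee { print }
```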
Fixes #13489.
# User-Facing Changes
Breaking change.
- `tee` now outputs and sends a string/binary stream for string/binary
input.
- `tee` now outputs and sends the original value for any input other
than lists/ranges.
# Tests + Formatting
Added for new behavior.
# After Submitting
- [ ] release notes: breaking change, command change
Mistakes have been made. I forgot about a bunch of `todo`s in the helper
functions. So, this PR replaces them with proper errors. It also adds
tests for parse-time evaluation, because one `todo` I missed was in a
`run_const` function.
Based on the discussion in #13419.
## Description
Reworks the `decode`/`encode` commands by adding/changing the following
bases:
- `base32`
- `base32hex`
- `hex`
- `new-base64`
The `hex` base is compatible with the previous version of `hex` out of
the box (it only adds more flags). `base64` isn't, so the PR adds a new
version and deprecates the old one.
All commands have `string -> binary` signature for decoding and `string
| binary -> string` signature for encoding. A few `base64` encodings,
which are not a part of the
[RFC4648](https://datatracker.ietf.org/doc/html/rfc4648#section-6), have
been dropped.
## Example usage
```Nushell
~/fork/nushell> "string" | encode base32 | decode base32 | decode
string
```
```Nushell
~/fork/nushell> "ORSXG5A=" | decode base32
# `decode` always returns a binary value
Length: 4 (0x4) bytes | printable whitespace ascii_other non_ascii
00000000: 74 65 73 74 test
```
## User-Facing Changes
- New commands: `encode/decode base32/base32hex`.
- `encode hex` gets a `--lower` flag.
- `encode/decode base64` deprecated in favor of `encode/decode
new-base64`.
# Description
The meaning of the word usage is specific to describing how a command or
function is *used* and not a synonym for a general description. Usage can
be used to describe the SYNOPSIS or EXAMPLES sections of a man page
where the permitted argument combinations are shown or example *uses*
are given.
Let's not confuse people and call it what it is: a description.
Our `help` command already creates its own *Usage* section based on the
available arguments and doesn't refer to the description with usage.
# User-Facing Changes
`help commands` and `scope commands` will now use `description` or
`extra_description`
`usage` -> `description`
`extra_usage` -> `extra_description`
Breaking change in the plugin protocol:
In the signature record communicated with the engine.
`usage` -> `description`
`extra_usage` -> `extra_description`
The same rename also takes place for the methods on
`SimplePluginCommand` and `PluginCommand`
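For example, the renamed field is visible directly in the command scope (a quick sketch):
```nushell
# the former `usage` field now appears as `description`
scope commands | select name description | first 3
```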
# Tests + Formatting
- Updated plugin protocol specific changes
# After Submitting
- [ ] update plugin protocol doc
# Description
The previous behaviour of `into record` on lists was to create a new
record with each list index as the key. This was not very useful for
creating meaningful records, though, and most people would end up using
commands like `headers` or `transpose` to turn a list of keys and values
into a record.
This PR changes that instead to do what I think the most ergonomic thing
is, and instead:
- A list of records is merged into one record.
- A list of pairs (two element lists) is folded into a record with the
first element of each pair being the key, and the second being the
value.
The former is just generally more useful than having to use `reduce`
with `merge` for such a common operation, and the latter is useful
because it means that `$a | zip $b | into record` *just works* in the
way that seems most obvious.
Example:
```nushell
[[foo bar] [baz quux]] | into record # => {foo: bar, baz: quux}
[{foo: bar} {baz: quux}] | into record # => {foo: bar, baz: quux}
[foo baz] | zip [bar quux] | into record # => {foo: bar, baz: quux}
```
The support for range input has been removed, as it would no longer
reflect the treatment of an equivalent list.
The following is equivalent to the old behavior, in case that's desired:
```
0.. | zip [a b c] | into record # => {0: a, 1: b, 2: c}
```
# User-Facing Changes
- `into record` changed as described above (breaking)
- `into record` no longer supports range input (breaking)
# Tests + Formatting
Examples changed to match, everything works. Some usage in stdlib and
`nu_plugin_nu_example` had to be changed.
# After Submitting
- [ ] release notes (commands, breaking change)
# Description
Previously when nushell failed to parse the content type header, it
would emit an error instead of returning the response. Now it will fall
back to `text/plain` (which, in turn, will trigger type detection based
on file extension).
May fix nushell/nushell#11927.
Refs:
https://discord.com/channels/601130461678272522/614593951969574961/1272895236489613366
Supersedes: #13609
# User-Facing Changes
It's now possible to fetch content even if the server returns an invalid
content type header. Users may need to parse the response manually, but
it's still better than not getting the response at all.
# Tests + Formatting
Added a test for the new behaviour.
# After Submitting
# Description
Fixes: #13479
# User-Facing Changes
Given the following setup:
```
cd /tmp
touch src_file.txt
ln -s src_file.txt link1
```
### Before
```
ls -lf link1 | get target.0 # It outputs src_file.txt
```
### After
```
ls -lf link1 | get target.0 # It outputs /tmp/src_file.txt
```
# Tests + Formatting
Added a test for the change
# Description
Something I meant to add a long time ago. We currently don't have a
convenient way to print raw binary data intentionally. You can pipe it
through `cat` to turn it into an unknown stream, or write it to a file
and read it again, but we can't really just e.g. generate msgpack and
write it to stdout without this. For example:
```nushell
[abc def] | to msgpack | print --raw
```
This is useful for nushell scripts that will be piped into something
else. It also means that `nu_plugin_nu_example` probably doesn't need to
do this anymore, but I haven't adjusted it yet:
```nushell
def tell_nushell_encoding [] {
    print -n "\u{0004}json"
}
```
This happens to work because 0x04 is a valid UTF-8 character, but it
wouldn't be possible if it were something above 0x80.
`--raw` also formats other things without `table`, I figured the two
things kind of go together. The output is kind of like `to text`.
Debatable whether that should share the same flag, but it was easier
that way and seemed reasonable.
# User-Facing Changes
- `print` new flag: `--raw`
# Tests + Formatting
Added tests.
# After Submitting
- [ ] release notes (command modified)
This PR closes [Issue
#13482](https://github.com/nushell/nushell/issues/13482)
# Description
This PR makes all of the math functions constant.
# User-Facing Changes
The math commands can now be used in constant expressions.
### Some Examples
```
> const MODE = [3 3 9 12 12 15] | math mode
> $MODE
╭───┬────╮
│ 0 │ 3 │
│ 1 │ 12 │
╰───┴────╯
> const LOG = [16 8 4] | math log 2
> $LOG
╭───┬──────╮
│ 0 │ 4.00 │
│ 1 │ 3.00 │
│ 2 │ 2.00 │
╰───┴──────╯
> const VAR = [1 3 5] | math variance
> $VAR
2.6666666666666665
```
# Tests + Formatting
Tests are added for all of the math commands to test their constant
behavior.
I mostly focused on the actual user experience, not the correctness of
the methods and algorithms.
# After Submitting
I don't think this change requires any additional documentation. Feel
free to correct me on this.
# Description
Attempt to guess the content type of a file when opening with `--raw`
and set it in the pipeline metadata.
<img width="644" alt="Screenshot 2024-08-02 at 11 30 10"
src="https://github.com/user-attachments/assets/071f0967-c4dd-405a-b8c8-f7aa073efa98">
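A quick way to inspect the result (a sketch; the exact guessed value depends on the file):
```nushell
# the guessed content type is attached to the pipeline metadata
open --raw Cargo.toml | metadata
```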
# User-Facing Changes
- Content of files can be directly piped into commands like `http post`
with the content type set appropriately when using `--raw`.
# Description
Part 4 of replacing std::path types with nu_path types added in
https://github.com/nushell/nushell/pull/13115. This PR migrates various
tests throughout the code base.
- **Doccomment style fixes**
- **Forgotten stuff in `nu-pretty-hex`**
- **Don't `for` around an `Option`**
- and more
I think the suggestions here are a net positive; some of the suggestions
moved into #13498 feel somewhat arbitrary. I also raised
https://github.com/rust-lang/rust-clippy/issues/13188, as the nightly
`byte_char_slices` lint would require either a global allow, a ton of
granular allows, or possibly confusing bytestring literals.
# Description
Fixes: #13260
When a user runs a command like this:
```nushell
$env.FOO = " New";
$env.BAZ = " New Err";
do -i {nu -n -c 'nu --testbin echo_env FOO; nu --testbin echo_env_stderr BAZ'} | save -a -r save_test_22/log.txt
```
The `save` command swallows the stderr output of the previous commands.
I think it should still go to `stderr`.
# User-Facing Changes
```nushell
$env.FOO = " New";
$env.BAZ = " New Err";
do -i {nu -n -c 'nu --testbin echo_env FOO; nu --testbin echo_env_stderr BAZ'} | save -a -r save_test_22/log.txt
```
The command will output ` New Err` to stderr.
# Tests + Formatting
Added 2 cases.
- **Suggested default impl for the new `*Stack`s**
- **Change a hashmap to make clippy happy**
- **Clone from fix**
- **Fix conditional unused in test**
- then **Bump rust toolchain**
# Description
This makes assignment operations and `const` behave the same way `let`
and `mut` do, absorbing the rest of the pipeline.
Changes the lexer to be able to recognize assignment operators as a
separate token, and then makes the lite parser continue to push spans
into the same command regardless of any redirections or pipes if an
assignment operator is encountered. Because the pipeline is no longer
split up by the lite parser at this point, it's trivial to just parse
the right hand side as if it were a subexpression not contained within
parentheses.
# User-Facing Changes
Big breaking change. These are all now possible:
```nushell
const path = 'a' | path join 'b'
mut x = 2
$x = random int
$x = [1 2 3] | math sum
$env.FOO = random chars
```
In the past, these would have led to (an attempt at) bare word string
parsing. So while `$env.FOO = bar` would have previously set the
environment variable `FOO` to the string `"bar"`, it now tries to run
the command named `bar`, hence the major breaking change.
However, this is desirable because it is very consistent - if you see
the `=`, you can just assume it absorbs everything else to the right of
it.
# Tests + Formatting
Added tests for the new behaviour. Adjusted some existing tests that
depended on the right hand side of assignments being parsed as
barewords.
# After Submitting
- [ ] release notes (breaking change!)
# Description
Closes #12083, closes #12084
# User-Facing Changes
It's a breaking change because we have switched the positions of
`<initial>` and `<closure>`; after the change, the initial value is
optional. So it's possible to do something like this:
```nushell
> let f = {|fib = [0, 1]| {out: $fib.0, next: [$fib.1, ($fib.0 + $fib.1)]} }
> generate $f | first 5
╭───┬───╮
│ 0 │ 0 │
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 2 │
│ 4 │ 3 │
╰───┴───╯
```
It will also raise an error if the user doesn't give an initial value
and the closure doesn't have a default parameter.
```nushell
❯ let f = {|fib| {out: $fib.0, next: [$fib.1, ($fib.0 + $fib.1)]} }
❯ generate $f
Error: × The initial value is missing
╭─[entry #5:1:1]
1 │ generate $f
· ────┬───
· ╰── Missing intial value
╰────
help: Provide <initial> value in generate, or assigning default value to closure parameter
```
# Tests + Formatting
Added some test cases.
---------
Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
# Description
Following from #13377, this PR refactors the related `window` command.
In the case where `window` should behave exactly like `chunks`, it now
reuses the same code that `chunks` does. Otherwise, the other cases have
been rewritten and have resulted in a performance increase:
| window size | stride | old time (ms) | new time (ms) |
| -----------:| ------:| -------------:| -------------:|
| 20 | 1 | 757 | 722 |
| 2 | 1 | 364 | 333 |
| 1 | 1 | 343 | 293 |
| 20 | 20 | 90 | 63 |
| 2 | 2 | 215 | 175 |
| 20 | 30 | 74 | 60 |
| 2 | 4 | 141 | 110 |
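For reference, a couple of illustrative calls (the user-facing behavior itself is unchanged, apart from what's noted below):
```nushell
[1 2 3 4 5 6] | window 3              # => [[1 2 3] [2 3 4] [3 4 5] [4 5 6]]
[1 2 3 4 5 6] | window 2 --stride 3   # => [[1 2] [4 5]]
```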
# User-Facing Changes
`window` will now error if the window size or stride is 0, which is
technically a breaking change.
# Description
This grew quite a bit beyond its original scope, but I've tried to make
`$in` a bit more consistent and easier to work with.
Instead of the parser generating calls to `collect` and creating
closures, this adds `Expr::Collect` which just evaluates in the same
scope and doesn't require any closure.
When `$in` is detected in an expression, it is replaced with a new
variable (also called `$in`) and wrapped in `Expr::Collect`. During
eval, this expression is evaluated directly, with the input and with
that new variable set to the collected value.
Other than being faster and less prone to gotchas, it also makes it
possible to typecheck the output of an expression containing `$in`,
which is nice. This is a breaking change though, because of the lack of
the closure and because now typechecking will actually happen. Also, I
haven't attempted to typecheck the input yet.
The IR generated now just looks like this:
```gas
collect %in
clone %tmp, %in
store-variable $in, %tmp
# %out <- ...expression... <- %in
drop-variable $in
```
(where `$in` is the local variable created for this collection, and not
`IN_VARIABLE_ID`)
which is a lot better than having to create a closure and call `collect
--keep-env`, dealing with all of the capture gathering and allocation
that entails. Ideally we can also detect whether that input is actually
needed, so maybe we don't have to clone, but I haven't tried to do that
yet. Theoretically now that the variable is a unique one every time, it
should be possible to give it a type - I just don't know how to
determine that yet.
On top of that, I've also reworked how `$in` works in pipeline-initial
position. Previously, it was a little bit inconsistent. For example,
this worked:
```nushell
> 3 | do { let x = $in; let y = $in; print $x $y }
3
3
```
However, this causes a runtime variable not found error on the second
`$in`:
```nushell
> def foo [] { let x = $in; let y = $in; print $x $y }; 3 | foo
Error: nu:🐚:variable_not_found
× Variable not found
╭─[entry #115:1:35]
1 │ def foo [] { let x = $in; let y = $in; print $x $y }; 3 | foo
· ─┬─
· ╰── variable not found
╰────
```
I've fixed this by making the first element `$in` detection *always*
happen at the block level, so if you use `$in` in pipeline-initial
position anywhere in a block, it will collect with an implicit
subexpression around the whole thing, and you can then use that `$in`
more than once. In doing this I also rewrote `parse_pipeline()` and
hopefully it's a bit more straightforward and possibly more efficient
too now.
Finally, I've tried to make `let` and `mut` a lot more straightforward
with how they handle the rest of the pipeline, and using a redirection
with `let`/`mut` now does what you'd expect if you assume that they
consume the whole pipeline - the redirection is just processed as
normal. These both work now:
```nushell
let x = ^foo err> err.txt
let y = ^foo out+err>| str length
```
It was previously possible to accomplish this with a subexpression, but
it just seemed like a weird gotcha that you couldn't do it. Intuitively,
`let` and `mut` just seem to take the whole line.
- closes #13137
# User-Facing Changes
- `$in` will behave more consistently with blocks and closures, since
the entire block is now just wrapped to handle it if it appears in the
first pipeline element
- `$in` no longer creates a closure, so what can be done within an
expression containing `$in` is less restrictive
- Expressions containing `$in` are now type checked, rather than just
resulting in `any`. However, `$in` itself is still `any`, so this isn't
quite perfect yet
- Redirections are now allowed in `let` and `mut` and behave pretty much
how you'd expect
# Tests + Formatting
Added tests to cover the new behaviour.
# After Submitting
- [ ] release notes (definitely breaking change)
# Description
Replaces the `dirs_next` family of crates with `dirs`. `dirs_next` was
born when the `dirs` crates were abandoned three years ago, but they're
being maintained again and most projects depend on `dirs` nowadays.
`dirs_next` has been abandoned since.
This came up while working on
https://github.com/nushell/nushell/pull/13382.
# User-Facing Changes
None.
# Tests + Formatting
Tests and formatter have been run.
# After Submitting
# Description
Before this change `default` would dive into lists and replace `null`
elements with the default value.
Therefore there was no easy way to process `null|list<null|string>`
input and receive `list<null|string>`
(in other words, to apply the default on the top level only).
However it's trivially easy to apply default values to list elements
with the `each` command:
```nushell
[null, "a", null] | each { default "b" }
```
So there's no need to have this behavior in `default` command.
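A before/after contrast of the top-level-only behavior described above (a sketch):
```nushell
# before: [b, a, b]; after: the list passes through unchanged
[null "a" null] | default "b"   # => [null, a, null]
# a top-level null is still replaced
null | default "b"              # => b
```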
# User-Facing Changes
* `default` no longer dives into lists to replace `null` elements with
the default value.
# Tests + Formatting
Added a couple of tests for the new (and old) behavior.
# After Submitting
* Update docs.
# Description
The name of the `group` command is a little unclear/ambiguous.
Every time I look at it, I think of `group-by`. I think `chunks` more
clearly conveys what the `group` command does. Namely, it divides the
input list into chunks of a certain size. For example,
[`slice::chunks`](https://doc.rust-lang.org/std/primitive.slice.html#method.chunks)
has the same name. So, this PR adds a new `chunks` command to replace
the now deprecated `group` command.
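A minimal usage sketch of the new command:
```nushell
[1 2 3 4 5] | chunks 2   # => [[1, 2], [3, 4], [5]]
```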
The `chunks` command is a refactored version of `group`. As such, there
is a small performance improvement:
```nushell
# $data is a very large list
> bench { $data | chunks 2 } --rounds 30 | get mean
474ms 921µs 190ns
# deprecation warning was disabled here for fairness
> bench { $data | group 2 } --rounds 30 | get mean
592ms 702µs 440ns
> bench { $data | chunks 200 } --rounds 30 | get mean
374ms 188µs 318ns
> bench { $data | group 200 } --rounds 30 | get mean
481ms 264µs 869ns
> bench { $data | chunks 1 } --rounds 30 | get mean
642ms 574µs 42ns
> bench { $data | group 1 } --rounds 30 | get mean
981ms 602µs 513ns
```
# User-Facing Changes
- `group` command has been deprecated in favor of new `chunks` command.
- `chunks` errors when given a chunk size of `0` whereas `group` returns
chunks with one element.
# Tests + Formatting
Added tests for `chunks`, since `group` did not have any tests.
# After Submitting
Update book if necessary.
# Description
Fixes #13359
In an attempt to generate names for flat columns resulting from nested
accesses, #3016 generated new column names on nested selection, out of
convenience, by composing the cell path as a string (including `.`) and
then simply replacing all `.` with `_`. As we permit `.` in column names
as long as they are quoted, this surprisingly alters `select`ed columns.
# User-Facing Changes
New columns generated by selection with nested cell paths will for now
be named with a string containing the keys separated by `.` instead of
`_`. We may want to reconsider the semantics for nested access.
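A small sketch of the renaming (the records are illustrative):
```nushell
# the flattened column is now named "a.b" instead of "a_b"
[{a: {b: 1}} {a: {b: 2}}] | select a.b
```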
# Tests + Formatting
- Alter test to breaking change on nested `select`
Somehow this logic was missed on my end (I mean, I was not even thinking
about it in the original patch 😄).
Please recheck. Added a regression test too.
Closes #13336
cc: @fdncred
Good catch, sorry about that!
I've added a test to catch regressions just in case.
Closes #13319
cc: @fdncred
# Description
Bug fix: `PipelineData::check_external_failed()` was not preserving the
original `type_` and `known_size` attributes of the stream passed in for
streams that come from children, so `external-command | into binary` did
not work properly and always ended up still being of unknown type.
# User-Facing Changes
The following test case now works as expected:
```nushell
> head -c 2 /dev/urandom | into binary
# Expected: pretty hex dump of binary
# Previous behavior: just raw binary in the terminal
```
# Tests + Formatting
Added a test to cover this to `into binary`
# Description
Provides the ability to use http commands as part of a pipeline.
Additionally, this pull requests extends the pipeline metadata to add a
content_type field. The content_type metadata field allows commands such
as `to json` to set the metadata in the pipeline allowing the http
commands to use it when making requests.
This pull request also introduces the ability to directly stream http
requests from streaming pipelines.
One other small change is that Content-Type will always be set if it is
passed in to the http commands, either indirectly or through the content
type flag. Previously it was not preserved for requests that were not
of type json or form data.
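For example (a sketch; the URL is illustrative):
```nushell
# `to json` sets content_type in the metadata, and `http post` uses it for the request
{name: nushell} | to json | http post https://example.com/api
```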
# User-Facing Changes
* `http post`, `http put`, `http patch`, `http delete` can be used as
part of a pipeline
* `to text`, `to json`, `from json` all set the content_type metadata
field, and the http commands will utilize it when making requests.
fixes #13245
# Description
In addition to addressing #13245, this PR also updates one of the doc
examples for the `find` command to document that non-regex mode is case
insensitive, which may surprise some users.
# User-Facing Changes
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
---------
Co-authored-by: Ben Yang <ben@ya.ng>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
# Description
Fixes #11678
The sub-commands of the `from` command (`from {csv, tsv, ssv}`) name
columns starting from index 1.
This behaviour is inconsistent with other commands such as `detect
columns`.
This PR makes the subcommand indexes 0-based.
# User-Facing Changes
The subcommands (`from {csv, tsv, ssv}`) return a table with the columns
starting at index 0 if no header data is passed.
```
~/Development/nushell> "foo bar baz" | from ssv -n -m 1
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo │ bar │ baz │
╰───┴─────────┴─────────┴─────────╯
~/Development/nushell> "foo,bar,baz" | from csv -n
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo │ bar │ baz │
╰───┴─────────┴─────────┴─────────╯
~/Development/nushell> "foo\tbar\tbaz" | from tsv -n
╭───┬─────────┬─────────┬─────────╮
│ # │ column0 │ column1 │ column2 │
├───┼─────────┼─────────┼─────────┤
│ 0 │ foo │ bar │ baz │
╰───┴─────────┴─────────┴─────────╯
```
# Tests + Formatting
When I ran tests, `commands::touch::change_file_mtime_to_reference`
failed with the following error.
The error also occurs in the master branch, so it's probably unrelated
to these changes.
(maybe a problem with my dev environment)
```
$ toolkit check pr
~~~~~~~~
failures:
---- commands::touch::change_file_mtime_to_reference stdout ----
=== stderr
thread 'commands::touch::change_file_mtime_to_reference' panicked at crates/nu-command/tests/commands/touch.rs:298:9:
assertion `left == right` failed
left: SystemTime { tv_sec: 1719149697, tv_nsec: 57576929 }
right: SystemTime { tv_sec: 1719149697, tv_nsec: 78219489 }
failures:
commands::touch::change_file_mtime_to_reference
test result: FAILED. 1533 passed; 1 failed; 32 ignored; 0 measured; 0 filtered out; finished in 10.87s
error: test failed, to rerun pass `-p nu-command --test main`
```
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🔴 `toolkit test`
- ⚫ `toolkit test stdlib`
# After Submitting
nothing
# Description
@hustcer reported that slashes were disappearing from external args
since #13089:
```
$> ossutil ls oss://abc/b/c
Error: invalid cloud url: "oss:/abc/b/c", please make sure the url starts with: "oss://"
$> ossutil ls 'oss://abc/b/c'
Error: oss: service returned error: StatusCode=403, ErrorCode=UserDisable, ErrorMessage="UserDisable", RequestId=66791EDEFE87B73537120838, Ec=0003-00000801, Bucket=abc, Object=
```
I narrowed this down to the ndots handling, since that does path parsing
and path reconstruction in every case. I decided to change that so that
it only activates if the string contains at least `...`, since that
would be the minimum trigger for ndots, and also to not activate it if
the string contains `://`, since it's probably undesirable for a URL.
Kind of a hack, but I'm not really sure how else we decide whether
someone wants ndots or not.
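A quick illustration of the two rules (sketch):
```nushell
^echo .../foo          # contains `...`, so ndots expansion still applies (../../foo)
^echo oss://abc/b/c    # contains `://`, so the string is passed through untouched
```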
# User-Facing Changes
- bare strings not containing ndots are not modified
- bare strings containing `://` are not modified
# Tests + Formatting
Added tests to prevent regression.
# Description
* As discussed in the comments in #11954, this suppresses the index
column on `cal` output. It does that by running `table -i false` on the
results by default.
* Added new `--as-table/-t` flag to revert to the old behavior and
output the calendar as structured data
* Updated existing tests to use `--as-table`
* Added new tests against the string output
* Updated `length` test which also used `cal`
* Added new example for `--as-table`, with result
# User-Facing Changes
## Breaking change
The *default* `cal` output has changed from a `list` to a `string`. To
obtain structured data from `cal`, use the new `--as-table/-t` flag.
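For example (sketch):
```nushell
cal              # plain text calendar, no index column (new default)
cal --as-table   # structured table output, as before
```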
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
We've had a lot of different issues and PRs related to arg handling with
externals since the rewrite of `run-external` in #12921:
- #12950
- #12955
- #13000
- #13001
- #13021
- #13027
- #13028
- #13073
Many of these are caused by the argument handling of external calls and
`run-external` being very special and involving the parser handing
quoted strings over to `run-external` so that it knows whether to expand
tildes and globs and so on. This is really unusual and also makes it
harder to use `run-external`, and also harder to understand it (and
probably is part of the reason why it was rewritten in the first place).
This PR moves a lot more of that work over to the parser, so that by the
time `run-external` gets it, it's dealing with much more normal Nushell
values. In particular:
- Unquoted strings are handled as globs with no expand
- The unescaped-but-quoted handling of strings was removed, and the
parser constructs normal looking strings instead, removing internal
quotes so that `run-external` doesn't have to do it
- Bare word interpolation is now supported and expansion is done in this
case
- Expressions typed as `Glob` containing `Expr::StringInterpolation` now
produce `Value::Glob` instead, with the quoted status from the expr
passed through so we know if it was a bare word
- Bare word interpolation for values typed as `glob` now possible, but
not implemented
- Because expansion is now triggered by `Value::Glob(_, false)` instead
of looking at the expr, externals now support glob types
# User-Facing Changes
- Bare word interpolation works for external command options, and
otherwise embedded in other strings:
```nushell
^echo --foo=(2 + 2) # prints --foo=4
^echo -foo=$"(2 + 2)" # prints -foo=4
^echo foo="(2 + 2)" # prints (no interpolation!) foo=(2 + 2)
^echo foo,(2 + 2),bar # prints foo,4,bar
```
- Bare word interpolation expands for external command head/args:
```nushell
let name = "exa"
~/.cargo/bin/($name) # this works, and expands the tilde
^$"~/.cargo/bin/($name)" # this doesn't expand the tilde
^echo ~/($name)/* # this glob is expanded
^echo $"~/($name)/*" # this isn't expanded
```
- Ndots are now supported for the head of an external command
(`^.../foo` works)
- Glob values are now supported for head/args of an external command,
and expanded appropriately:
```nushell
^("~/.cargo/bin/exa" | into glob) # the tilde is expanded
^echo ("*.txt" | into glob) # this glob is expanded
```
- `run-external` now works more like any other command, without
expecting a special call convention
for its args:
```nushell
run-external echo "'foo'"
# before PR: 'foo'
# after PR: foo
run-external echo "*.txt"
# before PR: (glob is expanded)
# after PR: *.txt
```
# Tests + Formatting
Lots of tests added and cleaned up. Some tests that weren't active on
Windows changed to use `nu --testbin cococo` so that they can work.
Added a test for Linux only to make sure tilde expansion of commands
works, because changing `HOME` there causes `~` to reliably change.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] release notes: make sure to mention the new syntaxes that are
supported
# Description
This fixes issues with trying to run the tests with a terminal that is
small enough to cause errors to wrap around, or in cases where the test
environment might produce strings that are reasonably expected to wrap
around anyway. "Fancy" errors are too fancy for tests to work
predictably 😉
cc @abusch
# User-Facing Changes
- Added `--error-style` option for use with `--commands` (like
`--table-mode`)
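For example, assuming `plain` as the non-fancy value (a sketch):
```nushell
nu --commands 'error make {msg: "oh no"}' --error-style plain
```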
# Tests + Formatting
Surprisingly, all of the tests pass, including in small windows! I only
had to make one change to a test for `error make` which was looking for
the box drawing characters miette uses to determine whether the span
label was showing up - but the plain error style output is even better
and easier to match on, so this test is actually more specific now.
# Description
Fixes: #13105, #13077
This pr makes `str substring`, `bytes at` work better with negative
index.
And it also fixes the false range semantic on `detect columns -c` in
some cases.
# User-Facing Changes
For `str substring` and `bytes at`, it will no longer return an error if
the start index is larger than the end index. It makes sense to return an
empty string or empty bytes directly.
### Before
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
Error: nu:🐚:type_mismatch
× Type mismatch.
╭─[entry #23:1:10]
1 │ ("aaa" | str substring 2..-3) == ""
· ──────┬──────
· ╰── End must be greater than or equal to Start
2 │ true
╰────
# bytes at
❯ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
Error: nu:🐚:type_mismatch
× Type mismatch.
╭─[entry #27:1:25]
1 │ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
· ────┬───
· ╰── End must be greater than or equal to Start
╰────
```
### After
```nushell
# str substring
❯ ("aaa" | str substring 2..-3) == ""
true
# bytes at
❯ ("aaa" | encode utf-8 | bytes at 2..-3) == ("" | encode utf-8)
true
```
# Tests + Formatting
Added some tests, adjust existing tests
# Description
Removes the `which-support` cargo feature and makes all of its
feature-gated code enabled by default in all builds. I'm not sure why
this one command is gated behind a feature. It seems to be a relic of
older code where we had features for what seems like every command.
# Description
This PR updates the uutils/coreutils crates to the latest released
version.
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
Part of https://github.com/nushell/nushell/issues/12963, step 2.
This PR refactors the use of `expression.span` to
`expression.span_id` via a new helper `Expression::span()`. A new
`GetSpan` is added to abstract getting the span from both `EngineState`
and `StateWorkingSet`.
# User-Facing Changes
`format pattern` loses the ability to use variables in the pattern,
e.g., `... | format pattern 'value of {$it.name} is {$it.value}'`. This
is because the command did a custom parse-eval cycle, creating spans
that are not merged into the main engine state. We could clone the
engine state, add Clone trait to StateDelta and merge the cloned delta
to the cloned state, but IMO there is not much value from having this
ability, since we have string interpolation nowadays: `... | $"value of
($in.name) is ($in.value)"`.
# Tests + Formatting
# After Submitting
# Description
Enable the `preserve_order` feature of the `toml` crate to preserve the
ordering of elements when converting from/to toml.
Additionally, use `to_string_pretty()` instead of `to_string()` in `to
toml`. This displays arrays on multiple lines instead of one big single
line. I'm not sure if this one is a good idea or not... Happy to remove
this from this PR if it's not.
# User-Facing Changes
The order of elements will be different when using `from toml`. The
formatting of arrays will also be different when using `to toml`. For
example:
- before
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ bar │ 2 │
│ doo │ 3 │
│ foo │ 1 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = ["a", "b", "c", "d"]
```
- after
```
❯ "foo=1\nbar=2\ndoo=3" | from toml
╭─────┬───╮
│ foo │ 1 │
│ bar │ 2 │
│ doo │ 3 │
╰─────┴───╯
❯ {a: [a b c d]} | to toml
a = [
"a",
"b",
"c",
"d",
]
```
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🔴 `toolkit test`
- ⚫ `toolkit test stdlib`
# After Submitting
# Description
Fixes https://github.com/nushell/nushell/issues/12968. After applying
this patch, strings that include an explicit plus sign can be used with
the `into filesize` command.
# User-Facing Changes
AS-IS (before fixing)
```
$ "+8 KiB" | into filesize
Error: nu:🐚:cant_convert
× Can't convert to int.
╭─[entry #31:1:1]
1 │ "+8 KiB" | into filesize
· ────┬───
· ╰── can't convert string to int
╰────
```
TO-BE (after fixing)
```
$ "+8KiB" | into filesize
8.0 KiB
```
# Tests + Formatting
Added a test
# After Submitting
This PR fixes the `path type` command so that it resolves relative paths
using PWD from the engine state.
As a bonus, it also fixes the issue of `path type` returning an empty
string instead of an error when it fails.
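A small sketch of the new behavior (file names are illustrative):
```nushell
cd /tmp
touch foo.txt
"foo.txt" | path type   # resolved against PWD from the engine state => file
```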
# Description
Makes the `from json --objects` command produce a stream, and read
lazily from an input stream to produce its output.
Also added a helper, `PipelineData::get_type()`, to make it easier to
construct a wrong type error message when matching on `PipelineData`. I
expect checking `PipelineData` for either a string value or an `Unknown`
or `String` typed `ByteStream` will be very, very common. I would have
liked to have a helper that just returns a readable stream from either,
but that would either be a bespoke enum or a `Box<dyn BufRead>`, which
feels like it wouldn't be so great for performance. So instead, taking
the approach I did here is probably better - having a function that
accepts the `impl BufRead` and matching to use it.
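For example (a sketch; the file name is illustrative):
```nushell
# objects are parsed lazily from the stream, one per line
open --raw events.jsonl | from json --objects | first 3
```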
# User-Facing Changes
- `from json --objects` no longer collects its input, and can be used
for large datasets or streams that produce values over time.
# Tests + Formatting
All passing.
# After Submitting
- [ ] release notes
---------
Co-authored-by: Ian Manske <ian.manske@pm.me>
# Description
Fixes: https://github.com/nushell/nushell/issues/7761
It's still unclear whether we want to change the range semantics
themselves, but it's good to keep range semantics consistent between
nushell commands.
# User-Facing Changes
### Before
```nushell
❯ "abc" | str substring 1..=2
b
```
### After
```nushell
❯ "abc" | str substring 1..=2
bc
```
# Tests + Formatting
Adjust tests to fit new behavior
# Description
Removes the old `nu-cmd-dataframe` crate in favor of the polars plugin.
As such, this PR also removes the `dataframe` feature, related CI, and
full releases of nushell.