* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize to two versions; Add TODOs
One that takes `relative_to` and one that doesn't.
More TODO notes.
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
Trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths but for now
it at least covers basic cases.
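For illustration, a hypothetical autocd session (the `src` directory name is made up):
```
# autocd: typing a bare directory path changes into it; a trailing slash now also works
./src/
```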
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
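A hypothetical sketch of the expansions described above, using `path expand` (exact results depend on the filesystem):
```
# `~` expands to the home directory; `...` (ndots) expands to `../..`
echo '~/projects' | path expand
echo '.../src' | path expand
```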
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
* chore: Replace surf with reqwest
Removes a lot of older, duplicated versions of some dependencies
(roughly 90 dependencies removed in total)
* chore: Remove syn 0.11
* chore: Remove unnecessary features from ptree
Removes some more duplicate dependencies
* cargo update
* Ensure we run the fetch and post plugins on the tokio runtime
* Fix clippy warning
* fix: Github requires a user agent on requests
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Allow different names for ...rest
* Resolves #3945
* This change requires an explicit name for the rest argument in `WholeStreamCommand`,
which is why there are so many changed files.
* Remove redundant clone
* Add tests
* Allow environment variables to be hidden
This change allows environment variables in Nushell to have a value of
`Nothing`, which can be set by the user by passing `$nothing` to
`let-env` and friends.
Environment variables with a value of Nothing behave as if they are not
set at all. This allows a user to shadow the value of an environment
variable in a parent scope, effectively removing it from their current
scope. This was not possible before, because a scope can not affect its
parent scopes.
This is a workaround for issues like #3920.
Additionally, this allows a user to simultaneously set, change and
remove multiple environment variables via `load-env`. Any environment
variables set to $nothing will be hidden and thus act as if they are
removed. This simplifies working with virtual environments, which rely
on setting multiple environment variables, including PATH, to specific
values, and removing or changing them on deactivation.
One surprising behavior is that an environment variable set to $nothing
will act as if it is not set when querying it (via $nu.env.X), but it is
still possible to remove it entirely via `unlet-env`. If the same
environment variable is present in the parent scope, the value in the
parent scope will be visible to the user. This might be surprising
behavior to users who are not familiar with the implementation details.
An additional corner case is that the shorthand form of `with-env` does
not work with this feature. Using `X=$nothing` will set $nu.env.X to the
string "$nothing". The long-form works as expected: `with-env [X
$nothing] {...}`.
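A hypothetical session based on the behavior described above, using the commands referenced in this description:
```
let-env FOO = $nothing                    # FOO now acts as if it is not set in this scope
with-env [FOO $nothing] { echo $nu.env }  # long form of with-env hides FOO inside the block
unlet-env FOO                             # removes the (hidden) variable entirely
```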
* Remove unused import
* Allow all primitives to be converted to strings
Given that we can write nu scripts, splitting a growing codebase into many smaller nu scripts becomes necessary.
In general, when we work with paths and files we seem to face quite a few difficulties. Here we tackle one of them: sourcing
files that themselves source other nu files, and so forth. The current working directory becomes important here, and sourcing scripts
from a different directory will not work, mostly because we expand the path relative to the current working directory and parse the files
when a source command call is made.
For the moment, we introduce a `lib_dirs` configuration value and, unfortunately, a new dependency in `nu-parser` (`nu-data`) to get
a handle on the configuration file and retrieve it. This should give clues and ideas as the new parser engine evolves (e.g., introduce a way to also know paths).
With this PR we can do the following:
Let's assume we want to write a nu library called `my_library`. We will have the code in a directory called `project`. The file structure will look like this:
```
project/my_library.nu
project/my_library/hello.nu
project/my_library/name.nu
```
This "pattern" works well, that is, when creating a library have a directory named `my_library` and next to it a `my_library.nu` file. Filling them like this:
```
source my_library/hello.nu
source my_library/name.nu
```
```
def hello [] {
  "hello world"
}
```
```
def name [] {
  "Nu"
}
```
Assuming this `project` directory is stored at `/path/to/lib/project`, we can do:
```
config set lib_dirs ['/path/to/lib/project']
```
Given we have this `lib_dirs` configuration value, we can be anywhere while using Nu and do the following:
```
source my_library.nu
echo (hello) (name)
```
* Allow sourcing paths with emojis
* Add source command tests for emoji paths
* Fmt
* Disable source tests on Windows with illegal paths
* Test sourcing also ASCII and single-quoted paths
We introduce `zip` here and allow it to work with regular lists (tables with no columns) as well as symmetric tables. Say we have two lists and wish to zip them, like so:
```
[0 2 4 6 8] | zip {
[1 3 5 7 9]
} | flatten
───┬───
0 │ 0
1 │ 1
2 │ 2
3 │ 3
4 │ 4
5 │ 5
6 │ 6
7 │ 7
8 │ 8
9 │ 9
───┴───
```
In the case of two tables instead:
```
[[symbol]; ['('] ['['] ['{']] | zip {
[[symbol]; [')'] [']'] ['}']]
} | each {
get symbol | $'($in.0)nushell($in.1)'
}
───┬───────────
0 │ (nushell)
1 │ [nushell]
2 │ {nushell}
───┴───────────
```
* Mitigate history file bug in Rustyline
Rustyline's duplicate ignoring code has a bug that can cause data loss and
history file corruption. Testing seems to indicate that disabling this behavior
and allowing duplicates will prevent the bug from showing up. Many people have
complained about this issue; I think it is worthwhile to fix the bug at the cost
of permitting duplicate history entries.
Upstream bug: https://github.com/kkawakam/rustyline/issues/559
* Increase Rustyline history file limit
Rustyline will only store 100 history items by default. This is quite a small
limit for a shell that people use as a daily driver. Especially when the
deduplication code is removed, we will hit that limit quickly and start to lose
history. This commit bumps the limit up to 10k. We can discuss if this is an
inappropriate limit or if we should allow users to specify this setting in their
nushell config file instead.
We very well support `nth 0 2 3 --skip 1 4` to select particular rows and skip some using a flag. However, in practice we deal with tables (whether they come from parsing or loading files and whatnot) where we don't know the size of the table up front (and every time we have these, they may have different sizes). There are also other use cases where we use intermediate tables during processing and wish to always drop certain rows and **keep the rest**.
Usage:
```
... | drop nth 0
... | drop nth 3 8
```
The nu-serde crate allows us to become much more generic with respect to how we
convert output to `nu-protocol::Value`s. This allows us to remove a lot of the
special-case code that we wrote for deserializing JSON values.
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Add the nu-serde crate
nu-serde is a crate that can be used to turn a value implementing
`serde::Serialize` into a `nu-protocol::Value`. This has the potential to
significantly simplify plugin authorship.
This crate was the previously independent
[serde-nu](https://github.com/lily-mara/serde-nu) but the nushell maintainers
expressed an interest in having it added to the mainline nushell repository.
* fixup! Add the nu-serde crate
Some environment variables, such as `RUST_LOG`, have values that include equals signs. Nushell
should support this in the shorthand environment variable syntax so that
developers using these variables can control them easily. We accomplish this by
swapping `std::str::split` for `std::str::splitn`, which ensures that we only
consider the first equals sign in the string instead of all of them, which we
did previously.
Closes #3867
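A hypothetical example of the shorthand syntax this enables (`cargo` is just a stand-in for any external command):
```
# only the first `=` separates the name from the value,
# so values that themselves contain `=` pass through intact
RUST_LOG=nu=debug cargo build
```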
* Adding parse fix for power operator error on negative integers and decimals
* Adding correct formatting
* Changed is negative check to follow conventions
* Adding tests
* Added fix for the negatives and added tests
* Removed comments
* nuframe in its own type in UntaggedValue
* Removed eager dataframe from enum
* Dataframe created from list of values
* Corrected order in dataframe columns
* Returned tag from stream collection
* Removed series from dataframe commands
* Arithmetic operators
* forced push
* forced push
* Replace all command
* String commands
* appending operations with dfs
* Testing suite for dataframes
* Unit test for dataframe commands
* improved equality for dataframes
* moving all dataframe operations to protocol
* objects in dataframes
* Removed cloning when converting to row
* Refactor Hash code to simplify md5 and sha256 implementations
Md5 and Sha256 (and other future digests) require less boilerplate code
now. Error reporting includes the name of the hash again.
* Add missing hash sha256 test
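A usage sketch of the two commands that now share the Digest-based implementation:
```
# both digests go through the same code path after the refactor
echo 'abc' | hash md5
echo 'abc' | hash sha256
```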
This brings the features used by the `nu_plugin_fetch` and
`nu_plugin_post` in line and drops the default-features, reducing
the number of pulled-in dependencies and avoiding a second round of
compilations.
Retry of #3777, but with different features; the post and fetch plugins were tested.
Signed-off-by: Daniel Egger <daniel@eggers-club.de>
Select used to ignore absent values, resulting in "squashing", where
columns from multiple rows could be squashed into a single row.
This fixes the problem by inserting null/nothing for absent values, thus
preserving the structure of rows. This makes sure that all selected
columns are present in each row.
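A hypothetical example (the input JSON is made up): the second record has no `b`, which now shows up as nothing instead of the rows' columns being squashed together:
```
echo '[{"a": 1, "b": 2}, {"a": 3}]' | from json | select a b
```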
Hashers now use the Rust Crypto `Digest` trait, which makes it trivial to
implement additional hash functions.
The original `md5` crate does not implement the `Digest` trait and was
replaced by the `md-5` crate, which does. Sha256 uses the already included
`sha2` crate.
* Read from standard input in `rm`
With this change, rm falls back to reading from the standard input if no
arguments are supplied. This leads to more intuitive pipes as seen in
https://github.com/nushell/nushell/issues/2824
`ls | get name | sort-by | first 20 | each {rm $it}` becomes
`ls | get name | sort-by | first 20 | rm`
* [Fix] Run cargo fmt, make files a rest parameter, and fix cargo build warnings
* [Fix] Fix clippy suggestions
* Fix swapped PATH env var separators
* Support pathvar to manipulate other vars than PATH
* Add tests for pathvar and its subcommands
* Adjust pathvar tests to comply with env quirks
* Make pathvar tests work on non-Windows as well
* Compact the comments in pathvar tests
* Fix last failing test
Co-authored-by: Jakub Žádník <jakub.zadnik@tuni.fi>
This allows converting strings to filepaths without having to use
a `path expand` roundtrip.
Filepaths are taken as-is without any validation/conversion.
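A hypothetical usage sketch, assuming the conversion is exposed as an `into path` subcommand (the command name here is an assumption):
```
# the string is taken as-is; no expansion or validation happens
echo '~/notes/todo.txt' | into path
```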
* Fix swapped PATH env var separators
* Support pathvar to manipulate other vars than PATH
* Add tests for pathvar and its subcommands
* Fix PATH env name for Windows
Seems like Windows uses PATH as well.
Co-authored-by: Jakub Žádník <jakub.zadnik@tuni.fi>
This brings the features used by the `nu_plugin_fetch` and
`nu_plugin_post` in line and drops the default-features, reducing
the number of pulled-in dependencies and avoiding a second round of
compilations.
Signed-off-by: Daniel Egger <daniel@eggers-club.de>
* kind of works but not what we really want
* updated `into binary` and `first` to work better together
* attempt to fix wasm build problem
* attempt #2 to fix wasm stuff
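A hypothetical usage sketch of the two commands working together (the file name is made up):
```
# take the first bytes of a file's raw contents
open --raw Cargo.toml | into binary | first 8
```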
This brings with it a reduction in the number of duplicated
dependencies and in the overhead they impose on our build.
The number of crates that need to be built for
`cargo build --features extra` drops by roughly 50.
On Windows, we used the `is-executable` crate, but on Unix we
duplicated the check that it did, with one difference: we also
looked at whether or not it was a symlink.
The `is-executable` crate uses `std::fs::metadata` which follows
symlinks, so this scenario should never occur here, as it will
return the metadata for the target file.
Using the `is-executable` crate on both Unix and Windows lets us
make it non-optional. This lets us remove the `executable-support`
feature. (It is worth noting that this code didn't compile on
Windows when the `executable-support` feature was not specified.)
Right now, there is an alternate code path for `target_arch` being
`wasm32`. This isn't exactly correct as it should probably handle
something different for when the `target_os` is `wasi`.
Nothing used the `ptree` feature or optional dependency within
`nu-command` except to include it within the `version` output. This
may be related to when `nu-cli` also had a `ptree` feature, but
I'm not sure.
That leaves the code within `nu_plugin_tree` as the sole remaining
user of `ptree`, which is already covered by the feature `tree`
and included in the `version` output.
* Typo in command description
* filter name change
* Clean column name
* Clippy error and updated polars version
* Lint correction in file
* CSV Infer schema optional
* Correct float operations
* changes in series castings to allow other types
* Clippy error correction
* Removed lists from command signatures
* Added not command for series
* take command with args
* set with idx command
Seems we do `ctrl` feature checks in both `nu-cli` and `nu-command`. We should find a better way to report the enabled features in the `version` command without using the conditionals (or do it somewhere else).