chrono version update
# Description
upgrade chrono to 0.4.23
# Tests + Formatting
Make sure you've done the following, if applicable:
- Add tests that cover your changes (either in the command examples, the
crate/tests folder, or in the /tests folder)
- Try to think about corner cases and various ways how your changes
could break. Cover those in the tests
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace --features=extra -- -D warnings -D
clippy::unwrap_used -A clippy::needless_collect` to check that you're
using the standard code style
- `cargo test --workspace --features=extra` to check that all tests pass
# After Submitting
* Help us keep the docs up to date: If your PR affects the user
experience of Nushell (adding/removing a command, changing an
input/output type, etc.), make sure the changes are reflected in the
documentation (https://github.com/nushell/nushell.github.io) after the
PR is merged.
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
# Description
This PR is a response to the issues raised in
https://github.com/nushell/nushell/pull/7087. It consists of two
changes:
* `export-env`, when evaluated in `overlay use`, will see the original
environment. Previously, it would see the environment from previous
overlay activation.
* Added a new `--reload` flag that reloads the overlay. Custom
definitions will be kept but the original definitions and environment
will be reloaded.
This enables a pattern where an overlay is supposed to shadow an existing
environment variable, such as `PROMPT_COMMAND`, but `overlay use` would
keep loading the value from the first activation. You can easily test it
by defining a module:
```
module prompt {
    export-env {
        let-env PROMPT_COMMAND = (date now | into string)
    }
}
```
Calling `overlay use prompt` for the first time changes the prompt to
the current time; however, subsequent calls of `overlay use` won't
change the time. That's because overlays, once activated, store their
state so they can be hidden and restored at a later time. To force-reload
the environment, use the new flag: calling `overlay use --reload prompt`
repeatedly now updates the prompt with the current time each time.
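For illustration, a minimal session sketch using the `prompt` module above:
```
overlay use prompt            # first activation: prompt shows the activation time
overlay use prompt            # re-activation restores the stored environment; time unchanged
overlay use --reload prompt   # original environment is reloaded; time updates again
```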
# User-Facing Changes
* When calling `overlay use`, if the module has an `export-env` block,
the block will see the environment as it is _before_ the overlay is
activated. Previously, it was _after_.
* A new `overlay use --reload` flag.
# Tests + Formatting
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A
clippy::needless_collect` to check that you're using the standard code
style
- `cargo test --workspace` to check that all tests pass
# After Submitting
If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
# Description
As the title says: when executing an external command as a sub command,
trailing newlines are now trimmed automatically, like fish shell does.
If the command is executed directly, like `cat tmp`, the result is unchanged.
Fixes: #6816
Fixes: #3980
Note that although nushell works correctly when the output of an external
command is assigned directly to a variable (or used in other places like
string interpolation), it's not friendly to users, who almost always want
to use `str trim` to remove the trailing newline. I think that's why fish
shell does this automatically.
If the PR is accepted, `str trim -r` will no longer be required when writing
scripts that use external commands.
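A rough before/after sketch (the command and variable names are just illustrative):
```
# before: the captured output ends with a newline, so scripts trimmed it manually
let version = (^git --version | str trim -r)

# after: the trailing newline of the external's output is trimmed automatically
let version = (^git --version)
```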
# User-Facing Changes
Before:
<img width="523" alt="img"
src="https://user-images.githubusercontent.com/22256154/202468810-86b04dbb-c147-459a-96a5-e0095eeaab3d.png">
After:
<img width="505" alt="img"
src="https://user-images.githubusercontent.com/22256154/202468599-7b537488-3d6b-458e-9d75-d85780826db0.png">
# Tests + Formatting
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace --features=extra -- -D warnings -D
clippy::unwrap_used -A clippy::needless_collect` to check that you're
using the standard code style
- `cargo test --workspace --features=extra` to check that all tests pass
# After Submitting
If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
Following up on #7180 with some feature cleanup:
- Move the `database` feature from `plugin` to `default`
- Rename the `database` feature to `sqlite`
- Remove `--features=extra` from a lot of scripts etc.
  - No need to specify this, the `extra` feature is now the same as the
    default feature set
- Remove the now-redundant 2nd Ubuntu test run
* remove export_env command
* remove several export env usage in test code
* adjust hiding relative test case
* fix clippy
* adjust tests
* update tests
* unignore these tests to expose unit test failures
* using `use` instead of `overlay use` in some tests
* Revert "using `use` instead of `overlay use` in some tests"
This reverts commit 2ae24b24c3.
* Revert "adjust hiding relative test case"
This reverts commit 4369af6d05.
* Bring back module example
* Revert "update tests"
This reverts commit 6ae94ef513.
* Fix tests
* "Fix" a test
* Remove remaining deprecated env functionality
* Re-enable environment hiding for `hide`
To not break virtualenv since the overlay update is not merged yet
* Fix hiding env in `hide` and ignore some tests
Co-authored-by: kubouch <kubouch@gmail.com>
* Copy lev_distance.rs from the rust compiler
* Minor changes to code from rust compiler
* "Did you mean" suggestions: test instrumented to generate markdown report
* Did you mean suggestions: delete test instrumentation
* Fix tests
* Fix test
`foo` has a genuine match: `for`
* Improve tests
* trim overlay name
* format
* Update tests/overlays/mod.rs
Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
* cleanup
* new tests
Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com>
* Add support for Arrow IPC file format
Add support for Arrow IPC file format to dataframes commands. Support
opening of Arrow IPC-format files with extension '.arrow' or '.ipc' in
the open-df command. Add a 'to arrow' command to write a dataframe to
Arrow IPC format.
* Add unit test for open-df on Arrow
* Add -t flag to open-df command
Add a `--type`/`-t` flag to the `open-df` command, to explicitly specify
the type of file being used. Allowed values are the same as the set of
allowed file extensions.
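A usage sketch; the `--type` value and the `to arrow` argument shape shown here are assumptions based on the description above:
```
open-df data.arrow                       # format inferred from the .arrow / .ipc extension
open-df --type arrow exported.dat        # force the reader via the new -t/--type flag
open-df data.csv | to arrow out.arrow    # write a dataframe back out as Arrow IPC
```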
* Initialize join.rs as a copy of collect.rs
* Evolve StrCollect into StrJoin
* Replace 'str collect' with 'str join' everywhere
git ls-files | lines | par-each { |it| sed -i 's,str collect,str join,g' $it }
* Deprecate 'str collect'
* Revert "Deprecate 'str collect'"
This reverts commit 959d14203e.
* Change `str collect` help message to say that it is deprecated
We cannot remove `str collect` currently (i.e., via
`nu_protocol::ShellError::DeprecatedCommand`) since a prominent project
uses the API:
b85542c31c/src/virtualenv/activation/nushell/activate.nu (L43)
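For reference, a small sketch of the renamed command:
```
[Hello world] | str join " "    # => "Hello world"
["a" "b" "c"] | str join        # => "abc"
```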
Rename `all?`, `any?` and `empty?` to `all`, `any` and `is-empty` for the sake of simplicity and consistency.
- More understandable for newcomers: these commands are not special compared to the others.
- The `?` suffix did not really improve readability; for me it made it worse.
- We can reserve the `?` syntax for other nushell features.
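A quick sketch of the renamed commands (the closure parameter style may differ slightly depending on the version):
```
[1 2 3] | all { |x| $x > 0 }    # true
[1 2 3] | any { |x| $x > 5 }    # false
[] | is-empty                   # true
```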
* Add source-env test for dynamic path
* Use correct module ID for env overlay imports
* Remove parser check from "overlay list"
It would cause unnecessary errors from some inner scope if some
overlay module was also defined in some inner scope.
* Restore Cargo.lock back
* Remove comments
* start working on source-env
* WIP
* Get most tests working, still one to go
* Fix file-relative paths; Report parser error
* Fix merge conflicts; Restore source as deprecated
* Tests: Use source-env; Remove redundant tests
* Fmt
* Respect hidden env vars
* Fix file-relative eval for source-env
* Add file-relative eval to "overlay use"
* Use FILE_PWD only in source-env and "overlay use"
* Ignore new tests for now
This will be another issue
* Throw an error if setting FILE_PWD manually
* Fix source-related test failures
* Fix nu-check to respect FILE_PWD
* Fix corrupted spans in source-env shell errors
* Fix up some references to old source
* Remove deprecation message
* Re-introduce deleted tests
Co-authored-by: kubouch <kubouch@gmail.com>
* Add hide-env to hide env vars; Cleanup tests
Also, there were some old unalias tests that I converted to hide.
* Add missing file
* Re-enable hide for env vars
* Fix test
* Rename did you mean error back
It was causing random tests to break
* Add decimals to int when using `into string --decimals` (see the sketch after this list)
* Add tests for `into string` when converting int with `--decimals`
* Apply formatting
* Merge `into_str` test files
* Comment out unused code and add TODOs
* Use decimal separator depending on system locale
* Add test helper to run closure in different locale
* Add tests for int-to-string conversion using different locales
* Add utils function to get system locale
* Add panic message when locking mutex fails
* Catch and resume panic later to prevent Mutex poisoning when test fails
* Move test to `nu-test-support` to keep `nu-utils` free of `nu-*` dependencies
See https://github.com/nushell/nushell/pull/6085#issuecomment-1193131694
* Rename test support fn `with_fake_locale` to `with_locale_override`
* Move `get_system_locale()` to `locale` module
* Allow overriding locale with special env variable (when not in release)
* Use special env var to override locale during testing
* Allow callback to return a value in `with_locale_override()`
* Allow multiple options in `nu!` macro
* Allow to set locale as `nu!` macro option
* Use new `locale` option of `nu!` macro instead of `with_locale_override`
Using the `locale` option does not lock the `LOCALE_OVERRIDE_MUTEX`
mutex in `nu-test-support::locale_override` but instead calls the `nu`
command directly with the `NU_LOCALE_OVERRIDE` environment variable.
This allows for parallel test execution.
* Fix: Add option identifier for `cwd` in usage of `nu!` macro
* Rely on `Display` trait for formatting `nu!` macro command
- Removed the `DisplayPath` trait
- Implement `Display` for `AbsolutePath`, `RelativePath` and
`AbsoluteFile`
* Default to locale `en_US.UTF-8` for tests when using `nu!` macro
* Add doc comment to `nu!` macro
* Format code using `cargo fmt --all`
* Pass function directly instead of wrapping the call in a closure
https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure
* Pass function to `or_else()` instead of calling it inside `or()`
https://rust-lang.github.io/rust-clippy/master/index.html#or_fun_call
* Fix: Add option identifier for `cwd` in usage of `nu!` macro
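As referenced above, a small sketch of the int-to-string conversion with decimals (output assumes the `en_US.UTF-8` locale used as the test default):
```
42 | into string --decimals 2    # => "42.00"
```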
* Allow private imports inside modules
Can call `use ...` inside modules now.
* Add more tests
* Add a leak test
* Refactor exportables; Prepare for 'export use'
* Fix description
* Implement 'export use' command
This allows re-exporting a module's commands and aliases from another
module (see the sketch after this list).
* Add more tests; Fix import pattern list strings
The import pattern strings didn't trim the surrounding quotes.
* Add ignored test
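As referenced above, a rough sketch of `export use`; the module and command names are made up:
```
module greetings {
    export def hello [] { "hello" }
}
module api {
    export use greetings hello   # re-export `hello` from the other module
}
use api
api hello                        # => "hello"
```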
* Skeleton implementation
Lots and lots of TODOs
* Bootstrap simple CustomValue plugin support test
* Create nu_plugin_custom_value
* Skeleton for nu_plugin_custom_values
* Return a custom value from plugin
* Encode CustomValues from plugin calls as PluginResponse::PluginData
* Add new PluginCall variant CollapseCustomValue
* Handle CollapseCustomValue plugin calls
* Add CallInput::Data variant to CallInfo inputs
* Handle CallInfo with CallInput::Data plugin calls
* Send CallInput::Data if Value is PluginCustomValue from plugin calls
* Remove unnecessary boxing of plugins CallInfo
* Add fields needed to collapse PluginCustomValue to it
* Document PluginCustomValue and its purpose
* Impl collapsing using plugin calls in PluginCustomValue::to_base_value
* Implement proper typetag based deserialization for CoolCustomValue
* Test demonstrating that passing back a custom value to plugin works
* Added a failing test for describing plugin CustomValues
* Support describe for PluginCustomValues
- Add name to PluginResponse::PluginData
- Also turn it into a struct for clarity
- Add name to PluginCustomValue
- Return name field from PluginCustomValue
* Demonstrate that plugins can create and handle multiple CustomValues
* Add bincode to nu-plugin dependencies
This is for demonstration purposes, any schemaless binary serialization
format will work. I picked bincode since it's the most popular for Rust,
but there are definitely better options out there for this use case
* serde_json::Value -> Vec<u8>
* Update capnp schema for new CallInfo.input field
* Move call_input capnp serialization and deserialization into new file
* Deserialize Value's span from Value itself instead of passing call.head
I am not sure if this was correct and I am breaking it, or if it was a
bug; I don't fully understand how nu creates and uses Spans. What should
reuse spans and what should recreate new ones?
But it felt weird that the Value's Span was being ignored, since the
JSON serializer just uses the Value's Span
* Add call_info value round trip test
* Add capnp CallInput::Data serialization and deserialization support
* Add CallInfo::CollapseCustomValue to capnp schema
* Add capnp PluginCall::CollapseCustomValue serialization and deserialization support
* Add PluginResponse::PluginData to capnp schema
* Add capnp PluginResponse::PluginData serialization and deserialization support
* Switch plugins::custom_values tests to capnp
Both json and capnp would work now! Sadly I can't choose both at the
same time :(
* Add missing JsonSerializer round trip tests
* Handle plugin returning PluginData as a response to CollapseCustomValue
* Refactor plugin calling into a reusable function
Many less levels of indentation now!
* Export PluginData from nu_plugin
So plugins can create their very own serve_plugin with whatever
CustomValue behavior they may desire
* Error if CustomValue cannot be handled by Plugin
* Updated nu_with_plugins to handle new nushell
- Now it requires the plugin format and name to be passed in, because
we can't really guess the format
- It calls `register` with format and plugin path
- It creates a temporary folder and in it an empty temporary plugin.nu
so that the tests don't conflict with each other or with local copy of
plugin.nu
- Instead of passing the commands via stdin it passes them via the new
--commands command line argument
* Rename path to command for clarity
* Enable core_inc tests
Remove deprecated inc feature and replace with new plugin feature
* Update core_inc tests for new nu_with_plugins syntax
* Rework core_inc::can_only_apply_one
The new inc plugin doesn't error if passed more than one but instead
chooses the highest increment
* Gate all plugin tests behind feature = "plugin" instead of one by one
* Remove format!-like behavior from nu_with_plugins
nu_with_plugins had format!-like behavior where it would allow calls
such as this:
```rs
nu_with_plugins!(
cwd: "dir/",
"open {} | get {}",
"Cargo.toml",
"package.version"
)
```
And although nifty it seems to have never been used before and the same
can be achieved with a format! like so:
```rs
nu_with_plugins!(
cwd: "dir/",
format!("open {} | get {}", "Cargo.toml", "package.version")
)
```
So I am removing it to keep the complexity of the macro in check
* Add multi-plugin support to nu_with_plugins
Useful for testing interactions between plugins
* Alternative 1: run `cargo build` inside of tests
* Handle Windows by canonicalizing paths and add .exe
One VM install later and lots of learning about how command line
arguments work and here we are
* introduce an error when an external command run fails, and implement semicolon-related logic
* ignore test due to how semicolons work
* not raise ShellError for external commands
* update comment
* add related test for windows
* fix typo
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Remove comment
* Split delta and environment merging
* Move table mode to a more logical place
* Cleanup
* Merge environment after reading default_env.nu
* Fmt
* Allow keeping selected env from removed overlay
* Remove some duplicate code
* Change --keep-all back to --keep-custom
Because, apparently, you cannot have a named flag called --keep-all,
otherwise tests fail?
* Fix missing line and wrong test value
* (WIP) Initial messy support for hooks as strings
* Cleanup after running condition & hook code
Also, remove prints
* Move env hooks eval into its own function
* Add env change hooks to simulator
* Fix hooks simulator not running env hooks properly
* Add missing hooks test file
* Expand hooks tests
* Add blocks as env hooks; Preserve hook environment
* Add full eval to pre prompt/exec hooks; Fix panic
* Rename env change hook back to orig. name
* Print err on test failure; Add list of hooks test
* Consolidate condition block; Fix panic; Misc
* Change test to use real file
* Remove unused stuff
* Fix potential panics; Clean up errors
* Remove commented unused code
* Clippy: Fix extra references
* Add back support for old-style hooks
* Reorder functions; Fmt
* Fix test on Windows
* Add more test cases; Simplify some error reporting
* Add more tests for setting correct before/after
* Move pre_prompt hook to the beginning
Since we don't have a prompt or blocking on user input, all hooks just
follow after each other.
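A loose sketch of what a hook configuration along these lines can look like; the exact config shape and the block signature are assumptions, not the PR's verbatim API:
```
let-env config = {
    hooks: {
        pre_prompt: ["date now | ignore"]             # hooks can be given as code strings...
        env_change: {
            PWD: [{ echo $"now in ($env.PWD)" }]      # ...or as blocks
        }
    }
}
```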
* fix argument type
* when running externals, convert list arguments to str
* fix argument converting logic
* using parse_list_expression instead of parse_full_cell_path
* make parsing logic more explicit
* revert changes
* add tests
* Allow env vars to be kept from removed overlay
* Rename --keep to --keep-custom; Add new test
* Rename some symbols
* (WIP) Start working on --keep for defs and aliases
* Fix decls/aliases not melting properly
* Use id instead of the whole cloned overlay
* Rewrite overlay remove for no reason
Doesn't fix the bug but at least looks better.
* Rename variable
* Fix adding overlay env vars
* Add more tests; Fmt + Clippy
* Add Nushell REPL simulator; Fix bug in overlay add
The `nu_repl` function takes an array of strings and processes them as
if they were REPL lines entered one by one. This helps to discover bugs
due to the state changes between the parse and eval stages.
* Fix REPL tests on Windows
* WIP: Start laying overlays
* Rename Overlay->Module; Start adding overlay
* Revamp adding overlay
* Add overlay add tests; Disable debug print
* Fix overlay add; Add overlay remove
* Add overlay remove tests
* Add missing overlay remove file
* Add overlay list command
* (WIP?) Enable overlays for env vars
* Move OverlayFrames to ScopeFrames
* (WIP) Move everything to overlays only
ScopeFrame contains nothing but overlays now
* Fix predecls
* Fix wrong overlay id translation and aliases
* Fix broken env lookup logic
* Remove TODOs
* Add overlay add + remove for environment
* Add a few overlay tests; Fix overlay add name
* Some cleanup; Fix overlay add/remove names
* Clippy
* Fmt
* Remove walls of comments
* List overlays from stack; Add debugging flag
Currently, the engine state ordering is somehow broken.
* Fix (?) overlay list test
* Fix tests on Windows
* Fix activated overlay ordering
* Check for active overlays equality in overlay list
This removes the -p flag: Either both parser and engine will have the
same overlays, or the command will fail.
* Add merging on overlay remove
* Change help message and comment
* Add some remove-merge/discard tests
* (WIP) Track removed overlays properly
* Clippy; Fmt
* Fix getting last overlay; Fix predecls in overlays
* Remove merging; Fix re-add overwriting stuff
Also some error message tweaks.
* Fix overlay error in the engine
* Update variable_completions.rs
* Adds flags and optional arguments to view-source (#5446)
* added flags and optional arguments to view-source
* removed redundant code
* removed redundant code
* fmt
* fix bug in shell_integration (#5450)
* fix bug in shell_integration
* add some comments
* enable cd to work with directory abbreviations (#5452)
* enable cd to work with abbreviations
* add abbreviation example
* fix tests
* make it configurable
* make cd recognize symbolic links (#5454)
* implement seq char command to generate single character sequence (#5453)
* add tmp code
* add seq char command
* Add split number flag in `split row` (#5434)
Signed-off-by: Yuheng Su <gipsyh.icu@gmail.com>
* Add two more overlay tests
* Add ModuleId to OverlayFrame
* Fix env conversion accidentally activating overlay
It activated overlay from permanent state prematurely which would
cause `overlay add` to misbehave.
* Remove unused parameter; Add overlay list test
* Remove added traces
* Add overlay commands examples
* Modify TODO
* Fix $nu.scope iteration
* Disallow removing default overlay
* Refactor some parser errors
* Remove last overlay if no argument
* Diversify overlay examples
* Make it possible to update overlay's module
If the origin module is updated, `overlay add` loads the new module,
makes it the overlay's origin, and applies the changes. Before, it was
impossible to update the overlay if the module changed.
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
Co-authored-by: pwygab <88221256+merelymyself@users.noreply.github.com>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
Co-authored-by: WindSoilder <WindSoilder@outlook.com>
Co-authored-by: Yuheng Su <gipsyh.icu@gmail.com>
* nu-cli/completions: fix paths with special chars
* add backticks
* fix replace
* added single quotes to check list
* check escape using fold
* fix clippy errors
* fix comment line
* fix conflicts
* change to vec
* skip sort checking
* removed invalid windows path
* remove comment
* added tests for escape function
* fix fn import
* fix fn import error
* test windows issue fix
* fix windows backslash path in the tests
* show expected path on error
* skip test for windows
* nu-cli: added tests for file completions
* test adding extra sort
* Feature/refactor completion options (#5228)
* Copy completion filter to custom completions
* Remove filter function from completer
This function was a no-op for FileCompletion and CommandCompletion.
Flag- and VariableCompletion just filters with `starts_with` which
happens in both completers anyway and should therefore also be a no-op.
The remaining use case in CustomCompletion was moved into the
CustomCompletion source file.
Filtering should probably happen immediately while fetching completions
to avoid unnecessary memory allocations.
* Add get_sort_by() to Completer trait
* Remove CompletionOptions from Completer::fetch()
* Fix clippy lints
* Apply Completer changes to DotNuCompletion
* add os to $nu based on rust's understanding (#5243)
* add os to $nu based on rust's understanding
* add a few more constants
Co-authored-by: Richard <Tropid@users.noreply.github.com>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Fix failing unit tests on Windows (#5142)
Fix let_env_expressions failing on Windows:
The env expression uses PATH, but on Windows Path is used.
Fix correctly_escape_external_arguments, execute_binary_in_string
failing on Windows:
Using cococo now to make sure test results are platform independent
* Update macros.rs
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
* Remove panic from BlockCommands run function
Instead of panicking, the run method now returns an error to prevent
nushell from terminating unexpectedly.
* Add ability to open command to run with blocks
The open command tries to parse the content of the file
if there is a command called 'from (file ending)'. This works
fine if the command was 'built in' because the run method doesn't
fail in this case. It did fail on a BlockCommand, though.
This change will first probe if the command contains a block and
evaluate it, if this is the case. If there is no block, it will run
the command the same way as before.
* Add test open files with BlockCommands
* Update open.rs
* Adjust file type on open with BlockCommand parser
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
* Add test for passing binary data through externals
This change adds an ignored test to confirm that binary data is passed
correctly between externals to be enabled in a later commit along with
the fix.
To assist in platform agnostic testing of binary data a couple of
additional testbins were added to allow testing on `Value::Binary` inside
`ExternalStream`.
* Support binary data to stdin of run-external
Prior to this change, any pipeline producing binary data (not detected
as string) then feed into an external would be ignored due to
run-external only supporting `Value::String` on stdin.
This change adds binary stdin support for externals allowing something
like this for example:
〉^cat /dev/urandom | ^head -c 1MiB | ^pv -b | ignore
1.00MiB
This would previously output `0.00 B [0.00 B/s]` due to the data not
being pushed to stdin at each stage.
* Refactor & fix which
Instead of fetching all definitions / aliases, only show the one that is
visible.
* Fix $nu.scope to show only visible definitions
* Add missing tests file; Rename one which test
* fix #4161
println! and friends will panic on BrokenPipe. The solution is to use
writeln! instead, and ignore the error (or do we want to do something else?)
* test that nu doesn't panic in case of BrokenPipe error
* fixup! test that nu doesn't panic in case of BrokenPipe error
* make do_not_panic_if_broken_pipe only run on UNIX systems
* fix #4140
We are passing commands into a shell underneath but we were not
escaping arguments correctly. This new version of the code also takes
into consideration the ";" and "&" characters, which have special
meaning in shells.
We would probably benefit from a more robust way to join arguments to
shell programs. Python's stdlib has shlex.join, and perhaps we can
take that implementation as a reference.
* clean up escaping of posix shell args
I believe the right place to do escaping of arguments was in the
spawn_sh_command function. Note that this change prevents things like:
^echo "$(ls)"
from executing the ls command. Instead, this will just print
$(ls)
The regex has been taken from the python stdlib implementation of shlex.quote
* fix non-literal parameters and single quotes
* address clippy's comments
* fixup! address clippy's comments
* test that subshell commands are sanitized properly
```
> [
[ msg, labels, span];
["The message", "Helpful message here", ([[start, end]; [0, 141]])]
] | error make
error: The message
┌─ shell:1:1
│
1 │ ╭ [
2 │ │ [ msg, labels, span];
3 │ │ ["The message", "Helpful message here", ([[start, end]; [0, 141]])]
│ ╰─────────────────────────────────────────────────────────────────────^ Helpful message here
```
Adding a more flexible approach for creating error values. One use case, for instance, is the
idea of a test framework: instead of printing to the screen, a failed assertion could create
tables with more details of the failed assertion and pass them to this command to make a
full-fledged error that Nu can show. This can (and should) be extended for capturing error values
as well in the pipeline. One could also use it for inspection.
For example: `.... | error inspect { # inspection here }`
or "error handling" as well, like so: `.... | error capture { fix here }`
However, we start here only with `error make` that creates an error value for you with limited support for the time being.
* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize to two versions; Add TODOs
One that takes `relative_to` and one that doesn't.
More TODO notes.
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
Trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths but for now
it at least covers basic cases.
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
* Allow different names for ...rest
* Resolves #3945
* This change requires an explicit name for the rest argument in `WholeStreamCommand`,
which is why there are so many changed files.
* Remove redundant clone
* Add tests
* Allow environment variables to be hidden
This change allows environment variables in Nushell to have a value of
`Nothing`, which can be set by the user by passing `$nothing` to
`let-env` and friends.
Environment variables with a value of Nothing behave as if they are not
set at all. This allows a user to shadow the value of an environment
variable in a parent scope, effectively removing it from their current
scope. This was not possible before, because a scope can not affect its
parent scopes.
This is a workaround for issues like #3920.
Additionally, this allows a user to simultaneously set, change and
remove multiple environment variables via `load-env`. Any environment
variables set to $nothing will be hidden and thus act as if they are
removed. This simplifies working with virtual environments, which rely
on setting multiple environment variables, including PATH, to specific
values, and remove/change them on deactivation.
One surprising behavior is that an environment variable set to $nothing
will act as if it is not set when querying it (via $nu.env.X), but it is
still possible to remove it entirely via `unlet-env`. If the same
environment variable is present in the parent scope, the value in the
parent scope will be visible to the user. This might be surprising
behavior to users who are not familiar with the implementation details.
An additional corner case is that the shorthand form of `with-env` does
not work with this feature. Using `X=$nothing` will set $nu.env.X to the
string "$nothing". The long form works as expected: `with-env [X
$nothing] {...}`. (A short sketch follows after this list.)
* Remove unused import
* Allow all primitives to be converted to strings
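As mentioned above, a minimal sketch of hiding an inherited variable, using the `let-env` / `$nu.env` syntax of that era:
```
# shadow an inherited variable so it appears unset in this scope
let-env MY_SECRET = $nothing
# the long form of with-env works as well (the shorthand does not, per the note above)
with-env [MY_SECRET $nothing] { echo $nu.env }
```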
Some environment variables, such as `RUST_LOG` include equals signs. Nushell
should support this in the shorthand environment variable syntax so that
developers using these variables can control them easily. We accomplish this by
swapping `std::str::split` for `std::str::splitn`, which ensures that we only
consider the first equals sign in the string instead of all of them, which we
did previously.
Closes #3867
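For example, with the `splitn`-based parsing only the first `=` splits the variable name from its value (sketch):
```
RUST_LOG=nu=trace cargo run    # RUST_LOG is set to "nu=trace" for this one command
```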
Added test cases that ensure that special characters in path names are passed
to external commands correctly. These cases have been implemented with rstest
to reuse existing test code.
* Add the load-env command
load-env can be used to add environment variables dynamically via an
InputStream. This allows developers to create tools that output environment
variables as key-value pairs, then have the user load those variables in using
load-env. This supplants most of the need for an `eval` command, which is
mostly used in POSIX envs for setting env vars. (A short sketch follows
after this list.)
Fixes #3481
* fixup! Add the load-env command
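A rough sketch of the idea; the exact input shape that `load-env` expects (column names etc.) is assumed here:
```
# generate env vars as key-value rows and load them into the session
echo [[name, value]; [MY_TOOL_HOME, "/opt/my-tool"]] | load-env
echo $nu.env.MY_TOOL_HOME
```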
* Revert "History, more test coverage improvements, and refactorings. (#3217)"
This reverts commit 8fc8fc89aa.
* Add tests
* Refactor .nu-env
* Change logic of Config write to logic of read()
* Fix reload always appends to old vars
* Fix reload always takes last_modified of global config
* Add reload_config in evaluation context
* Reload config after writing to it in cfg set / cfg set_into
* Add --no-history to cli options
* Use --no-history in tests
* Add comment about maybe_print_errors
* Get ctrl_exit var from context.global_config
* Use context.global_config in command "config"
* Add Readme in engine how env vars are now handled
* Update docs from autoenv command
* Move history_path from engine to nu_data
* Move load history out of if
* No let before return
* Add import for indexmap
Improvements overall to Nu. Among the changes here, we can also be more confident about incorporating `3041`. End-to-end tests for checking that envs are properly exported to externals are not added here (since that's in the other PR)
A few things added in this PR (probably forgetting some too)
* no writes happen to history during test runs.
* environment syncing end to end coverage added.
* clean up / refactorings few areas.
* testing API for finer control (can write tests passing more than one pipeline)
* can pass environment variables in tests that nu will inherit when running.
* No longer needed.
* no longer under a module. No need to use super.
* Playground infrastructure (tests, etc.) additions.
A few things to note:
* Nu can be started with a custom configuration file (`nu --config-file /path/to/sample_config.toml`). Useful for mocking the configuration on test runs.
* When given a custom configuration file Nu will save any changes to the file supplied appropriately.
* The `$nu.config-path` variable either shows the default configuration file (or the custom one, if given)
* We can now run end-to-end tests with finer-grained control (currently, since this is baseline work, standard out). This will allow checking things like exit status, asserting the contents with a format, etc.
* Remove (for another PR)
* update docs to refer to length instead of count
* rename count to length
* change all occurrences of 'count' to 'length' in tests
* format length command
The autoenv logic mutates environment variables in the running session as
it operates and decides what to do for trusted directories containing `.nu-env`
files. A few of the ways to interact with it were all in a single test function.
We separate out all the ways that were done in that single test function to document
it better. This will greatly help once we start refactoring our way out from setting
environment variables this way to just setting them to `Scope`.
This is part of an ongoing effort to keep variables (`PATH` and `ENV`)
in our `Scope` and rely on it for everything related to variables.
We expect to move away from setting (`std::*`) environment variables in the current
running process. This is non-trivial since we need to handle cases from vars
coming in from the outside world, prioritize, and also compare to the ones
we have both stored in memory and in configuration files.
Also to send out our in-memory (in `Scope`) variables properly to external
programs once we no longer rely on `std::env` vars from the running process.
* Begin allowing comments and multiline scripts.
* clippy
* Finish moving to groups. Test pass
* Keep going
* WIP
* WIP
* BROKEN WIP
* WIP
* WIP
* Fix more tests
* WIP: alias starts working
* Broken WIP
* Broken WIP
* Variables begin to work
* captures start working
* A little better but needs fixed scope
* Shorthand env setting
* Update main merge
* Broken WIP
* WIP
* custom command parsing
* Custom commands start working
* Fix coloring and parsing of block
* Almost there
* Add some tests
* Add more param types
* Bump version
* Fix benchmark
* Fix stuff
We introduce the `plugin` nu sub command (`nu plugin`) with basic plugin
loading support. We can choose to load plugins from a directory. Originally
introduced to make integration tests faster (by not loading any plugins on startup at all)
but `nu plugin --load some_path ; test_pipeline_that_uses_plugins_just_loaded` does not see it.
Therefore, a `nu_with_plugins!` macro for tests was introduced on top of nu's `--skip-plugins`
executable switch, which is set to true when running the integration tests that use the `nu!` macro now.
* Implement exclusive and inclusive ranges with .. and ..=
This commit adds right-exclusive ranges.
The original a..b inclusive syntax was changed to reflect the Rust notation.
New a..=b syntax was introduced to have the old behavior.
Currently, both `a..` and `b..=` are valid, and it is unclear whether it's valid
to impose restrictions.
The original issue suggests `..` for inclusive and `..<` for exclusive ranges;
this can be implemented by making simple changes to this commit. (A quick
sketch follows after these bullets.)
* Fix collect tests by changing ranges to ..=
* Fix clippy lints in exclusive range matching
* Implement exclusive ranges using `..<`
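Assuming the end state suggested by the last bullet (`..` inclusive, `..<` right-exclusive):
```
echo 1..3     # inclusive range: 1 2 3
echo 1..<3    # right-exclusive range: 1 2
```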
* Modify testcase
* Run exitscript in the folder it was specified
* Update documentation
* Add comment
* Borrow instead of clone
* Does this just... work on windows?
* fmt
* as_str
* Collapse if by order of clippy
* Support windows
* fmt
* refactor tests
* fmt
* This time it will work on windows FOR SURE
* Remove debug prints
* Comment
* Refactor tests
* fmt
* fix spelling
* update comment
* Working towards a PoC for wasm
* Move bson and sqlite to plugins
* proof of concept now working
* tests are green
* Add CI test for --no-default-features
* Fix some tests
* Fix clippy and windows build
* More fixes
* Fix the windows build
* Fix the windows test
* Fix autoenv executing scripts multiple times
Previously, if the user had only specified entry or exitscripts the scripts
would execute many times. This should be fixed now
* Add tests
* Run exitscripts
* More tests and fixes to existing tests
* Test solution with visited dirs
* Track visited directories
* Comments and fmt
* add test basic_autoenv_vars_are_added
* Tests
* Entry and exit scripts
* Recursive set and overwrite
* Make sure that overwritten vals are restored
* Move tests to autoenv
* Move tests out of cli crate
* Tests help, apparently. Windows has issues
On windows, .nu-env is not applied immediately after running autoenv trust.
You have to cd out of the directory for it to work.
* Sort paths non-lexicographically
* Sibling dir test
* Revert "Sort paths non-lexicographically"
This reverts commit 72e4b856af.
* Rename test
* Change conditions
* Revert "Revert "Sort paths non-lexicographically""
This reverts commit 71606bc62f.
* Set vars as they are discovered
This means that if a parent directory is untrusted,
the variables in its child directories are still set properly.
* format
* Fix cleanup issues too
* Run commands in their separate functions
* Make everything into one large function like all the cool kids
* Refactoring
* fmt
* Debugging windows path issue
* Canonicalize
* Trim whitespace
* On windows, use echo nul instead of touch to create file in test
* Avoid cloning by using drain()
Our own custom escaping unfortunately is far too simple to cover all cases.
Instead, the parser will now do no transforms on the args passed to an external
command, letting the process spawning library deal with doing the appropriate
escaping.
For example, when running the following:
crates/nu-cli/src
nushell currently parses this as an external command. Before running the command, we check to see if
it's a directory. If it is, we "auto cd" into that directory, otherwise we go through normal
external processing.
If we put a trailing slash on it though, shells typically interpret that as "user is explicitly
referencing directory". So
crates/nu-cli/src/
should not be interpreted as "run an external command". We intercept a trailing slash in the head
position of a command in a pipeline as such, and inject a `cd` internal command.
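In other words (sketch):
```
crates/nu-cli/src/    # trailing slash: explicitly a directory, so a `cd` is injected
crates/nu-cli/src     # no slash: auto-cd only if it is a directory, otherwise run as an external
```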
* from-eml initial ver
* Adding tests for `from-eml`
* Add eml to prepares_and_decorates_filesystem_source_files
* Sort the file order
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
* headers plugin
* Remove plugin
* Add non-functioning headers command
* Add ability to extract headers from first row
* Refactor header extraction
* Rebuild indexmap with proper headers
* Rebuild result properly
* Compiling, probably wrapped too much?
* Refactoring
* Deal with case of empty header cell
* Deal with case of empty header cell
* Fix formatting
* Fix linting, attempt 2.
* Move whole_stream_command(Headers) to more appropriate section
* ... more linting
* Return Err(ShellError...) instead of panic, yield each row instead of entire table
* Insert Column[index] if no header info is found.
* Update error description
* Add initial test
* Add tests for headers command
* Lint test cases in headers
* Change ShellError for headers, Add sample_headers file to utils.rs
* Add empty sheet to test file
* Revert "Add empty sheet to test file"
This reverts commit a4bf38a31d.
* Show error message when given empty table
* WIP: move to bytes codec
* Progress on adding collect helpers
* Progress on adding collect helpers
* Add in line splitting back to lines
* Lines outputting line primitives
* Close to ready?
* Finish fixing lines
* clippy fixes
* fmt fixes
* removed unused code
* Cleanup a few bits
* Cleanup a few bits
* Cleanup a few more bits
* Fix failing test with corrected test case
This improves incremental build time when working on what was previously
the root package. For example, previously all plugins would be rebuilt
with a change to `src/commands/classified/external.rs`, but now only
`nu-cli` will have to be rebuilt (and anything that depends on it).
In particular, one thing that we can't (properly) do before this commit
is consuming an infinite input stream. For example:
```
yes | grep y | head -n10
```
will give 10 "y"s in most shells, but blocks indefinitely in nu. This PR
resolves that by doing blocking I/O in threads, and reducing the `await`
calls we currently have in our pipeline code.
* typo fixes
* Change signature to take in short-hand flags
* update help information
* Parse short-hand flags as their long counterparts
* lints
* Modified a couple tests to use shorthand flags
* Fixed mv not throwing error when the source path was invalid
* Fixed failing test
* Fixed another lint error
* Fix $PATH conflicts in .gitpod.Dockerfile (#1349)
- Use the correct user for gitpod Dockerfile.
- Remove unneeded packages (curl, rustc) from gitpod Dockerfile.
* Added test to check for the error
* Fixed linting error
* Fixed mv not moving files on Windows. (#1342)
Move files correctly in windows.
* Fixed mv not throwing error when the source path was invalid
* Fixed failing test
* Fixed another lint error
* Added test to check for the error
* Fixed linting error
* Changed error message
* Typo and fixed test
Co-authored-by: Sean Hellum <seanhellum45@gmail.com>
* Added attributes to from-xml command
* Added attributes as their own rows
* Removed unnecessary lifetime declarations
* from-xml now has children and attributes side by side
* Fixed tests and linting
* Fixed lint-problem
* Switch to using `shell`
Switch to using the shell for subprocess to enable more natural shelling out.
* Update external.rs
* This is a test with .shell() for external
* El pollo loco's PR
* co co co
* Attempt to fix windows
* Fmt
* Less is more?
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Restructure and streamline token expansion
The purpose of this commit is to streamline the token expansion code, by
removing aspects of the code that are no longer relevant, removing
pointless duplication, and eliminating the need to pass the same
arguments to `expand_syntax`.
The first big-picture change in this commit is that instead of a handful
of `expand_` functions, which take a TokensIterator and ExpandContext, a
smaller number of methods on the `TokensIterator` do the same job.
The second big-picture change in this commit is fully eliminating the
coloring traits, making coloring a responsibility of the base expansion
implementations. This also means that the coloring tracer is merged into
the expansion tracer, so you can follow a single expansion and see how
the expansion process produced colored tokens.
One side effect of this change is that the expander itself is marginally
more error-correcting. The error correction works by switching from
structured expansion to `BackoffColoringMode` when an unexpected token
is found, which guarantees that all spans of the source are colored, but
may not be the most optimal error recovery strategy.
That said, because `BackoffColoringMode` only extends as far as a
closing delimiter (`)`, `]`, `}`) or pipe (`|`), it does result in
fairly granular correction strategy.
The current code still produces an `Err` (plus a complete list of
colored shapes) from the parsing process if any errors are encountered,
but this could easily be addressed now that the underlying expansion is
error-correcting.
This commit also colors any spans that are syntax errors in red, and
causes the parser to include some additional information about what
tokens were expected at any given point where an error was encountered,
so that completions and hinting could be more robust in the future.
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Also, this commit makes `ls` a per-item command.
A command that processes things item by item may still take some time to stream
out the results from a single item. For example, `ls` on a directory with a lot
of files could be interrupted in the middle of showing all of these files.
This commit changes the way we shell out externals when using the `"$it"` argument. Also pipes per row to an external's stdin if no `"$it"` argument is present for external commands.
Further separation of logic (preparing the external's command arguments, getting the data for piping, emitting values, spawning processes) will give us a better idea of lower-level details regarding external commands until we can find the right abstractions for making them more generic and unifying them within the pipeline calling logic of Nu internals and externals.
* Put a sample_data.ods file for testing
This is a copy of the sample_data.xlsx file but in ods format
* Add the from-ods command
Most of the work was doing `rg xlsx` and then copy/paste with light editing
* Add tests for the from-ods command
* Fix failing test
The problem was improper filename sorting in the test `prepares_and_decorates_filesystem_source_files`
* Clippy fixes
* Finish converting to use clippy
* fix warnings in new master
* fix windows
* fix windows
Co-authored-by: Artem Vorotnikov <artem@vorotnikov.me>
* start playing with ways to use the uniq command
* WIP
* Got uniq working, but still need to figure out args issue and add tests
* Add some tests for uniq
* fmt
* remove commented out code
* Add documentation and some additional tests showing uniq values and rows. Also removed args TODO
* add changes that didn't get committed
* whoops, I didn't save the docs correctly...
* fmt
* Add a test for uniq with nested json
* Add another test
* Fix unique-ness when json keys are out of order and make the test json more complicated
Add tests for ~tilde expansion:
- test that "~" is expanded (no more "~" in output)
- ensure that "1~1" is not expanded to "1/home/user1" as it was
before
Fixes #972
Note: the first test does not check the literal expansion because
the path on Windows is expanded as a Linux path, but the correct
expansion may come for free once `shellexpand` will use the `dirs`
crate too (https://github.com/netvl/shellexpand/issues/3).
This commit contains two improvements:
- Support for a Range syntax (and a corresponding Range value)
- Work towards a signature syntax
Implementing the Range syntax resulted in cleaning up how operators in
the core syntax works. There are now two kinds of infix operators
- tight operators (`.` and `..`)
- loose operators
Tight operators may not be interspersed (`$it.left..$it.right` is a
syntax error). Loose operators require whitespace on both sides of the
operator, and can be arbitrarily interspersed. Precedence is left to
right in the core syntax.
Note that delimited syntax (like `( ... )` or `[ ... ]`) is a single
token node in the core syntax. A single token node can be parsed from
beginning to end in a context-free manner.
The rule for `.` is `<token node>.<member>`. The rule for `..` is
`<token node>..<token node>`.
Loose operators all have the same syntactic rule: `<token
node><space><loose op><space><token node>`.
The second aspect of this pull request is the beginning of support for a
signature syntax. Before implementing signatures, a necessary
prerequisite is for the core syntax to support multi-line programs.
That work establishes a few things:
- `;` and newlines are handled in the core grammar, and both count as
"separators"
- line comments begin with `#` and continue until the end of the line
In this commit, multi-token productions in the core grammar can use
separators interchangeably with spaces. However, I think we will
ultimately want a different rule preventing separators from occurring
before an infix operator, so that the end of a line is always
unambiguous. This would avoid gratuitous differences between modules and
repl usage.
We already effectively have this rule, because otherwise `x<newline> |
y` would be a single pipeline, but of course that wouldn't work.
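A small sketch of the distinction:
```
$it.size > 100        # `.` is tight (no surrounding whitespace); `>` is loose and needs spaces
1..10                 # `..` is tight as well: both sides are token nodes
$it.left..$it.right   # syntax error: tight operators may not be interspersed
```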
`left =~ right` returns true if left contains right, using Rust's
`String::contains`. `!~` is the negated version.
A new `apply_operator` function is added which decouples evaluation from
`Value::compare`. This returns a `Value` and opens the door to
implementing `+` for example, though it wouldn't be useful immediately.
The `operator!` macro had to be changed slightly as it would choke on
`~` in arguments.
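For example, in a `where` clause:
```
ls | where name =~ "Cargo"    # keep rows whose name contains "Cargo"
ls | where name !~ "lock"     # keep rows whose name does not contain "lock"
```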