* Add the load-env command
load-env can be used to add environment variables dynamically via an
InputStream. This allows developers to create tools that output environment
variables as key-value pairs, then have the user load those variables in using
load-env. This supplants most of the need for an `eval` command, which is
mostly used in POSIX envs for setting env vars.
Fixes #3481
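A minimal sketch of the intended usage, assuming key-value pairs are piped in one per line; the variable name, the exact pair format, and reading it back through `$nu.env` are all illustrative assumptions:
```
# a tool (here just echo) emits an environment variable as a key-value pair,
# and load-env adds it to the running session
echo "MY_TOOL_HOME=/opt/my-tool" | load-env
echo $nu.env.MY_TOOL_HOME
```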
* fixup! Add the load-env command
* Revert "History, more test coverage improvements, and refactorings. (#3217)"
This reverts commit 8fc8fc89aa.
* Add tests
* Refactor .nu-env
* Change logic of Config write to match the logic of read()
* Fix reload always appending to old vars
* Fix reload always taking last_modified of the global config
* Add reload_config in evaluation context
* Reload config after writing to it in cfg set / cfg set_into
* Add --no-history to cli options
* Use --no-history in tests
* Add comment about maybe_print_errors
* Get ctrl_exit var from context.global_config
* Use context.global_config in command "config"
* Add Readme in engine how env vars are now handled
* Update docs from autoenv command
* Move history_path from engine to nu_data
* Move load history out of if
* No let before return
* Add import for indexmap
Overall improvements to Nu. Among the changes here, we can also be more confident about incorporating `3041`. End to end tests checking that envs are properly exported to externals are not added here (since they're in the other PR).
A few things added in this PR (probably forgetting some too):
* no writes happen to history during test runs.
* environment syncing end to end coverage added.
* clean up / refactorings in a few areas.
* testing API for finer control (can write tests passing more than one pipeline)
* can pass environment variables in tests that nu will inherit when running.
* No longer needed.
* no longer under a module. No need to use super.
* Playground infrastructure (tests, etc) additions.
A few things to note:
* Nu can be started with a custom configuration file (`nu --config-file /path/to/sample_config.toml`). Useful for mocking the configuration on test runs (see the sketch after this list).
* When given a custom configuration file, Nu will save any changes to the supplied file appropriately.
* The `$nu.config-path` variable shows either the default configuration file or the custom one, if given.
* We can now run end to end tests with finer grained control (currently, since this is baseline work, standard out). This will allow checking things like exit status, asserting the contents with a format, etc.
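A quick sketch of the custom configuration flow from the list above; the file path is illustrative:
```
# start nu against a mocked configuration file
nu --config-file /path/to/sample_config.toml
# inside that session, the variable points at the file that was supplied
echo $nu.config-path
```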
* Remove (for another PR)
* update docs to refer to length instead of count
* rename count to length
* change all occurrences of 'count' to 'length' in tests
* format length command
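For illustration, the rename means pipelines that previously called `count` now call `length`; the `ls` input is just an example:
```
# before the rename this was: ls | count
ls | length
```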
The autoenv logic mutates environment variables in the running session as
it operates and decides what to do for trusted directories containing `.nu-env`
files. A few of the ways to interact with it were all packed into a single test function.
We separate out each of those ways into its own test to document the behavior better.
This will greatly help once we start refactoring our way out from setting
environment variables this way to just setting them in `Scope`.
This is part of an on-going effort to keep variables (`PATH` and `ENV`)
in our `Scope` and rely on it for everything related to variables.
We expect to move away from setting (`std::*`) environment variables in the current
running process. This is non-trivial since we need to handle cases from vars
coming in from the outside world, prioritize, and also compare to the ones
we have both stored in memory and in configuration files.
Also to send out our in-memory (in `Scope`) variables properly to external
programs once we no longer rely on `std::env` vars from the running process.
* Begin allowing comments and multiline scripts.
* clippy
* Finish moving to groups. Test pass
* Keep going
* WIP
* WIP
* BROKEN WIP
* WIP
* WIP
* Fix more tests
* WIP: alias starts working
* Broken WIP
* Broken WIP
* Variables begin to work
* captures start working
* A little better but needs fixed scope
* Shorthand env setting
* Update main merge
* Broken WIP
* WIP
* custom command parsing
* Custom commands start working
* Fix coloring and parsing of block
* Almost there
* Add some tests
* Add more param types
* Bump version
* Fix benchmark
* Fix stuff
We introduce the `plugin` nu sub command (`nu plugin`) with basic plugin
loading support. We can choose to load plugins from a directory. Originally
introduced to make integration tests faster (by not loading any plugins on startup at all)
but `nu plugin --load some_path ; test_pipeline_that_uses_plugins_just_loaded` does not see it.
Therefore, a `nu_with_plugins!` macro for tests was introduced on top of nu's `--skip-plugins`
executable switch, which is now set to true when running the integration tests that use the `nu!` macro.
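A sketch of the new sub command and the test-side switch described above; the plugin path is illustrative and the `-c` invocation is an assumption:
```
# load plugins from a directory in a standalone invocation
nu plugin --load /path/to/plugins
# integration tests that use the nu! macro now run with plugins skipped entirely
nu --skip-plugins -c "ls"
```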
* Implement exclusive and inclusive ranges with .. and ..=
This commit adds right-exclusive ranges.
The original a..b inclusive syntax was changed to reflect the Rust notation.
New a..=b syntax was introduced to have the old behavior.
Currently, both a.. and b..= are valid, and it is unclear whether it's worth
imposing restrictions.
The original issue suggests .. for inclusive and ..< for exclusive ranges,
this can be implemented by making simple changes to this commit.
* Fix collect tests by changing ranges to ..=
* Fix clippy lints in exclusive range matching
* Implement exclusive ranges using `..<`
* Modify testcase
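After the last change in this series, a rough sketch of the two range forms; whether `echo` renders a range exactly like this is an assumption:
```
# ".." is inclusive of both endpoints
echo 1..3     # 1 2 3
# "..<" is right-exclusive
echo 1..<3    # 1 2
```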
* Run exitscript in the folder it was specified
* Update documentation
* Add comment
* Borrow instead of clone
* Does this just... work on windows?
* fmt
* as_str
* Collapse if by order of clippy
* Support windows
* fmt
* refactor tests
* fmt
* This time it will work on windows FOR SURE
* Remove debug prints
* Comment
* Refactor tests
* fmt
* fix spelling
* update comment
* Working towards a PoC for wasm
* Move bson and sqlite to plugins
* proof of concept now working
* tests are green
* Add CI test for --no-default-features
* Fix some tests
* Fix clippy and windows build
* More fixes
* Fix the windows build
* Fix the windows test
* Fix autoenv executing scripts multiple times
Previously, if the user had only specified entryscripts or exitscripts, the scripts
would execute many times. This should be fixed now.
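A sketch of the workflow this fixes, assuming a directory that already contains a trusted `.nu-env` file with entryscripts and exitscripts; the path and the argument to `autoenv trust` are illustrative:
```
# mark the directory's .nu-env as trusted, then enter and leave it
autoenv trust ./my-project
cd ./my-project    # entryscripts now run exactly once on entry
cd ..              # exitscripts run when leaving the directory
```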
* Add tests
* Run exitscripts
* More tests and fixes to existing tests
* Test solution with visited dirs
* Track visited directories
* Comments and fmt
* add test basic_autoenv_vars_are_added
* Tests
* Entry and exit scripts
* Recursive set and overwrite
* Make sure that overwritten vals are restored
* Move tests to autoenv
* Move tests out of cli crate
* Tests help, apparently. Windows has issues
On windows, .nu-env is not applied immediately after running autoenv trust.
You have to cd out of the directory for it to work.
* Sort paths non-lexicographically
* Sibling dir test
* Revert "Sort paths non-lexicographically"
This reverts commit 72e4b856af.
* Rename test
* Change conditions
* Revert "Revert "Sort paths non-lexicographically""
This reverts commit 71606bc62f.
* Set vars as they are discovered
This means that if a parent directory is untrusted,
the variables in its child directories are still set properly.
* format
* Fix cleanup issues too
* Run commands in their separate functions
* Make everything into one large function like all the cool kids
* Refactoring
* fmt
* Debugging windows path issue
* Canonicalize
* Trim whitespace
* On windows, use echo nul instead of touch to create file in test
* Avoid cloning by using drain()
Our own custom escaping unfortunately is far too simple to cover all cases.
Instead, the parser will now do no transforms on the args passed to an external
command, letting the process spawning library deal with doing the appropriate
escaping.
For example, when running the following:
crates/nu-cli/src
nushell currently parses this as an external command. Before running the command, we check to see if
it's a directory. If it is, we "auto cd" into that directory, otherwise we go through normal
external processing.
If we put a trailing slash on it though, shells typically interpret that as "user is explicitly
referencing directory". So
crates/nu-cli/src/
should not be interpreted as "run an external command". We intercept a trailing slash in the head
position of a command in a pipeline as such, and inject a `cd` internal command.
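The two behaviors side by side:
```
# no trailing slash: parsed as a possible external command first,
# and we only "auto cd" if it turns out to be a directory
crates/nu-cli/src
# trailing slash: explicitly a directory reference, rewritten to an internal cd
crates/nu-cli/src/
```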
* from-eml initial ver
* Adding tests for `from-eml`
* Add eml to prepares_and_decorates_filesystem_source_files
* Sort the file order
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
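A sketch of exercising the new converter; the file name is illustrative and whether `--raw` is required here is an assumption:
```
# parse an .eml message into structured data
open --raw sample_message.eml | from-eml
```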
* headers plugin
* Remove plugin
* Add non-functioning headers command
* Add ability to extract headers from first row
* Refactor header extraction
* Rebuild indexmap with proper headers
* Rebuild result properly
* Compiling, probably wrapped too much?
* Refactoring
* Deal with case of empty header cell
* Deal with case of empty header cell
* Fix formatting
* Fix linting, attempt 2.
* Move whole_stream_command(Headers) to more appropriate section
* ... more linting
* Return Err(ShellError...) instead of panic, yield each row instead of entire table
* Insert Column[index] if no header info is found.
* Update error description
* Add initial test
* Add tests for headers command
* Lint test cases in headers
* Change ShellError for headers, Add sample_headers file to utils.rs
* Add empty sheet to test file
* Revert "Add empty sheet to test file"
This reverts commit a4bf38a31d.
* Show error message when given empty table
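A sketch of the intended use, assuming a spreadsheet whose first row holds the column names; the file and sheet names mirror the test fixture but the exact pipeline shape is an assumption:
```
# promote the first row of the table to column headers;
# empty header cells fall back to Column[index]
open sample_headers.xlsx | get Sheet1 | headers
```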
* WIP: move to bytes codec
* Progress on adding collect helpers
* Progress on adding collect helpers
* Add in line splitting back to lines
* Lines outputting line primitives
* Close to ready?
* Finish fixing lines
* clippy fixes
* fmt fixes
* removed unused code
* Cleanup a few bits
* Cleanup a few bits
* Cleanup a few more bits
* Fix failing test with corrected test case
This improves incremental build time when working on what was previously
the root package. For example, previously all plugins would be rebuilt
with a change to `src/commands/classified/external.rs`, but now only
`nu-cli` will have to be rebuilt (and anything that depends on it).
In particular, one thing that we can't (properly) do before this commit
is consuming an infinite input stream. For example:
```
yes | grep y | head -n10
```
will give 10 "y"s in most shells, but blocks indefinitely in nu. This PR
resolves that by doing blocking I/O in threads, and reducing the `await`
calls we currently have in our pipeline code.
* typo fixes
* Change signature to take in short-hand flags
* update help information
* Parse short-hand flags as their long counterparts
* lints
* Modified a couple tests to use shorthand flags
* Fixed mv not throwing error when the source path was invalid
* Fixed failing test
* Fixed another lint error
* Fix $PATH conflicts in .gitpod.Dockerfile (#1349)
- Use the correct user for gitpod Dockerfile.
- Remove unneeded packages (curl, rustc) from gitpod Dockerfile.
* Added test to check for the error
* Fixed linting error
* Fixed mv not moving files on Windows. (#1342)
Move files correctly in windows.
* Fixed mv not throwing error when the source path was invalid
* Fixed failing test
* Fixed another lint error
* Added test to check for the error
* Fixed linting error
* Changed error message
* Typo and fixed test
Co-authored-by: Sean Hellum <seanhellum45@gmail.com>
* Added attributes to from-xml command
* Added attributes as their own rows
* Removed unnecessary lifetime declarations
* from-xml now has children and attributes side by side
* Fixed tests and linting
* Fixed lint-problem
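A small sketch of the attribute handling; the XML snippet is illustrative:
```
# attributes now show up alongside child nodes in the parsed output
echo '<note id="1"><to>World</to></note>' | from-xml
```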
* Switch to using `shell`
Switch to using the shell for subprocesses to enable more natural shelling out.
* Update external.rs
* This is a test with .shell() for external
* El pollo loco's PR
* co co co
* Attempt to fix windows
* Fmt
* Less is more?
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Restructure and streamline token expansion
The purpose of this commit is to streamline the token expansion code, by
removing aspects of the code that are no longer relevant, removing
pointless duplication, and eliminating the need to pass the same
arguments to `expand_syntax`.
The first big-picture change in this commit is that instead of a handful
of `expand_` functions, which take a TokensIterator and ExpandContext, a
smaller number of methods on the `TokensIterator` do the same job.
The second big-picture change in this commit is fully eliminating the
coloring traits, making coloring a responsibility of the base expansion
implementations. This also means that the coloring tracer is merged into
the expansion tracer, so you can follow a single expansion and see how
the expansion process produced colored tokens.
One side effect of this change is that the expander itself is marginally
more error-correcting. The error correction works by switching from
structured expansion to `BackoffColoringMode` when an unexpected token
is found, which guarantees that all spans of the source are colored, but
may not be the most optimal error recovery strategy.
That said, because `BackoffColoringMode` only extends as far as a
closing delimiter (`)`, `]`, `}`) or pipe (`|`), it does result in
a fairly granular correction strategy.
The current code still produces an `Err` (plus a complete list of
colored shapes) from the parsing process if any errors are encountered,
but this could easily be addressed now that the underlying expansion is
error-correcting.
This commit also colors any spans that are syntax errors in red, and
causes the parser to include some additional information about what
tokens were expected at any given point where an error was encountered,
so that completions and hinting could be more robust in the future.
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Also, this commit makes `ls` a per-item command.
A command that processes things item by item may still take some time to stream
out the results from a single item. For example, `ls` on a directory with a lot
of files could be interrupted in the middle of showing all of these files.
This commit changes the way we shell out externals when using the `"$it"` argument. It also pipes each row to an external's stdin if no `"$it"` argument is present for external commands.
Further separation of logic (preparing the external's command arguments, getting the data for piping, emitting values, spawning processes) will give us a better idea of the lower level details of external commands until we can find the right abstractions for making them more generic and unifying them within the pipeline calling logic of Nu's internals and externals.
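Two illustrative pipelines for the behaviors above; using `^` to force the external form of echo and cat is an assumption:
```
# with "$it", each row's value is substituted into the external's arguments
ls | get name | ^echo $it
# without "$it", each row is piped to the external's stdin instead
ls | get name | ^cat
```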
* Put a sample_data.ods file for testing
This is a copy of the sample_data.xlsx file but in ods format
* Add the from-ods command
Most of the work was doing `rg xlsx` and then copy/paste with light editing
* Add tests for the from-ods command
* Fix failing test
The problem was improper filename sorting in the test `prepares_and_decorates_filesystem_source_files`
* Clippy fixes
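A sketch of exercising the new command against the fixture added above; the use of `--raw` is an assumption:
```
# read an OpenDocument spreadsheet the same way xlsx files are read
open --raw sample_data.ods | from-ods
```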
* Finish converting to use clippy
* fix warnings in new master
* fix windows
* fix windows
Co-authored-by: Artem Vorotnikov <artem@vorotnikov.me>
* start playing with ways to use the uniq command
* WIP
* Got uniq working, but still need to figure out args issue and add tests
* Add some tests for uniq
* fmt
* remove commented out code
* Add documentation and some additional tests showing uniq values and rows. Also removed args TODO
* add changes that didn't get committed
* whoops, I didn't save the docs correctly...
* fmt
* Add a test for uniq with nested json
* Add another test
* Fix unique-ness when json keys are out of order and make the test json more complicated
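Two illustrative pipelines for deduplicating values and rows; the inputs are just examples:
```
# distinct primitive values
echo 1 1 2 3 3 | uniq
# distinct rows of a table
open sample.csv | uniq
```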
Add tests for ~tilde expansion:
- test that "~" is expanded (no more "~" in output)
- ensure that "1~1" is not expanded to "1/home/user1" as it was
before
Fixes #972
Note: the first test does not check the literal expansion because
the path on Windows is expanded as a Linux path, but the correct
expansion may come for free once `shellexpand` starts using the `dirs`
crate too (https://github.com/netvl/shellexpand/issues/3).
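Roughly, the two cases the new tests cover; the paths are illustrative:
```
# "~" at the start of a path is expanded to the home directory
ls ~/projects
# "~" in the middle of a word is left alone: this stays literally 1~1
echo 1~1
```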
This commit contains two improvements:
- Support for a Range syntax (and a corresponding Range value)
- Work towards a signature syntax
Implementing the Range syntax resulted in cleaning up how operators in
the core syntax works. There are now two kinds of infix operators
- tight operators (`.` and `..`)
- loose operators
Tight operators may not be interspersed (`$it.left..$it.right` is a
syntax error). Loose operators require whitespace on both sides of the
operator, and can be arbitrarily interspersed. Precedence is left to
right in the core syntax.
Note that delimited syntax (like `( ... )` or `[ ... ]`) is a single
token node in the core syntax. A single token node can be parsed from
beginning to end in a context-free manner.
The rule for `.` is `<token node>.<member>`. The rule for `..` is
`<token node>..<token node>`.
Loose operators all have the same syntactic rule: `<token
node><space><loose op><space><token node>`.
The second aspect of this pull request is the beginning of support for a
signature syntax. Before implementing signatures, a necessary
prerequisite is for the core syntax to support multi-line programs.
That work establishes a few things:
- `;` and newlines are handled in the core grammar, and both count as
"separators"
- line comments begin with `#` and continue until the end of the line
In this commit, multi-token productions in the core grammar can use
separators interchangeably with spaces. However, I think we will
ultimately want a different rule preventing separators from occurring
before an infix operator, so that the end of a line is always
unambiguous. This would avoid gratuitous differences between modules and
repl usage.
We already effectively have this rule, because otherwise `x<newline> |
y` would be a single pipeline, but of course that wouldn't work.
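A few illustrative lines for the rules above; the command names and values are just examples, and whether each line parses exactly like this at this stage is an assumption:
```
echo 1..5                 # ".." is a tight operator: no whitespace around it
ls | where size > 1kb     # ">" is a loose operator and needs spaces on both sides
# $it.left..$it.right would be a syntax error: tight operators may not be interspersed
echo hello; echo world    # ";" and newlines both count as separators
# and a line beginning with "#" is a comment
```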
`left =~ right` returns true if left contains right, using Rust's
`String::contains`. `!~` is the negated version.
A new `apply_operator` function is added which decouples evaluation from
`Value::compare`. This returns a `Value` and opens the door to
implementing `+` for example, though it wouldn't be useful immediately.
The `operator!` macro had to be changed slightly as it would choke on
`~` in arguments.
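For example, with an illustrative column and strings:
```
# "=~" is true when the left string contains the right one; "!~" is the negation
ls | where name =~ "test"
ls | where name !~ "tmp"
```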
After the previous commit, nushell uses PrettyDebug and
PrettyDebugWithSource for our pretty-printed display output.
PrettyDebug produces a structured `pretty.rs` document rather than
writing directly into a fmt::Formatter, and types that implement
`PrettyDebug` have a convenience `display` method that produces a string
(to be used in situations where `Display` is needed for compatibility
with other traits, or where simple rendering is appropriate).