It is no longer attached to a Block. This makes access to overlays more
streamlined by removing one level of indirection. It also makes it easier to
create standalone overlays without a block, which might come in handy.
* Add 'export env' dummy command
* (WIP) Abstract away module exportables as Overlay
* Switch to Overlays for use/hide
Works for decls only right now.
* Fix passing import patterns of hide to eval
* Simplify use/hide of decls
* Add ImportPattern as Expr; Add use env eval
Still no parsing of "export env", so I can't test it yet.
* Refactor export parsing; Add InternalError
* Add env var export and activation; Misc changes
Now it is possible to `use` an env var that was exported from a module.
This commit also adds some new errors and other small changes.
* Add env var hiding
* Fix eval not recognizing hidden decls
Without this change, after calling `hide foo`, the evaluator does not know
whether a custom command named "foo" was hidden during parsing; therefore,
it is not possible to reliably throw an error about the "foo" name not
being found.
* Add use/hide/export env var tests; Cleanup; Notes
* Ignore hide env related tests for now
* Fix main branch merge mess
* Fixed multi-word export def
* Fix hiding tests on Windows
* Remove env var hiding for now
Typing `selector -qa` into nu would cause a `panic!`.
This happened because the inner loop incremented `idx`, which was only
bounds-checked in the outer loop, and then used it to index into
`lite_cmd.parts[idx]`.
With the fix, we now break out of the loop instead.
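Roughly, the pattern behind the fix looks like this (hypothetical `scan_flags`/`parts` names, not the actual lite parser code):
```rust
// Hedged sketch: illustrates the out-of-bounds pattern, not the real parser.
fn scan_flags(parts: &[String]) {
    let mut idx = 0;
    while idx < parts.len() {
        // Inner loop advances idx while consuming flag-like parts.
        while parts[idx].starts_with('-') {
            idx += 1;
            // Without this check, the next `parts[idx]` access panics once the
            // inner loop walks past the end; the fix breaks out instead.
            if idx >= parts.len() {
                return;
            }
        }
        idx += 1;
    }
}

fn main() {
    // e.g. `selector -qa` produces a trailing flag-like part.
    scan_flags(&["selector".to_string(), "-qa".to_string()]);
}
```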
Co-authored-by: ahkrr <alexhk@protonmail.com>
The introduction of `use <file.nu>` added the possibility of calling
`working_set.add_file()` more than once per parse pass. Some of the
logic handling the file content offsets prevented this from working;
hopefully, this commit fixes it.
Currently, `use spam.nu` creates a module `spam`. Therefore, after the
first `use`, it is possible to call both `use spam.nu` and `use spam`
with the same effect.
In some rare cases, the global predeclarations would clash, for example:
> module spam { export def foo [] { "foo" } }; def foo [] { "bar" }
In the example, `foo [] { "bar" }` would get predeclared first; then the
predeclaration would be overwritten and consumed by `foo [] { "foo" }`
inside the module; then, when parsing the actual `foo [] { "bar" }`, the
parser would not find its predeclaration.
* Fixes a panic when defining two commands with the same name, caused by a
declaration not being found after its predeclaration was consumed.
* Adds a new error if a custom command is defined more than once in one
block.
* Add some tests
* Hiding logic is simplified and fixed so you can hide and unhide the
same def repeatedly.
* Separates predeclared ids into their own data structure to protect them
from hiding. Otherwise, you could hide the predeclared entry and the
actual def would panic.
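A rough sketch of the idea, with hypothetical field and type names rather than the actual engine internals:
```rust
use std::collections::HashMap;

type DeclId = usize;

// Hedged sketch: hypothetical scope frame, not the real nu-protocol types.
struct ScopeFrame {
    decls: HashMap<Vec<u8>, DeclId>,    // visible, hideable definitions
    predecls: HashMap<Vec<u8>, DeclId>, // predeclarations, protected from `hide`
}

impl ScopeFrame {
    fn hide_decl(&mut self, name: &[u8]) -> Option<DeclId> {
        // Only `decls` can be hidden; predeclarations stay untouched so the
        // later `def` body still finds its id and does not panic.
        self.decls.remove(name)
    }
}

fn main() {
    let mut frame = ScopeFrame { decls: HashMap::new(), predecls: HashMap::new() };
    frame.predecls.insert(b"foo".to_vec(), 0);
    // Hiding "foo" does not touch the predeclaration.
    assert_eq!(frame.hide_decl(b"foo"), None);
    assert!(frame.predecls.contains_key(&b"foo".to_vec()));
}
```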
* compiles on nightly now. (breaking change)
* less deps
* Switch over to new resolver
(it's been stable for a while.)
* let's leave num-format for another PR
It is implemented as a preliminary check when parsing a call and relies
on the fact that a token that successfully parses as a range is unlikely
to be a valid path or command name.
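A hedged sketch of that pre-check, with a hypothetical `looks_like_range` helper standing in for the real range parser:
```rust
// Hedged sketch: a hypothetical pre-check, not the actual parser function.
// If the head token parses as a range, treat it as an expression rather than
// trying to resolve it as a command name or path.
fn looks_like_range(token: &str) -> bool {
    let (lhs, rhs) = match token.split_once("..") {
        Some(parts) => parts,
        None => return false,
    };
    let is_bound = |s: &str| s.is_empty() || s.parse::<i64>().is_ok();
    // At least one bound must be present; both may be negative numbers.
    !(lhs.is_empty() && rhs.is_empty()) && is_bound(lhs) && is_bound(rhs)
}

fn main() {
    assert!(looks_like_range("-10..-1"));
    assert!(looks_like_range("..10"));
    assert!(!looks_like_range("docs..old")); // stays a path/command candidate
}
```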
* Expand path when converting value -> PathBuf
Also includes Tagged<PathBuf>.
Fixes #3605
* Expand path for PATH env. variable
Fixes #1834
* Remove leftover Cows after nu-path refactor
There were some unnecessary Cow conversions left over from the old
nu-path implementation.
* Use canonicalize in source command; Improve errors
Previously, `source` used `expand_path()`, which does not follow
symlinks.
As a follow-up, I improved the source error messages so they now explain
why the source file could not be canonicalized or read into a string.
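A minimal sketch of the resulting flow, with plain strings standing in for Nushell's actual error type:
```rust
use std::fs;
use std::path::Path;

// Hedged sketch: hypothetical error strings instead of Nushell's ShellError.
fn load_source(path: &Path) -> Result<String, String> {
    // Canonicalizing follows symlinks, unlike a plain expand_path-style expansion.
    let canonical = fs::canonicalize(path)
        .map_err(|e| format!("could not canonicalize {}: {}", path.display(), e))?;
    fs::read_to_string(&canonical)
        .map_err(|e| format!("could not read {} into a string: {}", canonical.display(), e))
}
```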
This means that commands cannot start with these characters.
However, we get the following benefits:
* Negative numbers, e.g., `-10`
* Ranges with negative numbers, e.g., `-10..-1`
* Left-unbounded ranges, e.g., `..10`
* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize into two versions; Add TODOs
One that takes `relative_to` and one that doesn't.
More TODO notes.
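What the two shapes might look like, sketched with assumed names and types rather than the real nu-path API:
```rust
use std::io;
use std::path::{Path, PathBuf};

// Hedged sketch: assumed signatures, not the actual nu-path functions.
pub fn canonicalize(path: impl AsRef<Path>) -> io::Result<PathBuf> {
    // The real crate also expands ~ and ndots first; here we just canonicalize.
    std::fs::canonicalize(path)
}

pub fn canonicalize_with(
    path: impl AsRef<Path>,
    relative_to: impl AsRef<Path>,
) -> io::Result<PathBuf> {
    // Join relative paths onto `relative_to` before canonicalizing.
    std::fs::canonicalize(relative_to.as_ref().join(path))
}
```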
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
The trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths, but for now
it at least covers the basic cases.
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
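A small sketch of the guard, with a hypothetical `expand_ndots_str` standing in for the real expansion:
```rust
use std::path::{Path, PathBuf};

// Hedged sketch: `expand_ndots_str` is a hypothetical stand-in for the real logic.
fn expand_ndots(path: &Path) -> PathBuf {
    match path.to_str() {
        // Only valid UTF-8 paths go through the string-based ndots expansion.
        Some(s) => PathBuf::from(expand_ndots_str(s)),
        // Non-UTF-8 paths are returned untouched; no lossy conversion happens.
        None => path.to_path_buf(),
    }
}

fn expand_ndots_str(s: &str) -> String {
    // Placeholder: the real code rewrites "..." into "../..", etc.
    s.to_string()
}
```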
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
* chore: Replace surf with reqwest
Removes a lot of older, duplicate versions of some dependencies
(roughly 90 dependencies removed in total)
* chore: Remove syn 0.11
* chore: Remove unnecessary features from ptree
Removes some more duplicate dependencies
* cargo update
* Ensure we run the fetch and post plugins on the tokio runtime
* Fix clippy warning
* fix: Github requires a user agent on requests
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Allow different names for ...rest
* Resolves #3945
* This change requires an explicit name for the rest argument in `WholeStreamCommand`,
which is why there are so many changed files (see the sketch after this list).
* Remove redundant clone
* Add tests
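A rough sketch of what a command signature might look like after the change; the exact builder call shape is an assumption from memory, not a verified excerpt of the API:
```rust
use nu_protocol::{Signature, SyntaxShape};

// Hedged sketch: the exact `rest` builder signature is assumed, not verified.
fn signature() -> Signature {
    Signature::build("my-command")
        // Previously the rest argument was implicitly named; now it carries
        // an explicit name ("paths" here) that shows up in help and errors.
        .rest("paths", SyntaxShape::Any, "the files to operate on")
}
```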
Given we can write nu scripts, splitting code into many smaller nu scripts becomes necessary as a codebase grows.
In general, when we work with paths and files we seem to face quite a few difficulties. Here we tackle one of them: sourcing
files that themselves source other nu files, and so forth. The current working directory becomes important here, and being in a different directory
when sourcing scripts will not work, mostly because we expand the path against the current working directory and parse the files when a `source`
command call is made.
For the moment, we introduce a `lib_dirs` configuration value and, unfortunately, introduce a new dependency in `nu-parser` (`nu-data`) to get
a handle on the configuration file and retrieve it. This should give clues and ideas as the new parser engine continues (introduce a way to also know paths). A sketch of the resulting path lookup follows the example below.
With this PR we can do the following:
Let's assume we want to write a nu library called `my_library`. We will have the code in a directory called `project`. The file structure will look like this:
```
project/my_library.nu
project/my_library/hello.nu
project/my_library/name.nu
```
This "pattern" works well, that is, when creating a library have a directory named `my_library` and next to it a `my_library.nu` file. Filling them like this:
`project/my_library.nu`:
```
source my_library/hello.nu
source my_library/name.nu
```
`project/my_library/hello.nu`:
```
def hello [] {
    "hello world"
}
```
`project/my_library/name.nu`:
```
def name [] {
    "Nu"
}
```
Assuming this `project` directory is stored at `/path/to/lib/project`, we can do:
```
config set lib_dirs ['/path/to/lib/project']
```
Given we have this `lib_dirs` configuration value, we can be anywhere while using Nu and do the following:
```
source my_library.nu
echo (hello) (name)
```
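A rough sketch of the lookup this enables, with a hypothetical helper rather than the actual parser code:
```rust
use std::path::{Path, PathBuf};

// Hedged sketch: hypothetical resolution order, not the real `source` handling.
fn resolve_source_path(requested: &Path, cwd: &Path, lib_dirs: &[PathBuf]) -> Option<PathBuf> {
    // Try the current working directory first, then each configured lib dir,
    // and return the first candidate that actually exists on disk.
    std::iter::once(cwd.to_path_buf())
        .chain(lib_dirs.iter().cloned())
        .map(|dir| dir.join(requested))
        .find(|candidate| candidate.exists())
}
```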
* Allow sourcing paths with emojis
* Add source command tests for emoji paths
* Fmt
* Disable source tests on Windows with illegal paths
* Test sourcing also ASCII and single-quoted paths
Some environment variables, such as `RUST_LOG`, include equals signs. Nushell
should support this in the shorthand environment variable syntax so that
developers using these variables can control them easily. We accomplish this by
swapping `std::str::split` for `std::str::splitn`, which ensures that we only
consider the first equals sign in the string instead of all of them, as we did
previously.
Closes #3867
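A small sketch of the difference, using only the standard library (variable names are illustrative):
```rust
// Hedged sketch: shows why splitn(2, '=') keeps values containing '=' intact.
fn parse_env_shorthand(token: &str) -> Option<(&str, &str)> {
    // splitn(2, ..) splits only on the first '=', so "RUST_LOG=nu=trace"
    // yields ("RUST_LOG", "nu=trace") instead of dropping the value's '='.
    let mut parts = token.splitn(2, '=');
    let name = parts.next()?;
    let value = parts.next()?;
    Some((name, value))
}

fn main() {
    assert_eq!(
        parse_env_shorthand("RUST_LOG=nu=trace"),
        Some(("RUST_LOG", "nu=trace"))
    );
}
```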
In Nu we have variables (e.g., `$var-name`) and these contain `Value` types.
This means we can bind any structured data to variables, and the column path syntax
(e.g., `$variable.path.to`) allows flexibility for "querying" said structures.
Here we offer completions for these. For example, in a Nushell session the
variable `$nu` contains environment values among other things. If we wanted to
print some environment variable to the screen (say the var `SHELL`), we do:
```
> echo $nu.env.SHELL
```
With completions we can now do `echo $nu.env.S<TAB>` and we get suggestions
that start at the column path `$nu.env` with vars starting with the letter `S`;
in this case, `SHELL` appears in the suggestions.
* Fix parser expanding dots where it shouldn't
Previously, the parser would expand "a...b" as "a../..b". Now, >2 dots
are only expanded when the whole path component consists of dots (i.e.,
"..." expands to "../.." while "a...b" stays as it is).
* Respect OS separator when expanding >2 dots
"..." now expands to either "../.." or "..\..", based on the host OS.
Not every lite command's first part denotes a real command (in cases like subcommands, it is the second lite part that accounts for the command name). This is important so that we can refer to its span correctly. Here we fix missing flag error reporting to include the command's name span correctly, whether it is a command or a subcommand.
* Add first prototype of functionality to parse numbers in parentheses (). Needs testing and more wide testing + integration
* Fix styling issue
* Try something else by copying existing matching code
* Fix formatting
* Fix the parser to accept numbers in parentheses. Not really happy with the code, but let's see
* Refactor to only use once the parsing of strings into numbers
* Remove errors that are not used
* Fix formatting
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>