* Fixing the flag issue
* whoops, forgot the original point of the function
* Update deparse.rs
* Update deparse.rs
* Update deparse.rs
* maybe this might work
* fmt
* quotation marks work now due to a rigorous check for args.
* fmt and clippy
* kept the original escape_quote_string(), escaped " and \
* removed script.nu
* Added appropriate comments.
* WIP: Start laying overlays
* Rename Overlay->Module; Start adding overlay
* Revamp adding overlay
* Add overlay add tests; Disable debug print
* Fix overlay add; Add overlay remove
* Add overlay remove tests
* Add missing overlay remove file
* Add overlay list command
* (WIP?) Enable overlays for env vars
* Move OverlayFrames to ScopeFrames
* (WIP) Move everything to overlays only
ScopeFrame contains nothing but overlays now
* Fix predecls
* Fix wrong overlay id translation and aliases
* Fix broken env lookup logic
* Remove TODOs
* Add overlay add + remove for environment
* Add a few overlay tests; Fix overlay add name
* Some cleanup; Fix overlay add/remove names
* Clippy
* Fmt
* Remove walls of comments
* List overlays from stack; Add debugging flag
Currently, the engine state ordering is somehow broken.
* Fix (?) overlay list test
* Fix tests on Windows
* Fix activated overlay ordering
* Check for active overlays equality in overlay list
This removes the -p flag: Either both parser and engine will have the
same overlays, or the command will fail.
* Add merging on overlay remove
* Change help message and comment
* Add some remove-merge/discard tests
* (WIP) Track removed overlays properly
* Clippy; Fmt
* Fix getting last overlay; Fix predecls in overlays
* Remove merging; Fix re-add overwriting stuff
Also some error message tweaks.
* Fix overlay error in the engine
* Update variable_completions.rs
* Adds flags and optional arguments to view-source (#5446)
* added flags and optional arguments to view-source
* removed redundant code
* removed redundant code
* fmt
* fix bug in shell_integration (#5450)
* fix bug in shell_integration
* add some comments
* enable cd to work with directory abbreviations (#5452)
* enable cd to work with abbreviations
* add abbreviation example
* fix tests
* make it configurable
* make cd recognize symbolic links (#5454)
* implement seq char command to generate single character sequence (#5453)
* add tmp code
* add seq char command
* Add split number flag in `split row` (#5434)
Signed-off-by: Yuheng Su <gipsyh.icu@gmail.com>
* Add two more overlay tests
* Add ModuleId to OverlayFrame
* Fix env conversion accidentally activating overlay
It activated an overlay from the permanent state prematurely, which would
cause `overlay add` to misbehave.
* Remove unused parameter; Add overlay list test
* Remove added traces
* Add overlay commands examples
* Modify TODO
* Fix $nu.scope iteration
* Disallow removing default overlay
* Refactor some parser errors
* Remove last overlay if no argument
* Diversify overlay examples
* Make it possible to update overlay's module
When the origin module is updated, `overlay add` loads the new module,
makes it the overlay's origin, and applies the changes. Before, it was
impossible to update the overlay if the module changed.
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
Co-authored-by: pwygab <88221256+merelymyself@users.noreply.github.com>
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
Co-authored-by: WindSoilder <WindSoilder@outlook.com>
Co-authored-by: Yuheng Su <gipsyh.icu@gmail.com>
Prior to this change we would recover the names for known
externals by looking up the span in the engine state. This would fail
when using an alias for two reasons:
1. In cases where we don't have a subcommand, like this:
```
>>> extern bat [filename: string]
>>> alias b = bat
>>> b some_file
'b' is not recognized as an internal or external command,
operable program or batch file.
```
The problem is that after alias expansion, we replace the span of the
expanded name with the original alias (this is done to alleviate
unrelated issues). The span contents we look up therefore contain `b`,
the alias, instead of the expanded command name.
2. In cases where there's a subcommand:
```
>>> alias g = git
>>> g push
thread 'main' panicked at 'internal error: span missing in file contents cache', crates\nu-protocol\src\engine\engine_state.rs:474:9
note: run with `RUST_BACKTRACE=1` environment variable to display a
backtrace
```
In this case, the span in the call starts where the expansion for the `g`
alias is defined and ends after `push` on the last command entered. This
is not a proper span and causes a panic when we try to look it up. Note
that this is the case for all expanded aliases that involve a
subcommand, but we never actually try to retrieve the contents for that
span in other cases.
Anyway, the new way of looking up the name is arguably cleaner
regardless of the issues mentioned above. But it's nice that it fixes
them too.
Co-authored-by: Hristo Filaretov <h.filaretov@protonmail.com>
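A minimal sketch of the cleaner lookup, using made-up stand-in types rather than the actual engine-state API: the known external's name is recovered from its declaration instead of from the span contents.
```rust
// Hypothetical stand-in types; the real nushell engine types differ.
struct Decl {
    name: String,
}

struct EngineState {
    decls: Vec<Decl>,
}

impl EngineState {
    // Instead of slicing the call-head span back out of the file contents
    // cache (which breaks for aliases), ask the declaration for its name.
    fn external_name(&self, decl_id: usize) -> &str {
        &self.decls[decl_id].name
    }
}

fn main() {
    let state = EngineState {
        decls: vec![Decl { name: "bat".into() }],
    };
    // Even if the user invoked the external through an alias (`b`),
    // the recovered name is the declared one.
    assert_eq!(state.external_name(0), "bat");
}
```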
* Initial implementation of ordered call args
* Run cargo fmt
* Fix some clippy lints
* Add positional len and nth
* Cargo fmt
* Remove more old nth calls
* Good ole rustfmt
* Add named len
Co-authored-by: Hristo Filaretov <h.filaretov@protonmail.com>
* Include license text in all crates
Three crates already have license texts, so I'm keeping them, but
symlinking the `LICENSE` from the top level to the rest of the crate
directories. This works as long as `cargo publish` is done on a Unix-y
system and not Windows.
Also bump the copyright year to end in 2022.
Signed-off-by: Michel Alexandre Salim <salimma@fedoraproject.org>
* Replace symlinks
Co-authored-by: sholderbach <sholderbach@users.noreply.github.com>
Previously, the parser tried to look up the predecl also in the
permanent state and if a definition with that name already existed, it
would try to update it, which is illegal.
* Add alias interning
Now, AliasId is used to reference aliases stored in EngineState, similar
to decls, blocks, etc.
* Fix wrong message
* Fix using decl instead of alias
* Extend also alias id visibility
* Merge also aliases from delta
* Add alias hiding code
Does not work yet but passes tests at least.
* Fix wrong alias lookup and visibility appending
* Add hide alias tests
* Fmt & Clippy
* Fix random clippy warnings in "which" command
* Switch to short-names when the path is a relative_path (a dir) and exit with an error if the path does not exist
* Remove debugging print line
* Show relative filenames... It does not work yet for ls ../
* Try something else to fix relative paths... it works, but the ../ code part is not very pretty
* Add canonicalize check and remove code clones
* Fix the canonicalize_with issue pointed out by kubouch. Not sure the prefix_str is what kubouch suggested
* Fix the canonicalize_with issue pointed out by kubouch. Not sure the prefix_str is what kubouch suggested
* Add single-dot expansion to nu-path
* Move value path expansion from parser to eval
Fixes #745
* Remove single dot expansion from parser
It is not necessary since it will get expanded anyway in the eval.
* Fix ls to display globs with relative paths
* Use pathdiff crate to get relative paths for ls
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
* Make env var eval order during "use" deterministic
Fixes #726.
* Merge delta after getting config
To make sure env vars are all in the engine state and not in the stack.
* Remember environment variables from previous scope
* Re-introduce env var hiding
Right now, hiding decls is broken
* Re-introduce hidden field of import patterns
All tests pass now.
* Remove/Address tests TODOs
* Fix test typo; Report hiding error
* Add a few more tests
* Fix wrong expected test result
It's no longer attached to a Block. Makes access to overlays more
streamlined by removing this one indirection. Also makes it easier to
create standalone overlays without a block which might come in handy.
* Add 'export env' dummy command
* (WIP) Abstract away module exportables as Overlay
* Switch to Overlays for use/hide
Works for decls only right now.
* Fix passing import patterns of hide to eval
* Simplify use/hide of decls
* Add ImportPattern as Expr; Add use env eval
Still no parsing of "export env" so I can't test it yet.
* Refactor export parsing; Add InternalError
* Add env var export and activation; Misc changes
Now it is possible to `use` env var that was exported from a module.
This commit also adds some new errors and other small changes.
* Add env var hiding
* Fix eval not recognizing hidden decls
Without this change, when calling `hide foo`, the evaluator does not know
whether a custom command named "foo" was hidden during parsing;
therefore, it is not possible to reliably throw an error about the "foo"
name not being found.
* Add use/hide/export env var tests; Cleanup; Notes
* Ignore hide env related tests for now
* Fix main branch merge mess
* Fixed multi-word export def
* Fix hiding tests on Windows
* Remove env var hiding for now
Typing `selector -qa` into nu would cause a `panic!`.
This was the case because the inner loop incremented the `idx`
that was only checked in the outer loop and used it to index into
`lite_cmd.parts[idx]`.
With the fix we now break out of the loop instead.
Co-authored-by: ahkrr <alexhk@protonmail.com>
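A rough sketch of the indexing pattern behind that panic and the fix, using a plain slice in place of `lite_cmd.parts` (the names are illustrative, not the real parser code):
```rust
fn scan_parts(parts: &[&str]) {
    let mut idx = 0;
    // The outer loop checks the bound...
    while idx < parts.len() {
        println!("command part: {}", parts[idx]);
        // ...but the inner loop also advanced `idx` and then indexed into
        // the slice without re-checking it, which could read past the end.
        loop {
            idx += 1;
            if idx >= parts.len() {
                // The fix: break out instead of indexing out of bounds.
                break;
            }
            println!("flag part: {}", parts[idx]);
        }
    }
}

fn main() {
    // Something like `selector -qa` lexes into two parts.
    scan_parts(&["selector", "-qa"]);
}
```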
The introduction of `use <file.nu>` added the possibility of calling
`working_set.add_file()` more than once per parse pass. Some of the
logic handling the file contents offsets prevented it from working and
hopefully, this commit fixes it.
Currently, `use spam.nu` creates a module `spam`. Therefore, after the
first `use`, it is possible to call both `use spam.nu` and `use spam`
with the same effect.
In some rare cases, the global predeclarations would clash, for example:
> module spam { export def foo [] { "foo" } }; def foo [] { "bar" }
In the example, the `foo [] { "bar" }` would get predeclared first, then
the predeclaration would be overwritten and consumed by `foo [] {"foo"}`
inside the module, then when parsing the actual `foo [] { "bar" }`, it
would not find its predeclaration.
* Fixes a panic with defining two commands with the same name caused by
declaration not found after predeclaration.
* Adds a new error if a custom command is defined more than once in one
block.
* Add some tests
* Hiding logic is simplified and fixed so you can hide and unhide the
same def repeatedly.
* Separates predeclared ids into its own data structure to protect them
from hiding. Otherwise, you could hide the predeclared variable and
the actual def would panic.
* compiles on nightly now. (breaking change)
* less deps
* Switch over to new resolver
(it's been stable for a while.)
* let's leave num-format for another PR
It is implemented as a preliminary check when parsing a call and relies
on the fact that a token that successfully parses as a range is unlikely
to be a valid path or command name.
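A hedged sketch of such a preliminary check (the helper below is purely illustrative; the real parser works on spans and syntax shapes): if the head token already parses as a range, it is not treated as a command name.
```rust
// Illustrative only: accept forms like "1..10", "-10..-1" and "..10".
fn parses_as_range(token: &str) -> bool {
    let Some((lhs, rhs)) = token.split_once("..") else {
        return false;
    };
    let is_bound = |s: &str| s.is_empty() || s.parse::<i64>().is_ok();
    is_bound(lhs) && is_bound(rhs)
}

fn main() {
    assert!(parses_as_range("-10..-1"));
    assert!(parses_as_range("..10"));
    // A typical command or path never parses as a range,
    // so call parsing proceeds as before.
    assert!(!parses_as_range("ls"));
}
```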
* Expand path when converting value -> PathBuf
Also includes Tagged<PathBuf>.
Fixes #3605
* Expand path for PATH env. variable
Fixes #1834
* Remove leftover Cows after nu-path refactor
There were some unnecessary Cow conversions leftover from the old
nu-path implementation.
* Use canonicalize in source command; Improve errors
Previously, `source` used `expand_path()` which does not follow
symlinks.
As a follow up, I improved the source error messages so they now tell
why the source file could not be canonicalized or read into string.
This means that commands cannot start with these characters.
However, we get the following benefits:
* Negative numbers, e.g. `-10`
* Ranges with negative numbers, e.g. `-10..-1`
* Left-unbounded ranges, e.g. `..10`
* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize to two versions; Add TODOs
One that takes `relative_to` and one that doesn't.
More TODO notes.
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
Trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths but for now
it at least covers basic cases.
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
* chore: Replace surf with reqwest
Removes a lot of older, duplication versions of some dependencies
(roughly 90 dependencies removed in total)
* chore: Remove syn 0.11
* chore: Remove unnecessary features from ptree
Removes some more duplicate dependencies
* cargo update
* Ensure we run the fetch and post plugins on the tokio runtime
* Fix clippy warning
* fix: Github requires a user agent on requests
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Allow different names for ...rest
* Resolves #3945
* This change requires an explicit name for the rest argument in `WholeStreamCommand`,
which is why there are so many changed files.
* Remove redundant clone
* Add tests
Given that we can write nu scripts, splitting them into many smaller scripts becomes necessary as a codebase grows.
In general, working with paths and files brings quite a few difficulties. Here we tackle one of them: sourcing
files that in turn source other nu files, and so forth. The current working directory matters here, and sourcing scripts
from a different directory will not work, mostly because we expand the path against the current working directory and
parse the files at the time the source command call is made.
For the moment, we introduce a `lib_dirs` configuration value and, unfortunately, a new dependency in `nu-parser` (`nu-data`) to get
a handle on the configuration file and retrieve it. This should give clues and ideas as the new parser engine evolves (e.g., a way for it to also know about paths).
With this PR we can do the following:
Let's assume we want to write a nu library called `my_library`. We will have the code in a directory called `project`. The file structure will look like this:
```
project/my_library.nu
project/my_library/hello.nu
project/my_library/name.nu
```
This "pattern" works well, that is, when creating a library have a directory named `my_library` and next to it a `my_library.nu` file. Filling them like this:
```
source my_library/hello.nu
source my_library/name.nu
```
```
def hello [] {
"hello world"
}
```
```
def name [] {
"Nu"
end
```
Assuming this `project` directory is stored at `/path/to/lib/project`, we can do:
```
config set lib_dirs ['/path/to/lib/project']
```
Given we have this `lib_dirs` configuration value, we can be anywhere while using Nu and do the following:
```
source my_library.nu
echo (hello) (name)
```
* Allow sourcing paths with emojis
* Add source command tests for emoji paths
* Fmt
* Disable source tests on Windows with illegal paths
* Test sourcing also ASCII and single-quoted paths
Some environment variables, such as `RUST_LOG`, include equals signs. Nushell
should support this in the shorthand environment variable syntax so that
developers using these variables can control them easily. We accomplish this by
swapping `std::str::split` for `std::str::splitn`, which ensures that we only
consider the first equals sign in the string instead of all of them, which we
did previously.
Closes #3867
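A small Rust illustration of the difference described above (not the exact nushell code): `splitn(2, '=')` keeps everything after the first `=` intact, whereas `split('=')` would fragment the value.
```rust
fn parse_shorthand(assignment: &str) -> Option<(&str, &str)> {
    // Only the first '=' separates the name from the value, so values
    // containing '=' (e.g. RUST_LOG filters) survive intact.
    let mut parts = assignment.splitn(2, '=');
    Some((parts.next()?, parts.next()?))
}

fn main() {
    let (name, value) = parse_shorthand("RUST_LOG=info,nu_parser=debug,span=1").unwrap();
    assert_eq!(name, "RUST_LOG");
    assert_eq!(value, "info,nu_parser=debug,span=1");
}
```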
In Nu we have variables (e.g. `$var-name`) and these contain `Value` types.
This means we can bind any structured data to variables, and column path syntax
(e.g. `$variable.path.to`) allows flexibility for "querying" said structures.
Here we offer completions for these. For example, in a Nushell session the
variable `$nu` contains environment values among other things. If we wanted to
see on the screen some environment variable (say the var `SHELL`) we do:
```
> echo $nu.env.SHELL
```
with completions we can now do: `echo $nu.env.S[TAB]` and we get suggestions
that start at the column path `$nu.env` with vars starting with the letter `S`;
in this case `SHELL` appears in the suggestions.
* Fix parser expanding dots where it shouldn't
Previously, the parser would expand "a...b" as "a../..b". Now, >2 dots
are only expanded when the whole path component consists of dots (i.e.,
"..." expands to "../.." while "a...b" stays as it is).
* Respect OS separator when expanding >2 dots
"..." now expands to either "../.." or "..\..", based on the host OS.
Not every lite command's first part denotes a real command (for sub commands, it is the second lite part that accounts for the command name). This matters so that we can refer to its span correctly. Here we fix the reported span to cover the command's name correctly, whether it is a command or a sub command, when reporting missing flag errors.
* Add first prototype of functionality to parse numbers in parentheses (). Needs more testing and wider integration
* Fix styling issue
* Try something else by copying existing matching code
* Fix formatting
* Fix the parser to accept numbers in parentheses. Not really happy with the code, but let's see
* Refactor to only use once the parsing of strings into numbers
* Remove errors that are not used
* Fix formatting
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
* remove parking_lot crate from nu-data as it is no longer being used
* remove commented out code from parse.rs
* remove commented out code from scope.rs
* Use expand_path to handle the path including tilde
* Publish path::expand_path for using in nu-command
* cargo fmt
Co-authored-by: Wataru Yamaguchi <nagisamark2@gmail.com>
* Move tests into own file
* Move data structs to own file
* Move functions parsing 1 Token (primitives) into own file
* Rename param_flag_list to signature
* Add tests
* Fix clippy lint
* Change imports to new lexer structure
* Document the lexer and lightly improve its names
The bulk of this pull request adds a substantial amount of new inline
documentation for the lexer. Along the way, I made a few minor changes
to the names in the lexer, most of which were internal.
The main change that affects other files is renaming `group` to `block`,
since the function is actually parsing a block (a list of groups).
* Further clean up the lexer
- Consolidate the logic of the various token builders into a single type
- Improve and clean up the event-driven BlockParser
- Clean up comment parsing. Comments now contain their original leading
whitespace as well as trailing whitespace, and know how to move some
leading whitespace back into the body based on how the lexer decides
to dedent the comments. This preserves the original whitespace
information while still making it straightforward to eliminate leading
whitespace in help comments.
* Update meta.rs
* WIP
* fix clippy
* remove unwraps
* remove unwraps
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Jonathan Turner <jonathan.d.turner@gmail.com>
* Add possibility to declare optional parameters and switch flags
With this commit applied it is now possible to specify optional parameters and flags
as switches. This PR **only** makes guarantees about **parsing** optional flags and
switches correctly. This PR **does not guarantee flawless functionality** of
optional parameters / switches within scripts.
Example:
test.nu
```shell
def my_command [
opt_param?
opt_param2?: int
--switch
] {echo hi nushell}
```
```shell
> source test.nu
> my_command -h
───┬─────────
0 │ hi
1 │ nushell
───┴─────────
Usage:
> my_command <mandatory_param> (opt_param) (opt_param2) {flags}
Parameters:
<mandatory_param>
(opt_param)
(opt_param2)
Flags:
-h, --help: Display this help message
--switch
--opt_flag <any>
```
* Update def docs
* Add rest arg to def
This commit adds the ability to define the rest parameter of a def
command. It does not implement the functionality to expand the rest argument in
a user defined def function.
The rest argument has to be exactly worded "...rest".
Example after this PR is applied:
file test.nu
```shell
def my_command [
...rest:int # My rest arg
] {
echo 1 2 3
}
```
```shell
> source test.nu
> my_command -h
Usage:
> my_command ...args {flags}
Parameters:
...args: My rest arg
Flags:
-h, --help: Display this help message
```
* Fix space in help on wrong side
* Remove wrong test case
* Parse long and short flags without a space correctly
* Update param_flag_list.rs
* Update param_flag_list.rs
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
* move commands, futures.rs, script.rs, utils
* move over maybe_print_errors
* add nu_command crate references to nu_cli
* in commands.rs open up to pub mod from pub(crate)
* nu-cli, nu-command, and nu tests are now passing
* cargo fmt
* clean up nu-cli/src/prelude.rs
* code cleanup
* for some reason lex.rs was not formatted, may be causing my error
* remove mod completion from lib.rs which was not being used along with quickcheck macros
* add in allow unused imports
* comment out one failing external test; comment out one failing internal test
* revert commenting out failing tests; something else might be going on; someone with a windows machine should check and see what is going on with these failing windows tests
* Update Cargo.toml
Extend the optional features to nu-command
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
* Put parse_definition related funcs into own module
* Add failing lexer test
* Implement Parsing of definition signature
This commit changes how the signature of a function is parsed. Before
there was a little bit of "quick-and-dirty" string-matching/parsing involved.
Now, a signature is a little bit more properly parsed.
The grammar of a definition signature understood by these parsing-functions is
as follows:
`[ (parameter | flag | <eol>)* ]`
where
parameter is:
`name (<:> type)? (<,> | <eol> | (#Comment <eol>))?`
flag is:
`--name (-shortform)? (<:> type)? (<,> | <eol> | (#Comment <eol>))?`
(Note: After the last item no <,> has to come.)
Note: It is now possible to pass comments to flags and parameters
Example:
[
d:int # The required d parameter
--x (-x):string # The all powerful x flag
--y (-y):int # The accompanying y flag
]
(Sadly, there seems to be a bug (or is this expected behaviour?) in the lexer, because of which `--x(-x)` would
be treated as one baseline token and is therefore not correctly recognized as two. For
now, a space has to be inserted.)
During the implementation of the module, two questions arose:
Should flag/parameter names be allowed to be type names?
Example case:
```shell
def f [ string ] { echo $string }
```
Currently, an error is thrown.
* Fix clippy lints
* Remove wrong comment
* Add spacing
* Add Cargo.lock
This commit adds comments preceding a command to the LiteCommand's new
field `comments`.
This can be useful, for example, when defining a function with `def`. Nushell
could pick up the comments and display them when the user types `help my_def_func`.
Example
```shell
# My echo
# It's much better :)
def my_echo [arg] { echo $arg }
```
The LiteCommand def will now contain the comments `My echo` and `It's much
better :)`.
The comment is not associated with the next command if there are one or more newlines
between them.
Example
```shell
# My echo

echo 42
```
This new functionality is similar to DocStrings. One might introduce a special
notation for such DocStrings, so that the parser can differentiate better
between discardable comments and useful documentation.
* Update dependencies
* Document the lexer and lightly improve its names
The bulk of this pull request adds a substantial amount of new inline
documentation for the lexer. Along the way, I made a few minor changes
to the names in the lexer, most of which were internal.
The main change that affects other files is renaming `group` to `block`,
since the function is actually parsing a block (a list of groups).
* Fix rustfmt
* Update lock
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Jonathan Turner <jonathan.d.turner@gmail.com>
* Begin allowing comments and multiline scripts.
* clippy
* Finish moving to groups. Test pass
* Keep going
* WIP
* WIP
* BROKEN WIP
* WIP
* WIP
* Fix more tests
* WIP: alias starts working
* Broken WIP
* Broken WIP
* Variables begin to work
* captures start working
* A little better but needs fixed scope
* Shorthand env setting
* Update main merge
* Broken WIP
* WIP
* custom command parsing
* Custom commands start working
* Fix coloring and parsing of block
* Almost there
* Add some tests
* Add more param types
* Bump version
* Fix benchmark
* Fix stuff
* Add parser improvements
Previously, everything starting with `$` was parsed as a column path.
With this commit applied, a lite_arg starting with `$` is parsed as
the most appropriate thing:
- $true/$false ==> Expression::Boolean
- $(...) ==> Invocation
- $it ==> ColumnPath
- Anything with at least one '.' ==> ColumnPath
- Anything else ==> Variable
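A rough sketch of that dispatch, with a toy enum standing in for the real expression shapes:
```rust
#[derive(Debug)]
enum Parsed {
    Boolean(bool),
    Invocation,
    ColumnPath,
    Variable,
}

// Illustrative classification of a lite_arg that starts with '$'.
fn classify_dollar_arg(lite_arg: &str) -> Parsed {
    match lite_arg {
        "$true" => Parsed::Boolean(true),
        "$false" => Parsed::Boolean(false),
        "$it" => Parsed::ColumnPath,
        s if s.starts_with("$(") => Parsed::Invocation,
        s if s.contains('.') => Parsed::ColumnPath,
        _ => Parsed::Variable,
    }
}

fn main() {
    println!("{:?}", classify_dollar_arg("$nu.env.SHELL")); // ColumnPath
    println!("{:?}", classify_dollar_arg("$my-var"));       // Variable
}
```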
* Ignore failing tests
* Implement exclusive and inclusive ranges with .. and ..=
This commit adds right-exclusive ranges.
The original a..b inclusive syntax was changed to reflect the Rust notation.
New a..=b syntax was introduced to have the old behavior.
Currently, both `a..` and `b..=` are valid, and it is unclear whether
restrictions should be imposed.
The original issue suggests .. for inclusive and ..< for exclusive ranges,
this can be implemented by making simple changes to this commit.
* Fix collect tests by changing ranges to ..=
* Fix clippy lints in exclusive range matching
* Implement exclusive ranges using `..<`
* remove unused dependencies
* moved umask to cfg(unix)
* changed Inflector to inflector, hoping it fixes the issue.
* roll back Inflector
* removed commented out deps now that everything looks good.
Previously, lite parse would stack up opening delimiters in a vec, and if we
didn't close everything off, it would simply return an error with a partial form
that didn't include the missing closing delimiters. This commit adds those
delimiters so that `classify_block` can parse correctly.
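A minimal sketch of the completion step under the assumption that the unclosed openers are kept on a stack (the token representation here is invented for illustration):
```rust
// Illustrative: pair each unclosed opener with its closing delimiter.
fn append_missing_closers(tokens: &mut Vec<char>, open_stack: &[char]) {
    for opener in open_stack.iter().rev() {
        let closer = match opener {
            '(' => ')',
            '[' => ']',
            '{' => '}',
            _ => continue,
        };
        tokens.push(closer);
    }
}

fn main() {
    // e.g. the user typed `echo [1 2 (3` and hit enter.
    let mut tokens: Vec<char> = "echo [1 2 (3".chars().collect();
    let open_stack = vec!['[', '('];
    append_missing_closers(&mut tokens, &open_stack);
    let completed: String = tokens.into_iter().collect();
    assert_eq!(completed, "echo [1 2 (3)]");
}
```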
The completion engine maps completion locations to spans on a line, which
indicate whether to complete a command name, flag name, argument, and so on.
Initial implementation is simplistic, with some rough edges, since it relies
heavily on the parser's interpretation. For example, in `du -`,
if asking for completions, `-` is considered a positional argument by the
parser, but the user is likely looking for a flag. These scenarios will be
addressed in a series of progressive enhancements to the engine.
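A hedged sketch of what such a span-to-location mapping might look like; the type names below are invented for illustration and are not the engine's actual API:
```rust
use std::ops::Range;

// What kind of thing should be completed at a given span of the line.
#[derive(Debug)]
enum LocationType {
    CommandName,
    FlagName,
    Argument,
}

#[derive(Debug)]
struct CompletionLocation {
    span: Range<usize>,
    kind: LocationType,
}

fn main() {
    // For the line `du -`, a simplistic mapping driven purely by the parser
    // would mark the trailing `-` as an argument, even though the user most
    // likely wants a flag.
    let line = "du -";
    let locations = vec![
        CompletionLocation { span: 0..2, kind: LocationType::CommandName },
        CompletionLocation { span: 3..4, kind: LocationType::Argument },
    ];
    for loc in &locations {
        println!("{:?} -> {:?}", &line[loc.span.clone()], loc.kind);
    }
}
```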
* Add deserialization of Primitive::Duration; Fixes #2373
* Implement Sleep command
* Add comment saying you should name your rest field "rest"
* Fix typo
* Add documentation for sleep command
* Changed time units as outlined in issue #2353.
Also applied changes to to_str for Unit - not sure if that was what was wanted.
* Forgot the tests!
* Updated primitive.rs to match changes.
* Updated where example to match changes.
* And the html test!
* Move lite_parse tests into a submodule
* Have lite_parse return partial parses when an error is encountered.
Although a parse fails, we can generally still return what was successfully
parsed. This is useful, for example, when figuring out completions at some
cursor position, because we can map the cursor to something more structured
(e.g., cursor is at a flag name).
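A minimal sketch of the shape of such an API, with placeholder types standing in for the real lite-parse structures: the partial parse is always returned, alongside an optional error.
```rust
// Placeholder types standing in for LiteBlock / ParseError.
#[derive(Debug, Default)]
struct LiteBlock {
    commands: Vec<String>,
}

#[derive(Debug)]
struct ParseError(String);

// Always hand back the partial parse, plus the error if one occurred.
fn lite_parse(src: &str) -> (LiteBlock, Option<ParseError>) {
    let mut block = LiteBlock::default();
    let mut err = None;
    for cmd in src.split('|') {
        let cmd = cmd.trim();
        if cmd.is_empty() {
            err = Some(ParseError("empty pipeline element".into()));
        } else {
            block.commands.push(cmd.to_string());
        }
    }
    (block, err)
}

fn main() {
    // Even though this line is malformed, a completer can still see
    // the structure parsed so far.
    let (block, err) = lite_parse("ls | ");
    println!("{:?} {:?}", block, err);
}
```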
Our own custom escaping unfortunately is far too simple to cover all cases.
Instead, the parser will now do no transforms on the args passed to an external
command, letting the process-spawning library handle the appropriate escaping.
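A small illustration of why this works: when arguments are handed to `std::process::Command` one by one, they reach the child process as separate argv entries without passing through a shell, so no custom escaping is needed (the command used here is arbitrary).
```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Each argument is passed through as-is, spaces, quotes and all;
    // nothing here goes through a shell, so no custom escaping is required.
    let status = Command::new("echo")
        .arg("hello world")
        .arg(r#"quotes " and \ backslashes"#)
        .status()?;
    println!("exit: {status}");
    Ok(())
}
```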