The original purpose of this PR was to modernize the external parser to
use the new Shape system.
This commit does include some of that change, but a more important
aspect of this change is an improvement to the expansion trace.
A previous commit, 6a7c00ea, added trace infrastructure to the syntax coloring
feature. This commit adds tracing to the expander.
The bulk of that work, in addition to the tree builder logic, was an
overhaul of the formatter traits to make them more general purpose, and
more structured.
Some highlights:
- `ToDebug` was split into two traits (`ToDebug` and `DebugFormat`)
because implementations needed to become trait objects, but a
convenience method on `ToDebug` kept it from being object-safe
- `DebugFormat`'s `fmt_debug` method now takes a `DebugFormatter` rather
than a standard formatter, and `DebugFormatter` has a new (but still
limited) facility for structured formatting.
- Implementations of `ExpandSyntax` need to produce output that
implements `DebugFormat`.
Unlike the highlighter changes, these changes are fairly focused on the
trace output, so they aren't behind a flag.
Adds new `substr` function to the str plugin, with tests and documentation.
The function takes a start/end location as a string in the form "##,##"; both
sides of the comma are optional, and it behaves like Rust's own index operator `[##..##]`.
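As a rough illustration (a hypothetical helper, not the plugin's actual code), the "##,##" form can be parsed with an empty side falling back to the start or end of the string:
```rs
// Hypothetical sketch of the "start,end" behaviour; not the plugin's code.
fn substr(s: &str, range: &str) -> Option<String> {
    let (start, end) = match range.split_once(',') {
        Some((a, b)) => (a.trim(), b.trim()),
        None => (range.trim(), ""),
    };
    // An empty side defaults to the start or end of the string.
    let start: usize = if start.is_empty() { 0 } else { start.parse().ok()? };
    let end: usize = if end.is_empty() { s.len() } else { end.parse().ok()? };
    s.get(start..end.min(s.len())).map(str::to_string)
}

fn main() {
    assert_eq!(substr("andres", "0,2"), Some("an".to_string()));
    assert_eq!(substr("andres", ",2"), Some("an".to_string()));
    assert_eq!(substr("andres", "2,"), Some("dres".to_string()));
}
```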
a joy. Fundamentally, we embrace functional programming principles for
transforming the dataset from any format picked up by Nu. These table
processing "primitive" commands will build up and make pipelines
composable with data processing capabilities, allowing us to evaluate,
reduce, and map tables, even composing these operations declaratively.
In this regard, `split-by` expects a table with grouped data, and we
can use it further in interesting ways (e.g. collecting labels for
visualizing the data in charts and/or suiting it to a particular chart
of our interest).
This commit should finish the `coloring_in_tokens` feature, which moves
the shape accumulator into the token stream. This allows rollbacks of
the token stream to also roll back any shapes that were added.
This commit also adds a much nicer syntax highlighter trace, which shows
all of the paths the highlighter took to arrive at a particular coloring
output. This change is fairly substantial, but really improves the
understandability of the flow. I intend to update the normal parser with
a similar tracing view.
In general, this change also fleshes out the concept of "atomic" token
stream operations.
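A rough sketch of what "atomic" means here (illustrative names, not the actual nu types): a checkpoint captures both the stream position and the shape count, so a rollback restores both together.
```rs
// Illustrative sketch only (not the actual nu types): a checkpoint records
// both the stream position and the number of shapes emitted so far, so
// rolling back the stream also rolls back the accumulated shapes.
struct TokensState {
    pos: usize,
    shapes: Vec<&'static str>, // stand-in for the FlatShape accumulator
}

struct Checkpoint {
    pos: usize,
    shape_len: usize,
}

impl TokensState {
    fn checkpoint(&self) -> Checkpoint {
        Checkpoint { pos: self.pos, shape_len: self.shapes.len() }
    }

    fn rollback(&mut self, cp: Checkpoint) {
        self.pos = cp.pos;
        self.shapes.truncate(cp.shape_len); // shapes added after the checkpoint go away too
    }
}

fn main() {
    let mut state = TokensState { pos: 0, shapes: vec![] };
    let cp = state.checkpoint();
    state.pos = 3;
    state.shapes.push("word");
    state.rollback(cp); // position and shapes are restored together ("atomically")
    assert_eq!((state.pos, state.shapes.len()), (0, 0));
}
```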
A good next step would be to try to make the parser more
error-correcting, using the coloring infrastructure. A follow-up step
would involve merging the parser and highlighter shapes themselves.
The code still compiles, so this doesn't seem to break anything. That also means
it's not critical to fix it, but having dead code around isn't great either.
Previously it would split the last column on the first separator value found
between the start of the column and the end of the row. Changing this to use
everything from the start of the column to the end of the string makes it behave
more similarly to the other columns, making it less surprising.
The table parsing/creation logic has changed from treating every line the same
to processing each line in the context of the column header's placement. Previously,
values on separate rows would go to the same column as long as they had the
same index, based on separators alone. Now, each item's index is based on vertical
alignment to the column header.
This may seem brittle, but it solves the problem of some tables operating with
empty cells that would cause remaining values to be paired with the wrong
column.
Based on kubernetes output (get pods, events), the new method has shown to have
much greater success rates for parsing.
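A simplified sketch of the idea (not the actual parsing code): column boundaries come from the header's word offsets, and every row is sliced at those same offsets, so an empty cell can no longer shift later values into the wrong column.
```rs
// Simplified sketch, not nu's actual logic: columns are defined by where each
// header word starts, and every row is sliced at those byte offsets (fine for
// this ASCII example), so an empty cell cannot shift the remaining values.
fn column_starts(header: &str) -> Vec<usize> {
    let mut starts = Vec::new();
    let mut prev_ws = true;
    for (i, ch) in header.char_indices() {
        if prev_ws && !ch.is_whitespace() {
            starts.push(i);
        }
        prev_ws = ch.is_whitespace();
    }
    starts
}

fn slice_row<'a>(row: &'a str, starts: &[usize]) -> Vec<&'a str> {
    (0..starts.len())
        .map(|i| {
            let start = starts[i].min(row.len());
            let end = starts.get(i + 1).copied().unwrap_or(row.len()).min(row.len());
            row[start..end].trim()
        })
        .collect()
}

fn main() {
    let header = "NAME    READY   STATUS";
    let rows = ["web-1   1/1     Running", "db-0            Pending"];
    let starts = column_starts(header);
    for row in rows {
        println!("{:?}", slice_row(row, &starts));
    }
    // ["web-1", "1/1", "Running"]
    // ["db-0", "", "Pending"]
}
```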
New tests are added to test for additional cases that might be trickier to
handle with the new logic.
Old tests are updated where their expectations no longer hold true.
For instance: previously, lines were treated separately, allowing any index
offset between columns on different rows, as long as they had the same index
as decided by a separator. Since this is no longer the case, some expectations
needed to be adjusted.
The benefit of this is that coloring can be made atomic alongside token
stream forwarding.
I put the feature behind a flag so I can continue to iterate on it
without possibly regressing existing functionality. It's a lot of places
where the flags have to go, but I expect it to be a short-lived flag,
and the flags are fully contained in the parser.
Previously, we would build a command that looked something like this:
<ex_cmd> "$it" "&&" "<ex_cmd>" "$it"
As a result, the "&&" and the second "<ex_cmd>" were treated as arguments to
the first command rather than as a chained command. This commit instead builds
up a single command string that can be passed to an external shell.
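A minimal sketch of the approach (hypothetical values; `sh -c` here is an assumption, not necessarily what nu uses on every platform):
```rs
use std::process::Command;

fn main() {
    // Hypothetical stand-ins for the expanded external commands.
    let resolved = ["echo one", "echo two"];

    // Join the pieces into a single string and let the shell do the chaining,
    // rather than passing "&&" and the second command as extra arguments.
    let command_string = resolved.join(" && ");

    let status = Command::new("sh")
        .arg("-c")
        .arg(&command_string)
        .status()
        .expect("failed to run external command");
    assert!(status.success());
}
```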
`std::fs::metadata` will attempt to follow symlinks, which results in a
"No such file or directory" error if the path pointed to by the symlink
does not exist. This shouldn't prevent `ls` from succeeding, so we
ignore errors.
Also, switching to use of `symlink_metadata` means we get stat info on
the symlink itself, not what it points to. This means `ls` will now
include broken symlinks in its listing.
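Roughly, the pattern is (a sketch, not the exact `ls` code):
```rs
use std::fs;

fn main() -> std::io::Result<()> {
    for entry in fs::read_dir(".")? {
        let entry = entry?;
        // symlink_metadata stats the link itself, so a broken symlink still
        // yields an entry instead of "No such file or directory".
        match fs::symlink_metadata(entry.path()) {
            Ok(md) => println!("{:?} ({} bytes)", entry.path(), md.len()),
            Err(_) => continue, // ignore the error rather than failing the whole listing
        }
    }
    Ok(())
}
```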
* Moves off of draining between filters. Instead, the sink will pull on the stream, and will drain element-wise. This moves the whole stream to being lazy.
* Adds ctrl-c support and connects it into some of the key points where we pull on the stream. If a ctrl-c is detected, we immediately halt pulling on the stream and return to the prompt.
* Moves away from having a SourceMap where anchor locations are stored. Now AnchorLocation is kept directly in the Tag.
* To make this possible, split tag and span. Span is largely used in the parser and is copyable. Tag is now no longer copyable.
This commit adds the ability to work on features behind a feature flag
that won't be included in normal builds of nu.
These features are not exposed as Cargo features, as they reflect
incomplete features that are not yet stable.
To create a feature, add it to `features.toml`:
```toml
[hintsv1]
description = "Adding hints based on error states in the highlighter"
enabled = false
```
Each feature in `features.toml` becomes a feature flag accessible to `cfg`:
```rs
// Gated on the flag defined in features.toml:
#[cfg(hintsv1)]
println!("hintsv1 is enabled");
```
By default, features are enabled based on the value of the `enabled` field.
You can also enable a feature from the command line via the
`NUSHELL_ENABLE_FLAGS` environment variable:
```sh
$ NUSHELL_ENABLE_FLAGS=hintsv1 cargo run
```
You can enable all flags via `NUSHELL_ENABLE_ALL_FLAGS`.
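For example (assuming the variable only needs to be set; the exact value isn't significant):
```sh
$ NUSHELL_ENABLE_ALL_FLAGS=1 cargo run
```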
This commit also updates the CI setup to run the build with all flags off and
with all flags on. It also extracts the linting test into its own
parallelizable test, which means it doesn't need to run together with every
other test anymore.
When working on a feature, you should also add tests behind the same flag. A
commit is mergeable if all tests pass with and without the flag, allowing
incomplete commits to land on master as long as the incomplete code builds and
passes tests.
This commit replaces the previous naive coloring system with a coloring
system that is more aligned with the parser.
The main benefit of this change is that it allows us to use parsing
rules to decide how to color tokens.
For example, consider the following syntax:
```
$ ps | where cpu > 10
```
Ideally, we could color `cpu` like a column name and not a string,
because `cpu > 10` is a shorthand block syntax that expands to
`{ $it.cpu > 10 }`.
The way that we know that it's a shorthand block is that the `where`
command declares that its first parameter is a `SyntaxShape::Block`,
which allows the shorthand block form.
In order to accomplish this, we need to color the tokens in a way that
corresponds to their expanded semantics, which means that high-fidelity
coloring requires expansion.
This commit adds a `ColorSyntax` trait that corresponds to the
`ExpandExpression` trait. The semantics are fairly similar, with a few
differences.
First, `ExpandExpression` consumes N tokens and returns a single
`hir::Expression`. `ColorSyntax` consumes N tokens and writes M
`FlatShape` tokens to the output.
Concretely, for syntax like `[1 2 3]`
- `ExpandExpression` takes a single token node and produces a single
`hir::Expression`
- `ColorSyntax` takes the same token node and emits 7 `FlatShape`s
(open delimiter, int, whitespace, int, whitespace, int, close
delimiter)
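In rough terms, the difference looks like this (illustrative signatures only, not the actual trait definitions):
```rs
// Illustrative signatures only; the real traits thread token and expansion
// state through rather than taking a single node.
struct TokenNode;   // stand-in for a token tree node
struct Expression;  // stand-in for hir::Expression
struct ShellError;
enum FlatShape { OpenDelimiter, Int, Whitespace, CloseDelimiter }

trait ExpandExpression {
    // Consumes N tokens and produces exactly one expression, or fails.
    fn expand(&self, token: &TokenNode) -> Result<Expression, ShellError>;
}

trait ColorSyntax {
    // Consumes N tokens and appends however many shapes (M) it recognized;
    // the infallible/fallible split is described further below.
    fn color(&self, token: &TokenNode, shapes: &mut Vec<FlatShape>);
}
```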
Second, `ColorSyntax` is more willing to plow through failures than
`ExpandExpression`.
In particular, consider syntax like
```
$ ps | where cpu >
```
In this case
- `ExpandExpression` will see that the `where` command is expecting a
block, see that it's not a literal block and try to parse it as a
shorthand block. It will successfully find a member followed by an
infix operator, but not a following expression. That means that the
entire pipeline part fails to parse and is a syntax error.
- `ColorSyntax` will also try to parse it as a shorthand block and
ultimately fail, but it will fall back to "backoff coloring mode",
which parses any unidentified tokens in an infallible, simple way. In
this case, `cpu` will color as a string and `>` will color as an
operator.
Finally, it's very important that coloring a pipeline infallibly colors
the entire string, doesn't fail, and doesn't get stuck in an infinite
loop.
In order to accomplish this, this PR separates `ColorSyntax`, which is
infallible, from `FallibleColorSyntax`, which might fail. This allows the
type system to let us know if our coloring rules bottom out at an
infallible rule.
It's not perfect: it's still possible for the coloring process to get
stuck or consume tokens non-atomically. I intend to reduce the
opportunity for those problems in a future commit. In the meantime, the
current system catches a number of mistakes (like trying to use a
fallible coloring rule in a loop without thinking about the possibility
that it will never terminate).
The main thrust of this (very large) commit is an overhaul of the
expansion system.
The parsing pipeline is:
- Lightly parse the source file for atoms, basic delimiters and pipeline
structure into a token tree
- Expand the token tree into a HIR (high-level intermediate
representation) based upon the baseline syntax rules for expressions
and the syntactic shape of commands.
Somewhat non-traditionally, nu doesn't have an AST at all. It goes
directly from the token tree, which doesn't represent many important
distinctions (like the difference between `hello` and `5KB`) directly
into a high-level representation that doesn't have a direct
correspondence to the source code.
At a high level, nu commands work like macros, in the sense that the
syntactic shape of the invocation of a command depends on the
definition of a command.
However, commands do not have the ability to perform unrestricted
expansions of the token tree. Instead, they describe their arguments in
terms of syntactic shapes, and the expander expands the token tree into
HIR based upon that definition.
For example, the `where` command says that it takes a block as its first
required argument, and the description of the block syntactic shape
expands the syntax `cpu > 10` into HIR that represents
`{ $it.cpu > 10 }`.
This commit overhauls that system so that the syntactic shapes are
described in terms of a few new traits (`ExpandSyntax` and
`ExpandExpression` are the primary ones) that are more composable than
the previous system.
The first big win of this new system is the addition of the `ColumnPath`
shape, which looks like `cpu."max ghz"` or `package.version`.
Previously, while a variable path could look like `$it.cpu."max ghz"`,
the tail of a variable path could not be easily reused in other
contexts. Now, that tail is its own syntactic shape, and it can be used
as part of a command's signature.
This cleans up commands like `inc`, `add` and `edit`, as well as
shorthand blocks, which can now look like `| where cpu."max ghz" > 10`.
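For illustration (the `inc` invocation shown is illustrative), a column path can now show up both as a command argument and inside a shorthand block:
```
$ open Cargo.toml | inc package.version --minor
$ ps | where cpu."max ghz" > 10
```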
This resolves a small integration issue that would make custom prompts problematic (if they are implemented). The approach had been to use the highlighter implementation in Helper to insert colour codes into the prompt; however, that heavily relies on the prompt being in a specific format, ending with a `> ` sequence. This should really be the job of the prompt itself, not the presentation layer.
For now, I've simply stripped off the additional `> ` characters and passed in just the prompt itself, without slicing off the last two characters. I moved the `\x1b[m` control sequence to the prompt creation in `cli.rs`, as this feels like the more logical home for controlling what the prompt looks like. I can think of better ways to do this in the future, but this should be a fine solution for now.
In the future it would probably make sense to completely separate prompts (be they internal or external) from this code so they can be configured as an isolated piece of code.
Kind of touches on #356 by integrating the Starship prompt directly into the shell.
Not finished yet and has surfaced a potential bug in rustyline anyway. It depends on https://github.com/starship/starship/pull/509 being merged so the Starship prompt can be used as a library.
I could have tackled #356 completely and implemented a full custom prompt feature, but I felt this was a simpler approach given that Starship is written in Rust (so shelling out isn't necessary) and it already has a bunch of useful features built in.
However, I would understand if it would be preferable to just scrap integrating Starship directly and instead implement a custom prompt system which would facilitate simply shelling out to Starship.
If possible matches are not found, then check whether the passed-in `obj`
parameter is a `string` or a `path`; if so, return it. I am not
sure this is the right fix, but I figured I would make an attempt and
get a conversation started about it.
When the `last` command was given an input value larger than the data it was
operating on, it would crash. Added a check to ensure there are enough
elements to take.
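A minimal sketch of the guard (a hypothetical helper, not the actual command code):
```rs
// Hypothetical guard, not the actual command code: never take more elements
// than are available.
fn last_n<T: Clone>(items: &[T], requested: usize) -> Vec<T> {
    let n = requested.min(items.len()); // the added bounds check
    items[items.len() - n..].to_vec()
}

fn main() {
    let rows = vec![1, 2, 3];
    assert_eq!(last_n(&rows, 10), vec![1, 2, 3]); // previously this kind of call would crash
}
```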
This feature allows a user to set `ctrlc_exit` to `true` or `false` in their config to override how multiple CTRL-C invocations are handled. Without this change pressing CTRL-C multiple times will exit nu. With this change applied the user can configure the behavior to behave like other shells where multiple invocations will essentially clear the line.
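For example, assuming the setting lives in the user's TOML config and that `false` keeps the line-clearing behaviour instead of exiting:
```toml
# assumption: false means repeated CTRL-C clears the line rather than exiting nu
ctrlc_exit = false
```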
This fixes #457.
Bare words now represent literal file names, and globs are a different
syntax shape called "Pattern". This allows commands like `cp` to ask for
a pattern as a source and a literal file as a target.
This also means that attempting to pass a glob to a command that expects
a literal path will produce an error.
Previously, there was a single parsing rule for "bare words" that
applied to both internal and external commands.
This meant that, because `cargo +nightly` needed to work, we needed to
add `+` as a valid character in bare words.
The number of characters continued to grow, and the situation was
becoming untenable. The current strategy would eventually eat up all
syntax and make it impossible to add syntax like `@foo` to internal
commands.
This patch significantly restricts bare words and introduces a new token
type (`ExternalWord`). An `ExternalWord` expands to an error in the
internal syntax, but expands to a bare word in the external syntax.
`ExternalWords` are highlighted in grey in the shell.
Fixes #627
Fixes a regression caused by #579, specifically commit cc8872b4ee.
The code was intended to perform a comparison between the wanted
output type and "Tagged<Value>" in order to be able to provide a
special-cased path for Tagged<Value>. When I wrote the code, I
used "name" as a variable name and only later realized that it
shadowed the "name" param to the function, so I renamed it to
type_name, but forgot to change the comparison.
This broke the special-casing, as the name param only contains
the name of the struct without generics (like "Tagged"), while
`std::any::type_name` (in the current implementation) contains
the full paths of the struct including all generic params
(like "nu::object::meta::Tagged<nu::object::base::Value>").
At the moment the pipeline parser does not enforce
that there must be a pipe between each part of the pipeline,
which can lead to confusing behaviour or misleading errors.
This commit migrates Value's numeric types to BigInt and BigDecimal. The
basic idea is that overflow errors aren't great in a shell environment,
and not really necessary.
The main immediate consequence is that new errors can occur when
serializing Nu values to other formats. You can see this in changes to
the various serialization formats (JSON, TOML, etc.). There's a new
`CoerceInto` trait that uses the `ToPrimitive` trait from `num_traits`
to attempt to coerce a `BigInt` or `BigDecimal` into a target type, and
produces a `RangeError` (a kind of `ShellError`) if the coercion fails.
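A rough sketch of the idea (simplified; the real trait also carries the tag/span needed to build a proper `ShellError`):
```rs
use num_bigint::BigInt;
use num_traits::ToPrimitive;

// Simplified stand-in for the RangeError kind of ShellError.
#[derive(Debug)]
struct RangeError(String);

// Sketch of the CoerceInto idea: produce a range error when the big number
// doesn't fit in the requested primitive type.
fn coerce_to_u64(value: &BigInt) -> Result<u64, RangeError> {
    value
        .to_u64()
        .ok_or_else(|| RangeError(format!("{} does not fit in a u64", value)))
}

fn main() {
    assert_eq!(coerce_to_u64(&BigInt::from(42u32)).unwrap(), 42);
    assert!(coerce_to_u64(&BigInt::from(-1)).is_err());
}
```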
Another possible future consequence is that certain performance-critical
numeric operations might be too slow. If that happens, we can introduce
specialized numeric types to help improve the performance of those
situations, based on the real-world experience.
with the `help` command to explore and list all commands available.
Enter will also try to see if the location to be entered is an existing
Nu command; if it is, it will let you inspect the command under `help`.
This provides the baseline needed so we can iterate on it.
This commit is more substantial than it looks: there was basically no
real support for decimals before, and that impacted values all the way
through.
I also made Size contain a decimal instead of an integer (`1.6kb` is a
reasonable thing to type), which impacted a bunch of code.
The biggest impact of this commit is that it creates many more possible
ways for valid nu types to fail to serialize as toml, json, etc., which
typically can't support the full range of Decimal (or BigInt, which I
also think we should support). This commit makes to-toml fallible, and a
similar effort is necessary for the rest of the serializations.
We also need to figure out how to clearly communicate to users what has
happened, but failing to serialize to toml seems clearly better to me
than weird errors in basic math operations.