* Add option to invert match command selection
* Fix rustfmt error
* Rename match --exclude to --invert
To be more descriptive and to conform to, e.g., the -v flag of grep or ripgrep.
Also simplified the --invert flag description (a usage sketch follows this entry).
* Fix formatting when description got shorter
Co-authored-by: Jakub Žádník <jakub.zadnik@tuni.fi>
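For illustration of the renamed flag, a hedged sketch (assuming the `match` plugin's usual `match <column> <regex>` form; the pattern is arbitrary). The first line keeps rows whose `name` matches the pattern; with `--invert`, the matching rows are excluded instead:
```
> ls | match name "rs$"
> ls | match name "rs$" --invert
```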
* This adds table paging, relying on the minus crate to perform the paging functionality
This is gated behind the table-pager feature (a build sketch follows these bullets)
* fix a problem where long-running InputStreams blocked table() from returning
* add some comments regarding Arc clones and the callback from minus
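A hedged sketch of opting in to the pager at build time (the feature name comes from the commit message above; the exact cargo invocation is an assumption and may differ depending on how the feature is wired up):
```
> cargo build --release --features table-pager
```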
* fix the case where parent_name was {nu, term} (and possibly others in the future) by first doing an extra test to see if the *parent_name key actually exists in cmap
* update with help generate_docs testing
`drop` is used for removing the last row. Passing a number allows dropping the last N rows.
Here we introduce the same logic for dropping columns instead.
You can certainly remove columns by using `reject`, but there are cases
where we want to remove columns from tables that contain, say, a large
number of columns. There `reject` becomes impractical, especially when you don't
care about the column names, which may or may not be known while exploring
tables.
```
> echo [[lib, extension]; [nu-core, rs] [rake, rb]]
─────────┬───────────
 lib     │ extension
─────────┼───────────
 nu-core │ rs
 rake    │ rb
─────────┴───────────
```
```
> echo [[lib, extension]; [nu-core, rs] [rake, rb]] | drop column
─────────
 lib
─────────
 nu-core
 rake
─────────
```
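The numeric form works the same way for columns as it does for rows; a hedged sketch dropping the last two columns (the `stars` column and its values are made up, and the output is illustrative):
```
> echo [[lib, extension, stars]; [nu-core, rs, 5] [rake, rb, 3]] | drop column 2
─────────
 lib
─────────
 nu-core
 rake
─────────
```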
There are many use cases. Here we introduce the following:
- Rows can be rolled up (`... | roll`) or down (`... | roll down`)
- Columns can be rolled too (the default is to the left; pass `... | roll column --opposite` to roll in the other direction)
- You can `roll` the cells of a table while keeping the header names in the same order (`... | roll column --cells-only`)
- The examples above can also be passed a number (e.g. `... | roll down 3`) to tell how many places to roll.
A basic working example of rolling columns:
```
> echo '00000100'
| split chars
| each { str to-int }
| rotate counter-clockwise _
| reject _
| rename bit1 bit2 bit3 bit4 bit5 bit6 bit7 bit8
───┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────
 # │ bit1 │ bit2 │ bit3 │ bit4 │ bit5 │ bit6 │ bit7 │ bit8
───┼──────┼──────┼──────┼──────┼──────┼──────┼──────┼──────
 0 │ 0    │ 0    │ 0    │ 0    │ 0    │ 1    │ 0    │ 0
───┴──────┴──────┴──────┴──────┴──────┴──────┴──────┴──────
```
We want to "shift" the bitstring (four in decimal) three bits to the left, which should multiply it by 2^3 and give 32. Let's try it:
```
> echo '00000100'
| split chars
| each { str to-int }
| rotate counter-clockwise _
| reject _
| rename bit1 bit2 bit3 bit4 bit5 bit6 bit7 bit8
| roll column 3
───┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────
 # │ bit4 │ bit5 │ bit6 │ bit7 │ bit8 │ bit1 │ bit2 │ bit3
───┼──────┼──────┼──────┼──────┼──────┼──────┼──────┼──────
 0 │ 0    │ 0    │ 1    │ 0    │ 0    │ 0    │ 0    │ 0
───┴──────┴──────┴──────┴──────┴──────┴──────┴──────┴──────
```
The table was rolled correctly (the bitstring above is 32 in decimal). However, the *last three header names* look confusing.
We can roll only the cell contents to fix it.
```
> echo '00000100'
| split chars
| each { str to-int }
| rotate counter-clockwise _
| reject _
| rename bit1 bit2 bit3 bit4 bit5 bit6 bit7 bit8
| roll column 3 --cells-only
───┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────
 # │ bit1 │ bit2 │ bit3 │ bit4 │ bit5 │ bit6 │ bit7 │ bit8
───┼──────┼──────┼──────┼──────┼──────┼──────┼──────┼──────
 0 │ 0    │ 0    │ 1    │ 0    │ 0    │ 0    │ 0    │ 0
───┴──────┴──────┴──────┴──────┴──────┴──────┴──────┴──────
```
There we go. Let's compute its decimal value now (it should be 32):
```
> echo '00000100'
| split chars
| each { str to-int }
| rotate counter-clockwise _
| reject _
| roll column 3 --cells-only
| pivot bit --ignore-titles
| get bit
| reverse
| each --numbered { = $it.item * (2 ** $it.index) }
| math sum
32
```
* remove parking_lot crate from nu-data as it is no longer being used
* remove commented out code from parse.rs
* remove commented out code from scope.rs
The autoenv logic mutates environment variables in the running session as
it operates and decides what to do for trusted directories containing `.nu-env`
files. Several of the ways to interact with it were lumped into a single test function.
Here we separate them out into individual tests to document the behavior better.
This will greatly help once we start refactoring our way out of setting
environment variables this way and toward setting them in `Scope`.
This is part of an ongoing effort to keep variables (`PATH` and `ENV`)
in our `Scope` and rely on it for everything related to variables.
We expect to move away from setting (`std::*`) environment variables in the currently
running process. This is non-trivial since we need to handle vars
coming in from the outside world, prioritize them, and compare them to the ones
we have stored both in memory and in configuration files.
We also need to send our in-memory (in `Scope`) variables properly to external
programs once we no longer rely on `std::env` vars from the running process.
* Use expand_path to handle paths containing a tilde
* Publish path::expand_path for use in nu-command
* cargo fmt
Co-authored-by: Wataru Yamaguchi <nagisamark2@gmail.com>
* Move tests into own file
* Move data structs to own file
* Move functions parsing 1 Token (primitives) into own file
* Rename param_flag_list to signature
* Add tests
* Fix clippy lint
* Change imports to new lexer structure
Before, ps would not insert a value if the process didn't have a parent.
This caused broken tables, as some rows didn't have all the columns.
Now, ps inserts an empty cell instead.
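A hedged illustration of the effect (the column selection and all values are made up; the point is that `parent` is now present, but empty, for processes without one):
```
> ps | select pid parent name | first 3
───┬───────┬────────┬──────
 # │ pid   │ parent │ name
───┼───────┼────────┼──────
 0 │ 1     │        │ init
 1 │ 312   │ 1      │ sshd
 2 │ 45890 │ 312    │ nu
───┴───────┴────────┴──────
```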
* Document the lexer and lightly improve its names
The bulk of this pull request adds a substantial amount of new inline
documentation for the lexer. Along the way, I made a few minor changes
to the names in the lexer, most of which were internal.
The main change that affects other files is renaming `group` to `block`,
since the function is actually parsing a block (a list of groups).
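To illustrate the terminology (a hedged sketch: the commands are arbitrary, and the split assumes that newlines separate groups while `;` separates pipelines within a group), the snippet below lexes as a single block containing two groups; the first group holds two pipelines, and each pipeline is a chain of commands joined by `|`:
```
ls | where type == Dir; echo done
sys | get host.name
```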
* Further clean up the lexer
- Consolidate the logic of the various token builders into a single type
- Improve and clean up the event-driven BlockParser
- Clean up comment parsing. Comments now contain their original leading
whitespace as well as trailing whitespace, and know how to move some
leading whitespace back into the body based on how the lexer decides
to dedent the comments. This preserves the original whitespace
information while still making it straight-forward to eliminate leading
whitespace in help comments.
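As a hedged sketch of what this enables (the command name and the exact dedent rule are assumptions): the help comments below carry a leading space after each `#` that can be stripped when rendering help output, while the comment token still remembers the original whitespace.
```
# Greets the given name.
# The space after each `#` can be dedented for help output,
# while the original whitespace is still recorded on the comment token.
def greet [name] {
  echo hello $name
}
```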
* Update meta.rs
* WIP
* fix clippy
* remove unwraps
* remove unwraps
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Jonathan Turner <jonathan.d.turner@gmail.com>