* MathEval Variance and Stddev
* Fix tests and linting
* Typo
* Deal with streams when they are not tables
* `from toml` command
* From ods
* From XLSX
* From ics
* From ini
* From vcf
* Forgot an eprintln!
* custom value trait
* functions for custom value trait
* custom trait behind flag
* open dataframe command
* command to-df for basic types
* follow path for dataframe
* dataframe operations
* dataframe not default feature
* custom as default feature
* corrected examples in command
* Add 'export env' dummy command
* (WIP) Abstract away module exportables as Overlay
* Switch to Overlays for use/hide
Works for decls only right now.
* Fix passing import patterns of hide to eval
* Simplify use/hide of decls
* Add ImportPattern as Expr; Add use env eval
Still no parsing of "export env" so I can't test it yet.
* Refactor export parsing; Add InternalError
* Add env var export and activation; Misc changes
Now it is possible to `use` an env var that was exported from a module.
This commit also adds some new errors and other small changes.
* Add env var hiding
* Fix eval not recognizing hidden decls
Without this change, after calling `hide foo`, the evaluator does not know
whether a custom command named "foo" was hidden during parsing;
therefore, it is not possible to reliably throw an error about the "foo"
name not being found.
* Add use/hide/export env var tests; Cleanup; Notes
* Ignore hide env related tests for now
* Fix main branch merge mess
* Fixed multi-word export def
* Fix hiding tests on Windows
* Remove env var hiding for now
* FromEml and FromUrl
Added tests for from eml
* `from yaml` and `from yml`
* Fix collect_string
* Fix tests and linting
* Port str contains command
* Add another test case / example for str contains
* Port str downcase to engine-q
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
* initial commit of reverse
* reverse is working, now move on to the examples
* add in working examples for reverse
* #[allow(clippy::needless_collect)]
* Port camel case and kebab case
* Port pascal case
* Port snake case and screaming snake case
* Cleanup before PR
* Add back cell path support for str case commands
* Add cell path tests for str case command
* Revert "Add cell path tests for str case command"
This reverts commit a0906318d95fd2b5e4f8ca42f547a7e4c5db381a.
* Add cell path test cases for str case command
* Move cell path tests from tests.rs to Examples in each of the command's file
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
Very often we need to work with tables (say, extracted from unstructured data or some
kind of final report, time series, and the like).
Inevitably we will have columns whose names (or even how many there are) we can't
know beforehand.
We may also end up with certain cells holding values we want to remove as we explore.
Here, `update cells` fundamentally goes over every cell in the incoming table and updates
the cell's contents with the output of the block passed. Basic example:
```
> [
    [   ty1,       t2,       ty];
    [     1,        a, $nothing]
    [(wrap), (0..<10),      1Mb]
    [    1s,     ({}),  1000000]
    [ $true,   $false,   ([[]])]
] | update cells { describe }
───┬───────────────────────┬───────────────────────────┬──────────
 # │ ty1                   │ t2                        │ ty
───┼───────────────────────┼───────────────────────────┼──────────
 0 │ integer               │ string                    │ nothing
 1 │ row Column(table of ) │ range[[integer, integer)] │ filesize
 2 │ string                │ nothing                   │ integer
 3 │ boolean               │ boolean                   │ table of
───┴───────────────────────┴───────────────────────────┴──────────
```
And another one (also in the examples) for cases where, say, we have a generated time series
table and we want to replace the zeros with empty strings and save it out to something like CSV.
```
> [
    [2021-04-16, 2021-06-10, 2021-09-18, 2021-10-15, 2021-11-16, 2021-11-17, 2021-11-18];
    [        37,          0,          0,          0,         37,          0,          0]
] | update cells {|value|
    if ($value | into int) == 0 {
        ""
    } {
        $value
    }
}
───┬────────────┬────────────┬────────────┬────────────┬────────────┬────────────┬────────────
 # │ 2021-04-16 │ 2021-06-10 │ 2021-09-18 │ 2021-10-15 │ 2021-11-16 │ 2021-11-17 │ 2021-11-18
───┼────────────┼────────────┼────────────┼────────────┼────────────┼────────────┼────────────
 0 │ 37         │            │            │            │ 37         │            │
───┴────────────┴────────────┴────────────┴────────────┴────────────┴────────────┴────────────
```
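For the CSV part of that use case, a minimal sketch of how the rest of the pipeline could look (the `timeseries.csv` file name is made up for illustration; `open` parses the file into a table and `to csv` renders the cleaned table back out):
```
> open timeseries.csv | update cells {|value|
    if ($value | into int) == 0 {
        ""
    } {
        $value
    }
} | to csv
```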
```
> [
    [          msg,                 labels,                       span];
    ["The message", "Helpful message here", ([[start, end]; [0, 141]])]
] | error make
error: The message
  ┌─ shell:1:1
  │
1 │ ╭ [
2 │ │     [          msg,                 labels,                       span];
3 │ │     ["The message", "Helpful message here", ([[start, end]; [0, 141]])]
  │ ╰─────────────────────────────────────────────────────────────────────^ Helpful message here
```
Adding a more flexible approach for creating error values. One use case, for instance, is the
idea of a test framework: instead of printing to the screen, a failed assertion could create
tables with more details of the failure and pass them to this command to make a full-fledged
error that Nu can show. This can (and should) be extended to capturing error values in the
pipeline as well. One could also use it for inspection.
For example: `.... | error inspect { # inspection here }`
or for "error handling", like so: `.... | error capture { fix here }`
However, we start here only with `error make`, which creates an error value for you, with limited support for the time being.
* Add subcommand `into filesize`
It's currently not possible to convert a number or a string containing a number
into a filesize. The only way to create an instance of filesize type today is
with a literal in nushell syntax. This commit adds the `into filesize`
subcommand so that file sizes can be created from the outputs of programs
producing numbers or strings, like standard unix tools.
There is a limitation with this - it doesn't currently parse values like `10 MB`
or `10 MiB`, it can only look at the number itself. If the desire is there, more
flexible parsing can be added.
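A quick sketch of the intended usage (values made up; per the limitation above, unit-suffixed strings such as `10 MB` are not handled yet):
```
> '8192' | into filesize
> 1024 | into filesize
```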
* fixup! Add subcommand `into filesize`
* fixup! Add subcommand `into filesize`
We introduce `zip` here and allow it to work with regular lists (tables with no columns) as well as symmetric tables. Say we have two lists and wish to zip them, like so:
```
[0 2 4 6 8] | zip {
    [1 3 5 7 9]
} | flatten
───┬───
 0 │ 0
 1 │ 1
 2 │ 2
 3 │ 3
 4 │ 4
 5 │ 5
 6 │ 6
 7 │ 7
 8 │ 8
 9 │ 9
───┴───
```
In the case for two tables instead:
```
[[symbol]; ['('] ['['] ['{']] | zip {
    [[symbol]; [')'] [']'] ['}']]
} | each {
    get symbol | $'($in.0)nushell($in.1)'
}
───┬───────────
 0 │ (nushell)
 1 │ [nushell]
 2 │ {nushell}
───┴───────────
```
We already support `nth 0 2 3 --skip 1 4` to select particular rows and skip some using a flag. However, in practice we deal with tables (whether they come from parsing or loading files and whatnot) where we don't know the size of the table up front (and every time we have these, they may have different sizes). There are also other use cases where we use intermediate tables during processing and wish to always drop certain rows and **keep the rest**.
Usage:
```
... | drop nth 0
... | drop nth 3 8
```
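A small illustrative sketch (the table literal is made up for the example):
```
> [[a]; [1] [2] [3] [4] [5]] | drop nth 0 2
```
This drops the rows at indices 0 and 2 and keeps the rest (here, the rows holding 2, 4, and 5).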
* Refactor Hash code to simplify md5 and sha256 implementations
Md5 and Sha256 (and other future digests) require less boilerplate code
now. Error reporting includes the name of the hash again.
* Add missing hash sha256 test
Hashers now use the Rust Crypto `Digest` trait, which makes it trivial to
implement additional hash functions.
The original `md5` crate does not implement the `Digest` trait and was
replaced by the `md-5` crate, which does. Sha256 uses the already included `sha2`
crate.
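Shell-side usage is unchanged; a minimal sketch, assuming string input hashes the UTF-8 bytes as the existing `hash md5` does:
```
> echo 'abc' | hash md5
# 900150983cd24fb0d6963f7d28e17f72
> echo 'abc' | hash sha256
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```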
* nuframe in its own type in UntaggedValue
* Removed eager dataframe from enum
* Dataframe created from list of values
* Corrected order in dataframe columns
* Returned tag from stream collection
* Removed series from dataframe commands
* Arithmetic operators
* forced push
* forced push
* Replace all command
* String commands
* appending operations with dfs
* Testing suite for dataframes
* Unit test for dataframe commands
* improved equality for dataframes
This allows converting strings to filepaths without having to do a
`path expand` roundtrip.
Filepaths are taken as-is, without any validation or conversion.
* Typo in command description
* filter name change
* Clean column name
* Clippy error and updated polars version
* Lint correction in file
* CSV Infer schema optional
* Correct float operations
* changes in series castings to allow other types
* Clippy error correction
* Removed lists from command signatures
* Added not command for series
* take command with args
* set with idx command
* Add paste command
* fix build and format failures
* Add examples
* Make tests pass
* Format
* add cfg annotation for Clip
* format code
* remove additional import for clip
* Remove test