* get_columns is working in the columns command
* the new location of the get_columns method is nu-protocol/src/column.rs
* reference the new location of the get_columns method
* move get_columns to nu-engine
* feat(into): add example of into-bool
* feat(into): add convert from int and float
* feat(into): add converting string to bool
* feat(into): add converting value in table
* fix(into): update error
* fix(into): update span for example
* chore(into): update signature description
* float comparison using epsilon
* Update bool.rs
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
* A first working version of flatten. Needs a lot of cleanup. Committing to have a working version
* Typo fix
* Flatten tests pass
* Final cleanup, ready for push
* Update flatten.rs
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
* Proof of concept treating env vars as Values
* Refactor env var collection and method name
* Remove unnecessary pub
* Move env translations into a new file
* Fix LS_COLORS to support any Value
* Fix spans during env var translation
* Add span to env var in cd
* Improve error diagnostics
* Fix non-string env vars failing string conversion
* Make PROMPT_COMMAND a Block instead of String
* Record host env vars to a fake file
This will give spans to env vars that would otherwise be without one.
Makes errors less confusing.
* Add 'env' command to list env vars
It will also list their values translated to strings
* Sort env command by name; Add env var type
* Remove obsolete test
* Porting 'ansi' command from nushell to engine-q
* Added StrCollect to example_test.rs to allow example tests to run
* Run 'cargo fmt' to fix formatting
* Update command.rs
* Added a category
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
I removed the Inflector dependency in favor of heck for two reasons:
- to close #3674.
- heck seems simpler and actively maintained
We could probably alter the structure of the `str_` module to expose the
individual casing behaviors better.
I did not feel as confident on changing those signatures.
So I took the lazier approach of a macro in the `mod.rs` that creates the public
shim functions over heck's traits.
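A minimal, self-contained sketch of that shim idea, assuming heck 0.4's trait names (the function names and macro here are illustrative, not the actual `str_` module code):
```
// Hypothetical shim functions generated over heck's casing traits.
use heck::{ToKebabCase, ToSnakeCase};

macro_rules! case_shim {
    ($fn_name:ident, $trait_method:ident) => {
        pub fn $fn_name(input: &str) -> String {
            input.$trait_method()
        }
    };
}

case_shim!(to_kebab_case_str, to_kebab_case);
case_shim!(to_snake_case_str, to_snake_case);

fn main() {
    assert_eq!(to_kebab_case_str("NuShell rocks"), "nu-shell-rocks");
    assert_eq!(to_snake_case_str("NuShell rocks"), "nu_shell_rocks");
}
```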
* First iteration of the version command
* Cleanup
* Fix the installed plugins bug
* Fix fmt check issue
* Fix clippy warning
* Fixing all clippy warnings
* Remove old code
* Sort default context items categorically
* Separate commands in multiple statements
* Use curly braces instead of square brackets
This prevents undesired reformatting.
* MathEval Variance and Stddev
* Fix tests and linting
* Typo
* Deal with streams when they are not tables
* First draft of these commands
* To MD
* To md and to html
* Fixed cargo and to_md
* `into_abbreviated_string` instead of `into_string`
* Changed how inner tables are displayed
* port over the nth command from nushell
* remove a line of redundant code
* must sort the rows, otherwise rows that are not ordered from low to high crash engine-q
* Port str to-decimal to into decimal command. Also add a Value::test_float function for tests only
* Add support for converting integers into decimals and fix issues with the error span
* Remember environment variables from previous scope
* Re-introduce env var hiding
Right now, hiding decls is broken
* Re-introduce hidden field of import patterns
All tests pass now.
* Remove/Address tests TODOs
* Fix test typo; Report hiding error
* Add a few more tests
* Fix wrong expected test result
* add icons to grid output. still needs cleanup
* working but adds a dependency on ansi_term - need to fix that
* update styling, added lots of green code to icons
* clippy
* add config point for grid icons
* fix #4140
We are passing commands into a shell underneath but we were not
escaping arguments correctly. This new version of the code also takes
into consideration the ";" and "&" characters, which have special
meaning in shells.
We would probably benefit from a more robust way to join arguments to
shell programs. Python's stdlib has shlex.join, and perhaps we can
take that implementation as a reference.
* clean up escaping of posix shell args
I believe the right place to do escaping of arguments was in the
spawn_sh_command function. Note that this change prevents things like:
^echo "$(ls)"
from executing the ls command. Instead, this will just print
$(ls)
The regex has been taken from the python stdlib implementation of shlex.quote
* fix non-literal parameters and single quotes
* address clippy's comments
* fixup! address clippy's comments
* test that subshell commands are sanitized properly
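As a rough illustration of that approach (mirroring Python's shlex.quote with a plain character check instead of the regex; the function name is hypothetical and this is not the code in spawn_sh_command):
```
// Quote an argument so a POSIX shell treats it as a single literal word.
fn posix_quote(arg: &str) -> String {
    // Characters considered safe without quoting, in the spirit of shlex.quote.
    let is_safe = !arg.is_empty()
        && arg.chars().all(|c| {
            c.is_ascii_alphanumeric() || "@%+=:,./-_".contains(c)
        });
    if is_safe {
        arg.to_string()
    } else {
        // Wrap in single quotes; an embedded single quote becomes '"'"'
        format!("'{}'", arg.replace('\'', r#"'"'"'"#))
    }
}

fn main() {
    assert_eq!(posix_quote("hello"), "hello");
    assert_eq!(posix_quote("$(ls)"), "'$(ls)'");
    assert_eq!(posix_quote("it's"), r#"'it'"'"'s'"#);
}
```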
* option to replace a command with the same name
* moved order of custom value declarations
* arranged dataframe folders and objects
* sort help commands by name
* added dtypes function for debugging
* corrected name for dataframe commands
* command names using function
* have save --append create file if not exists
Currently, doing:
echo a | save --raw --append file.txt
will fail if file.txt does not exist. This PR changes that
* test that `save --append` will create new file
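A minimal sketch of the open-for-append-or-create behaviour described above, using only the standard library (illustrative; not the actual save implementation):
```
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Append to file.txt, creating it first if it does not exist,
    // so `echo a | save --raw --append file.txt` no longer fails.
    let mut file = OpenOptions::new()
        .append(true)
        .create(true)
        .open("file.txt")?;
    file.write_all(b"a\n")?;
    Ok(())
}
```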
* `from toml` command
* From ods
* From XLSX
* From ics
* From ini
* From vcf
* Forgot an eprintln!
* custom value trait
* functions for custom value trait
* custom trait behind flag
* open dataframe command
* command to-df for basic types
* follow path for dataframe
* dataframe operations
* dataframe not default feature
* custom as default feature
* corrected examples in command
It's no longer attached to a Block. Makes access to overlays more
streamlined by removing this one indirection. Also makes it easier to
create standalone overlays without a block which might come in handy.
* Add 'export env' dummy command
* (WIP) Abstract away module exportables as Overlay
* Switch to Overlays for use/hide
Works for decls only right now.
* Fix passing import patterns of hide to eval
* Simplify use/hide of decls
* Add ImportPattern as Expr; Add use env eval
Still no parsing of "export env" so I can't test it yet.
* Refactor export parsing; Add InternalError
* Add env var export and activation; Misc changes
Now it is possible to `use` an env var that was exported from a module.
This commit also adds some new errors and other small changes.
* Add env var hiding
* Fix eval not recognizing hidden decls
Without this change, after calling `hide foo` the evaluator does not know
whether a custom command named "foo" was hidden during parsing; therefore,
it is not possible to reliably throw an error that the "foo" name was not
found.
* Add use/hide/export env var tests; Cleanup; Notes
* Ignore hide env related tests for now
* Fix main branch merge mess
* Fixed multi-word export def
* Fix hiding tests on Windows
* Remove env var hiding for now
* FromEml and FromUrl
Added tests for from eml
* `from yaml` and `from yml`
* Fix collect_string
* Fix tests and linting
* Port str contains command
* Add another test case / example for str contains
* Port str downcase to engine-q
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
* initial commit of reverse
* reverse is working, now move on to the examples
* add in working examples for reverse
* #[allow(clippy::needless_collect)]
* Port camel case and kebab case
* Port pascal case
* Port snake case and screaming snake case
* Cleanup before PR
* Add back cell path support for str case commands
* Add cell path tests for str case command
* Revert "Add cell path tests for str case command"
This reverts commit a0906318d95fd2b5e4f8ca42f547a7e4c5db381a.
* Add cell path test cases for str case command
* Move cell path tests from tests.rs to Examples in each of the command's file
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
Now, Echo converts multiple values into a ValueStream, but it simply
forwards a single Value; if no PipelineData is detected as input, an
empty string is returned as a single Value.
Values to echo need to be extracted from the call, and then converted
into PipelineData.
I also updated the first example so that its result is a List,
as in the reference implementation.
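A self-contained toy model of that dispatch; the real engine-q Value, PipelineData, and ValueStream types are richer than these stand-ins:
```
#[derive(Debug)]
enum Value {
    String(String),
    Int(i64),
}

#[derive(Debug)]
enum PipelineData {
    Value(Value),
    Stream(Vec<Value>), // stands in for ValueStream
}

fn echo(values: Vec<Value>) -> PipelineData {
    match values.len() {
        // No values extracted from the call: return an empty string as a single Value.
        0 => PipelineData::Value(Value::String(String::new())),
        // A single value is simply forwarded.
        1 => PipelineData::Value(values.into_iter().next().unwrap()),
        // Multiple values are converted into a stream.
        _ => PipelineData::Stream(values),
    }
}

fn main() {
    println!("{:?}", echo(vec![]));
    println!("{:?}", echo(vec![Value::Int(1), Value::Int(2)]));
}
```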
See cc3653cfd9 for more on the `-c` flag.
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
* compiles on nightly now. (breaking change)
* less deps
* Switch over to new resolver
(it's been stable for a while.)
* let's leave num-format for another PR
Very often we need to work with tables (say, extracted from unstructured data or some
kind of final report, timeseries, and the like).
It's inevitable that we will have columns whose names, or even how many of them,
we can't know beforehand.
Also, we may end up with certain cells holding values we may want to remove as we explore.
Here, `update cells` fundamentally goes over every cell in the table coming in and updates
the cell's contents with the output of the block passed. Basic example here:
```
> [
[ ty1, t2, ty];
[ 1, a, $nothing]
[(wrap), (0..<10), 1Mb]
[ 1s, ({}), 1000000]
[ $true, $false, ([[]])]
] | update cells { describe }
───┬───────────────────────┬───────────────────────────┬──────────
# │ ty1 │ t2 │ ty
───┼───────────────────────┼───────────────────────────┼──────────
0 │ integer │ string │ nothing
1 │ row Column(table of ) │ range[[integer, integer)] │ filesize
2 │ string │ nothing │ integer
3 │ boolean │ boolean │ table of
───┴───────────────────────┴───────────────────────────┴──────────
```
and another one (in the examples) for cases where, say, we have a generated timeseries table and
we want to replace the zeros with empty strings and save it out to something like CSV.
```
> [
[2021-04-16, 2021-06-10, 2021-09-18, 2021-10-15, 2021-11-16, 2021-11-17, 2021-11-18];
[ 37, 0, 0, 0, 37, 0, 0]
] | update cells {|value|
if ($value | into int) == 0 {
""
} {
$value
}
}
───┬────────────┬────────────┬────────────┬────────────┬────────────┬────────────┬────────────
# │ 2021-04-16 │ 2021-06-10 │ 2021-09-18 │ 2021-10-15 │ 2021-11-16 │ 2021-11-17 │ 2021-11-18
───┼────────────┼────────────┼────────────┼────────────┼────────────┼────────────┼────────────
0 │ 37 │ │ │ │ 37 │ │
───┴────────────┴────────────┴────────────┴────────────┴────────────┴────────────┴────────────
```
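A toy Rust model of what `update cells` does conceptually: walk every cell and replace it with the closure's output (the real command evaluates a nu block and operates on Values, not plain strings):
```
use std::collections::BTreeMap;

// Apply `f` to every cell of every row, keeping the table's shape.
fn update_cells<F>(table: Vec<BTreeMap<String, String>>, f: F) -> Vec<BTreeMap<String, String>>
where
    F: Fn(&str) -> String,
{
    table
        .into_iter()
        .map(|row| {
            row.into_iter()
                .map(|(col, cell)| (col, f(cell.as_str())))
                .collect::<BTreeMap<_, _>>()
        })
        .collect()
}

fn main() {
    let table = vec![BTreeMap::from([
        ("2021-04-16".to_string(), "37".to_string()),
        ("2021-06-10".to_string(), "0".to_string()),
    ])];
    // Replace zeros with empty strings, as in the timeseries example above.
    let cleaned = update_cells(table, |cell| {
        if cell == "0" {
            String::new()
        } else {
            cell.to_string()
        }
    });
    println!("{:?}", cleaned);
}
```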
* Change path join signature
* Appending now works without flag
* Column path operation is behind a -c flag
* Move column path arg retrieval to a function
Also improves errors
* Fix path join tests
* Propagate column path changes to all path commands
* Update path command examples with columns paths
* Modernize path command examples by removing "echo"
* Improve structured path error message
* Fix typo
* Expand path when converting value -> PathBuf
Also includes Tagged<PathBuf>.
Fixes #3605
* Expand path for PATH env. variable
Fixes #1834
* Remove leftover Cows after nu-path refactor
There were some unnecessary Cow conversions leftover from the old
nu-path implementation.
* Use canonicalize in source command; Improve errors
Previously, `source` used `expand_path()` which does not follow
symlinks.
As a follow-up, I improved the source error messages so they now tell
why the source file could not be canonicalized or read into a string.
```
> [
[ msg, labels, span];
["The message", "Helpful message here", ([[start, end]; [0, 141]])]
] | error make
error: The message
┌─ shell:1:1
│
1 │ ╭ [
2 │ │ [ msg, labels, span];
3 │ │ ["The message", "Helpful message here", ([[start, end]; [0, 141]])]
│ ╰─────────────────────────────────────────────────────────────────────^ Helpful message here
```
Adding a more flexible approach for creating error values. One use case, for instance, is the
idea of a test framework: instead of printing to the screen, a failed assertion could create
tables with more details of the failure and pass them to this command for making a full-fledged
error that Nu can show. This can (and should) be extended for capturing error values as well
in the pipeline. One could also use it for inspection.
For example: `.... | error inspect { # inspection here }`
or "error handling" as well, like so: `.... | error capture { fix here }`
However, we start here only with `error make` that creates an error value for you with limited support for the time being.
* Add subcommand `into filesize`
It's currently not possible to convert a number or a string containing a number
into a filesize. The only way to create an instance of filesize type today is
with a literal in nushell syntax. This commit adds the `into filesize`
subcommand so that file sizes can be created from the outputs of programs
producing numbers or strings, like standard unix tools.
There is a limitation with this - it doesn't currently parse values like `10 MB`
or `10 MiB`, it can only look at the number itself. If the desire is there, more
flexible parsing can be added.
* fixup! Add subcommand `into filesize`
* fixup! Add subcommand `into filesize`
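A rough sketch of the conversion rule described above: accept plain integers, floats, or strings containing a bare number, and reject unit-suffixed values (the function name and error type are illustrative, not the command's actual code):
```
fn into_filesize(input: &str) -> Result<u64, String> {
    let trimmed = input.trim();
    if let Ok(i) = trimmed.parse::<u64>() {
        Ok(i)
    } else if let Ok(f) = trimmed.parse::<f64>() {
        // Floats are truncated to whole bytes in this sketch.
        Ok(f as u64)
    } else {
        // Values like "10 MB" or "10 MiB" are not parsed yet.
        Err(format!("'{}' is not a plain number", trimmed))
    }
}

fn main() {
    assert_eq!(into_filesize("2048"), Ok(2048));
    assert_eq!(into_filesize("2048.7"), Ok(2048));
    assert!(into_filesize("10 MB").is_err());
}
```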
* feat: spawn the executables directly if possible
This pull request changes nu-command so that it spawns the process directly if:
- They are a `.exe` on Windows
- They are not a `.sh` or `.bash` script on non-Windows platforms.
Benefits:
- As I explained in [this comment](https://github.com/nushell/nushell/issues/3898#issuecomment-894000812), this is another step towards making Nushell a standalone shell, that doesn't need to shell out unless it is running a script for a particular shell (cmd, sh, ps1, etc.).
- Fixes the bug with multiline strings
- Better performance due to direct spawning.
For example, this script shows ~20 ms less latency.
After:
```nu
C:\> benchmark { node -e 'console.log("sssss")' }
───┬──────────────────
# │ real time
───┼──────────────────
0 │ 63ms 921us 600ns
───┴──────────────────
```
Before:
```nu
C:\> benchmark { node -e 'console.log("sssss")' }
───┬──────────────────
# │ real time
───┼──────────────────
0 │ 79ms 136us 800ns
───┴──────────────────
```
Fixes #3898
* fix: make which dependency optional
Also fixes clippy warnings
* refactor: refactor spawn_exe, spawn_cmd, spawn_sh, and spawn_any
* fix: use which feature instead of which-support
* fix: use which_in to use the cwd of nu
* fix: use case insensitive comparison of the extensions
Sometimes the case of the extension is uppercased by the `which_in` function.
Also, use `unix` instead of "not windows"; some OSes might not have sh support.
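A sketch of the dispatch rule described above, including the case-insensitive extension comparison (illustrative only; the real code also consults `which_in` and handles more cases):
```
use std::path::Path;

fn should_spawn_directly(exe: &Path) -> bool {
    // Compare extensions case-insensitively, as noted above.
    let ext = exe
        .extension()
        .and_then(|e| e.to_str())
        .map(|e| e.to_ascii_lowercase());

    if cfg!(windows) {
        // Spawn .exe files directly; everything else goes through the shell.
        matches!(ext.as_deref(), Some("exe"))
    } else {
        // On unix, spawn directly unless it looks like a shell script.
        !matches!(ext.as_deref(), Some("sh" | "bash"))
    }
}

fn main() {
    println!("{}", should_spawn_directly(Path::new("script.sh")));
    println!("{}", should_spawn_directly(Path::new("node")));
}
```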
* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize into two versions; Add TODOs
One that takes `relative_to` and one that doesn't.
More TODO notes.
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
Trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths but for now
it at least covers basic cases.
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
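For illustration, a rough sketch of tilde expansion in the spirit of the `expand_tilde` mentioned in the bullets above (the real nu-path crate handles more cases and avoids lossy conversions):
```
use std::path::{Path, PathBuf};

fn expand_tilde(path: &Path, home: &Path) -> PathBuf {
    // Replace a leading "~" component with the home directory.
    match path.strip_prefix("~") {
        Ok(rest) => home.join(rest),
        Err(_) => path.to_path_buf(),
    }
}

fn main() {
    let home = Path::new("/home/nu");
    assert_eq!(
        expand_tilde(Path::new("~/notes.txt"), home),
        PathBuf::from("/home/nu/notes.txt")
    );
    assert_eq!(expand_tilde(Path::new("/etc/hosts"), home), PathBuf::from("/etc/hosts"));
}
```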
* Allow different names for ...rest
* Resolves #3945
* This change requires an explicit name for the rest argument in `WholeStreamCommand`,
which is why there are so many changed files.
* Remove redundant clone
* Add tests
* Allow environment variables to be hidden
This change allows environment variables in Nushell to have a value of
`Nothing`, which can be set by the user by passing `$nothing` to
`let-env` and friends.
Environment variables with a value of Nothing behave as if they are not
set at all. This allows a user to shadow the value of an environment
variable in a parent scope, effectively removing it from their current
scope. This was not possible before, because a scope can not affect its
parent scopes.
This is a workaround for issues like #3920.
Additionally, this allows a user to simultaneously set, change and
remove multiple environment variables via `load-env`. Any environment
variables set to $nothing will be hidden and thus act as if they are
removed. This simplifies working with virtual environments, which rely
on setting multiple environment variables, including PATH, to specific
values, and remove/change them on deactivation.
One surprising behavior is that an environment variable set to $nothing
will act as if it is not set when querying it (via $nu.env.X), but it is
still possible to remove it entirely via `unlet-env`. If the same
environment variable is present in the parent scope, the value in the
parent scope will be visible to the user. This might be surprising
behavior to users who are not familiar with the implementation details.
An additional corner case is that the shorthand form of `with-env` does
not work with this feature. Using `X=$nothing` will set $nu.env.X to the
string "$nothing". The long form works as expected: `with-env [X
$nothing] {...}`.
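A self-contained toy model of the hiding rule, where a Nothing entry shadows the parent scope's value during lookup (an illustration only, not Nushell's actual scope code):
```
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum EnvValue {
    String(String),
    Nothing,
}

fn lookup(scopes: &[HashMap<String, EnvValue>], name: &str) -> Option<String> {
    // Walk from the innermost scope outward.
    for scope in scopes.iter().rev() {
        match scope.get(name) {
            Some(EnvValue::String(s)) => return Some(s.clone()),
            // A Nothing entry behaves as if the variable is not set at all.
            Some(EnvValue::Nothing) => return None,
            None => continue,
        }
    }
    None
}

fn main() {
    let parent = HashMap::from([("PATH".to_string(), EnvValue::String("/usr/bin".into()))]);
    let child = HashMap::from([("PATH".to_string(), EnvValue::Nothing)]);
    assert_eq!(lookup(&[parent.clone()], "PATH"), Some("/usr/bin".to_string()));
    assert_eq!(lookup(&[parent, child], "PATH"), None); // hidden in the child scope
}
```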
* Remove unused import
* Allow all primitives to be converted to strings
Given that we can write nu scripts, splitting them into many smaller nu scripts becomes necessary as the codebase grows.
In general, when we work with paths and files we seem to face quite a few difficulties. Here we tackle just one of them, and it involves sourcing
files that also source other nu files, and so forth. The current working directory becomes important here, and sourcing scripts from a different
directory will not work, mostly because we expand the path against the current working directory and parse the files when a source command
call is made.
For the moment, we introduce a `lib_dirs` configuration value and, unfortunately, introduce a new dependency in `nu-parser` (`nu-data`) to get
a handle of the configuration file to retrieve it. This should give clues and ideas as the new parser engine continues (introduce a way to also know paths)
With this PR we can do the following:
Let's assume we want to write a nu library called `my_library`. We will have the code in a directory called `project`. The file structure will look like this:
```
project/my_library.nu
project/my_library/hello.nu
project/my_library/name.nu
```
This "pattern" works well, that is, when creating a library have a directory named `my_library` and next to it a `my_library.nu` file. Filling them like this:
```
source my_library/hello.nu
source my_library/name.nu
```
```
def hello [] {
"hello world"
}
```
```
def name [] {
"Nu"
}
```
Assuming this `project` directory is stored at `/path/to/lib/project`, we can do:
```
config set lib_dirs ['/path/to/lib/project']
```
Given we have this `lib_dirs` configuration value, we can be anywhere while using Nu and do the following:
```
source my_library.nu
echo (hello) (name)
```
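A sketch of that lookup order: try the current working directory first, then each `lib_dirs` entry (the function name is hypothetical; this is not nu-parser's actual code):
```
use std::path::{Path, PathBuf};

fn find_source_file(name: &str, cwd: &Path, lib_dirs: &[PathBuf]) -> Option<PathBuf> {
    // Try relative to the current working directory first.
    let direct = cwd.join(name);
    if direct.exists() {
        return Some(direct);
    }
    // Otherwise fall back to the configured lib_dirs entries.
    lib_dirs
        .iter()
        .map(|dir| dir.join(name))
        .find(|candidate| candidate.exists())
}

fn main() {
    let lib_dirs = vec![PathBuf::from("/path/to/lib/project")];
    println!("{:?}", find_source_file("my_library.nu", Path::new("."), &lib_dirs));
}
```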
* Allow sourcing paths with emojis
* Add source command tests for emoji paths
* Fmt
* Disable source tests on Windows with illegal paths
* Test sourcing also ASCII and single-quoted paths
We introduce it here and allow it to work with regular lists (tables with no columns) as well as symmetric tables. Say we have two lists and wish to zip them, like so:
```
[0 2 4 6 8] | zip {
[1 3 5 7 9]
} | flatten
───┬───
0 │ 0
1 │ 1
2 │ 2
3 │ 3
4 │ 4
5 │ 5
6 │ 6
7 │ 7
8 │ 8
9 │ 9
───┴───
```
In the case of two tables instead:
```
[[symbol]; ['('] ['['] ['{']] | zip {
[[symbol]; [')'] [']'] ['}']]
} | each {
get symbol | $'($in.0)nushell($in.1)'
}
───┬───────────
0 │ (nushell)
1 │ [nushell]
2 │ {nushell}
───┴───────────
```
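The list case maps directly onto Rust's own iterator zip, shown here as a toy illustration (the command itself operates on nu tables and blocks, not Vecs):
```
fn main() {
    let evens = vec![0, 2, 4, 6, 8];
    let odds = vec![1, 3, 5, 7, 9];

    // zip pairs the rows; flatten interleaves them back into a single list.
    let flattened: Vec<i32> = evens
        .into_iter()
        .zip(odds)
        .flat_map(|(a, b)| [a, b])
        .collect();

    assert_eq!(flattened, vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
}
```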
We very well support `nth 0 2 3 --skip 1 4` to select particular rows and skip some using a flag. However, in practice we deal with tables (whether they come from parsing or loading files and whatnot) where we don't know the size of the table up front (and every time we have these, they may have different sizes). There are also other use cases where we use intermediate tables during processing and wish to always drop certain rows and **keep the rest**.
Usage:
```
... | drop nth 0
... | drop nth 3 8
```
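Conceptually, as a toy Rust sketch (the real command streams rows rather than collecting a Vec, and `drop_nth` here is just an illustrative name):
```
// Keep every row except the ones at the given indices.
fn drop_nth<T>(rows: Vec<T>, to_drop: &[usize]) -> Vec<T> {
    rows.into_iter()
        .enumerate()
        .filter(|(i, _)| !to_drop.contains(i))
        .map(|(_, row)| row)
        .collect()
}

fn main() {
    let rows = vec!["a", "b", "c", "d", "e"];
    assert_eq!(drop_nth(rows, &[0, 3]), vec!["b", "c", "e"]);
}
```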
The nu-serde crate allows us to become much more generic with respect to how we
convert output to `nu-protocol::Value`s. This allows us to remove a lot of the
special-case code that we wrote for deserializing JSON values.
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Refactor Hash code to simplify md5 and sha256 implementations
Md5 and Sha256 (and other future digests) require less boilerplate code
now. Error reporting includes the name of the hash again.
* Add missing hash sha256 test
Select used to ignore absent values, resulting in "squashing" where
columns from multiple rows could be squashed into a single row.
This fixes the problem by inserting null/nothing for absent values, thus
preserving the structure of rows. This makes sure that all selected
columns are present in each row.
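A toy model of that behaviour, where a missing cell becomes an explicit None instead of being dropped (illustrative only; the real command works on Values and cell paths):
```
use std::collections::BTreeMap;

// Every selected column appears in every output row; absent cells become None.
fn select(rows: &[BTreeMap<String, String>], columns: &[&str]) -> Vec<BTreeMap<String, Option<String>>> {
    rows.iter()
        .map(|row| {
            columns
                .iter()
                .map(|&col| (col.to_string(), row.get(col).cloned()))
                .collect::<BTreeMap<_, _>>()
        })
        .collect()
}

fn main() {
    let rows = vec![
        BTreeMap::from([("a".to_string(), "1".to_string())]),
        BTreeMap::from([("b".to_string(), "2".to_string())]),
    ];
    // Both rows keep both columns instead of being squashed together.
    println!("{:?}", select(&rows, &["a", "b"]));
}
```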
* nuframe in its own type in UntaggedValue
* Removed eager dataframe from enum
* Dataframe created from list of values
* Corrected order in dataframe columns
* Returned tag from stream collection
* Removed series from dataframe commands
* Arithmetic operators
* forced push
* forced push
* Replace all command
* String commands
* appending operations with dfs
* Testing suite for dataframes
* Unit test for dataframe commands
* improved equality for dataframes
* moving all dataframe operations to protocol
Hashers now use the Rust Crypto Digest trait, which makes it trivial to
implement additional hash functions.
The original `md5` crate does not implement the Digest trait and was
replaced by the `md-5` crate, which does. Sha256 uses the already included
`sha2` crate.
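A minimal sketch of hashing generically over the Digest trait, assuming the `md-5` and `sha2` crates with the digest 0.9+ API (the helper name is illustrative):
```
use md5::Md5;
use sha2::{Digest, Sha256};

// Works for any hasher implementing Digest, so adding a new hash
// function only needs a new type parameter.
fn hash_hex<D: Digest>(bytes: &[u8]) -> String {
    let mut hasher = D::new();
    hasher.update(bytes);
    hasher
        .finalize()
        .iter()
        .map(|byte| format!("{:02x}", byte))
        .collect()
}

fn main() {
    println!("md5    {}", hash_hex::<Md5>(b"nushell"));
    println!("sha256 {}", hash_hex::<Sha256>(b"nushell"));
}
```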