* query command with json, web, xml
* query xml now working
* clippy
* comment out web tests
* Initial work on query web
For now we can query everything except tables
* Support for querying tables
Now we can query multiple tables just like before; the only thing missing is the test coverage
* Revert "Query plugin"
* augment `ps -l` on windows to display more info
Co-authored-by: Luccas Mateus de Medeiros Gomes <luccasmmg@gmail.com>
* query command with json, web, xml
* query xml now working
* clippy
* comment out web tests
* Initial work on query web
For now we can query everything except tables
* Support for querying tables
Now we can query multiple tables just like before; the only thing missing is the test coverage
* finish off
* comment out web test
Co-authored-by: Luccas Mateus de Medeiros Gomes <luccasmmg@gmail.com>
* Switch to short-names when the path is a relative_path (a dir) and exit with an error if the path does not exist
* Remove debugging print line
* Show relative filenames... It does not work yet for ls ../
* Try something else to fix relative paths... it works, but the ../ code part is not very pretty
* Add canonicalize check and remove code clones
* Fix the canonicalize_with issue pointed out by kubouch. Not sure the prefix_str is what kubouch suggested
* Add single-dot expansion to nu-path
* Move value path expansion from parser to eval
Fixes #745
* Remove single dot expansion from parser
It is not necessary since it will get expanded anyway in the eval.
* Fix ls to display globs with relative paths
* Use pathdiff crate to get relative paths for ls
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
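A minimal sketch of that `pathdiff` call (assuming the `pathdiff` crate as a dependency; the sample paths are made up for illustration):
```rust
use std::path::PathBuf;

fn main() {
    // Hypothetical listing entry and current working directory, for illustration only.
    let entry = PathBuf::from("/home/user/project/src/main.rs");
    let cwd = PathBuf::from("/home/user/project");

    // diff_paths(path, base) returns the path expressed relative to `base`, if possible.
    let shown = pathdiff::diff_paths(&entry, &cwd).unwrap_or_else(|| entry.clone());

    println!("{}", shown.display()); // prints "src/main.rs"
}
```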
* Port fetch to engine-q
* Fix check for path as a string
* Add a timeout flag and fix some span issues
* Add a temporary fetch command that returns byte streams. Got rid of async stuff as we're using the blocking feature of tokio
* More tweaks for the bytestream
* Rewrite fetch using ByteStreams
* buffer read on bytes directly
Co-authored-by: Stefan Stanciulescu <contact@stefanstanciulescu.com>
I removed the Inflector dependency in favor of heck for two reasons:
- to close #3674.
- heck seems simpler and actively maintained
We could probably alter the structure of the `str_` module to expose the
individual casing behaviors better.
I did not feel confident enough to change those signatures, so I took a lazier
approach: a macro in `mod.rs` that creates the public shim functions over heck's traits.
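A minimal sketch of that approach, assuming heck 0.4-style traits (`ToSnakeCase`, `ToKebabCase`); the shim names here are illustrative, not the actual `str_` module API:
```rust
use heck::{ToKebabCase, ToSnakeCase};

// Generate a plain public function that forwards to the matching heck trait method.
macro_rules! casing_shim {
    ($name:ident, $method:ident) => {
        pub fn $name(input: &str) -> String {
            input.$method()
        }
    };
}

casing_shim!(to_snake_case, to_snake_case);
casing_shim!(to_kebab_case, to_kebab_case);

fn main() {
    assert_eq!(to_snake_case("Nushell Rocks"), "nushell_rocks");
    assert_eq!(to_kebab_case("Nushell Rocks"), "nushell-rocks");
}
```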
* MathEval Variance and Stddev
* Fix tests and linting
* Typo
* Deal with streams when they are not tables
* First draft of these commands
* To MD
* To md and to html
* Fixed cargo and to_md
* `into_abbreviated_string` instead of `into_string`
* Changed how inner tables are displayed
* add icons to grid output. still needs cleanup
* working but adds a dependency on ansi_term - need to fix that
* update styling, added lots of green code to icons
* clippy
* add config point for grid icons
* option to replace command same name
* moved order of custom value declarations
* arranged dataframe folders and objects
* sort help commands by name
* added dtypes function for debugging
* corrected name for dataframe commands
* command names using function
* `from toml` command
* From ods
* From XLSX
* From ics
* From ini
* From vcf
* Forgot an eprintln!
* custom value trait
* functions for custom value trait
* custom trait behind flag
* open dataframe command
* command to-df for basic types
* follow path for dataframe
* dataframe operations
* dataframe not default feature
* custom as default feature
* corrected examples in command
* FromEml and FromUrl
Added tests for from eml
* `from yaml` and `from yml`
* Fix collect_string
* Fix tests and linting
* compiles on nightly now. (breaking change)
* less deps
* Switch over to new resolver
(it's been stable for a while.)
* let's leave num-format for another PR
* Resolve rebase artifacts
* Remove leftover dependencies on removed feature
* Remove unnecessary 'pub'
* Start taking notes and fooling around
* Split canonicalize into two versions; add TODOs
One that takes `relative_to` and one that doesn't (a sketch of the resulting pair of signatures follows this commit list).
More TODO notes.
* Merge absolutize to and rename resolve_dots
* Add custom absolutize fn and use it in path expand
* Convert a couple of dunce::canonicalize to ours
* Update nu-path description
* Replace all canonicalize with nu-path version
* Remove leftover dunce dependencies
* Fix broken autocd with trailing slash
Trailing slash is preserved *only* in paths that do not contain "." or
"..". This should be fixed in the future to cover all paths but for now
it at least covers basic cases.
* Use dunce::canonicalize for canonicalizing
* Allow cd recovery from non-existent cwd
* Disable removed canonicalize functionality tests
Remove unused import
* Break down nu-path into separate modules
* Remove unused public imports
* Remove redundant cow mapping
* Fix clippy warning
* Reformulate old canonicalize tests to expand_path
They wouldn't work with the new canonicalize.
* Canonicalize also ~ and ndots; Unify path joining
Also, add doc comments in nu_path::expansions.
* Add comment
* Avoid expanding ndots if path is not valid UTF-8
With this change, no lossy path->string conversion should happen in the
nu-path crate.
* Fmt
* Slight expand_tilde refactor; Add doc comments
* Start nu-path integration tests
* Add tests TODO
* Fix docstring typo
* Fix some doc strings
* Add README for nu-path crate
* Add a couple of canonicalize tests
* Add nu-path integration tests
* Add trim trailing slashes tests
* Update nu-path dependency
* Remove unused import
* Regenerate lockfile
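For reference, here is the sketch promised above of the canonicalize split. It is only an illustration, assuming dunce as the underlying canonicalizer (as noted in the commits); the real nu-path functions also handle `~` and ndots expansion, which is omitted here:
```rust
use std::io;
use std::path::{Path, PathBuf};

/// Canonicalize `path` relative to the current working directory.
pub fn canonicalize(path: impl AsRef<Path>) -> io::Result<PathBuf> {
    let cwd = std::env::current_dir()?;
    canonicalize_with(path, cwd)
}

/// Canonicalize `path` relative to an explicit `relative_to` base instead of the cwd.
pub fn canonicalize_with(
    path: impl AsRef<Path>,
    relative_to: impl AsRef<Path>,
) -> io::Result<PathBuf> {
    // `join` keeps absolute paths as-is and anchors relative ones to `relative_to`.
    let joined = relative_to.as_ref().join(path.as_ref());
    // dunce::canonicalize avoids the verbatim (UNC) paths std produces on Windows.
    dunce::canonicalize(joined)
}
```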
* chore: Replace surf with reqwest
Removes a lot of older, duplicate versions of some dependencies
(roughly 90 dependencies removed in total)
* chore: Remove syn 0.11
* chore: Remove unnecessary features from ptree
Removes some more duplicate dependencies
* cargo update
* Ensure we run the fetch and post plugins on the tokio runtime
* Fix clippy warning
* fix: Github requires a user agent on requests
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
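A minimal sketch of the resulting setup, assuming reqwest's async client driven by an owned tokio runtime (with the `rt-multi-thread` feature); the user-agent string and URL are placeholders:
```rust
use std::error::Error;

fn fetch_text(url: &str) -> Result<String, Box<dyn Error>> {
    // The plugin owns a tokio runtime and drives the async reqwest client on it.
    let runtime = tokio::runtime::Runtime::new()?;
    runtime.block_on(async {
        // GitHub rejects requests without a User-Agent header, so always set one.
        let client = reqwest::Client::builder().user_agent("nushell").build()?;
        let body = client.get(url).send().await?.text().await?;
        Ok(body)
    })
}

fn main() -> Result<(), Box<dyn Error>> {
    let body = fetch_text("https://api.github.com/repos/nushell/nushell")?;
    println!("fetched {} bytes", body.len());
    Ok(())
}
```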
Given that we can write nu scripts, splitting the code into many smaller nu scripts becomes necessary as the codebase grows.
In general, working with paths and files seems to bring quite a few difficulties. Here we tackle just one of them: sourcing
files that themselves source other nu files, and so on. The current working directory becomes important here, and sourcing
scripts while sitting in a different directory will not work, mostly because we expand the path against the current working
directory and parse the files when a `source` command call is made.
For the moment, we introduce a `lib_dirs` configuration value and, unfortunately, a new dependency in `nu-parser` (`nu-data`) to get
a handle on the configuration file and retrieve it. This should give clues and ideas as the new parser engine continues (for example, a way for it to also know about paths).
With this PR we can do the following:
Let's assume we want to write a nu library called `my_library`. We will keep the code in a directory called `project`. The file structure will look like this:
```
project/my_library.nu
project/my_library/hello.nu
project/my_library/name.nu
```
This "pattern" works well, that is, when creating a library have a directory named `my_library` and next to it a `my_library.nu` file. Filling them like this:
```
source my_library/hello.nu
source my_library/name.nu
```
```
def hello [] {
"hello world"
}
```
```
def name [] {
"Nu"
}
```
Assuming this `project` directory is stored at `/path/to/lib/project`, we can do:
```
config set lib_dirs ['/path/to/lib/project']
```
Given we have this `lib_dirs` configuration value, we can be anywhere while using Nu and do the following:
```
source my_library.nu
echo (hello) (name)
```
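With the library on `lib_dirs`, the `echo (hello) (name)` line above prints `hello world` and `Nu` from any directory.
Under the hood, what matters is the lookup order: a sourced path is tried against the current working directory first and then against each `lib_dirs` entry. A rough sketch of that resolution idea (a hypothetical helper, not the actual nu-parser code):
```rust
use std::path::{Path, PathBuf};

/// Hypothetical helper: resolve a `source` argument by checking the cwd first,
/// then every configured `lib_dirs` entry, returning the first file that exists.
fn resolve_source_path(requested: &str, cwd: &Path, lib_dirs: &[PathBuf]) -> Option<PathBuf> {
    std::iter::once(cwd.to_path_buf())
        .chain(lib_dirs.iter().cloned())
        .map(|base| base.join(requested))
        .find(|candidate| candidate.is_file())
}

fn main() {
    let lib_dirs = vec![PathBuf::from("/path/to/lib/project")];
    let cwd = std::env::current_dir().expect("no current directory");

    // `source my_library.nu` would look in the cwd, then in /path/to/lib/project.
    match resolve_source_path("my_library.nu", &cwd, &lib_dirs) {
        Some(path) => println!("sourcing {}", path.display()),
        None => eprintln!("my_library.nu not found in cwd or lib_dirs"),
    }
}
```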
The nu-serde crate allows us to become much more generic with respect to how we
convert output to `nu-protocol::Value`s. This allows us to remove a lot of the
special-case code that we wrote for deserializing JSON values.
Co-authored-by: Darren Schroeder <343840+fdncred@users.noreply.github.com>
* Add the nu-serde crate
nu-serde is a crate that can be used to turn a value implementing
`serde::Serialize` into a `nu-protocol::Value`. This has the potential to
significantly simplify plugin authorship.
This crate was previously the independent
[serde-nu](https://github.com/lily-mara/serde-nu) crate, but the nushell maintainers
expressed an interest in having it added to the mainline nushell repository.
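The idea mirrors what serde_json does with its own `Value` type. As a loose analogy only (serde_json standing in for nu-serde here, since the exact nu-serde entry point is not shown in this note; assumes serde with the derive feature and serde_json as dependencies), converting any `Serialize` type into a generic value tree looks like this:
```rust
use serde::Serialize;

// A stand-in for the kind of typed output a plugin might produce.
#[derive(Serialize)]
struct Weather {
    city: String,
    temperature_c: f64,
}

fn main() -> Result<(), serde_json::Error> {
    let report = Weather {
        city: "Oslo".into(),
        temperature_c: -3.5,
    };

    // serde_json::to_value turns any Serialize type into a generic value tree;
    // nu-serde plays the same role but targets nu-protocol::Value instead.
    let value = serde_json::to_value(&report)?;
    println!("{value}");
    Ok(())
}
```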
* fixup! Add the nu-serde crate
* nuframe in its own type in UntaggedValue
* Removed eager dataframe from enum
* Dataframe created from list of values
* Corrected order in dataframe columns
* Returned tag from stream collection
* Removed series from dataframe commands
* Arithmetic operators
* forced push
* forced push
* Replace all command
* String commands
* appending operations with dfs
* Testing suite for dataframes
* Unit test for dataframe commands
* improved equality for dataframes
* moving all dataframe operations to protocol
* objects in dataframes
* Removed cloning when converting to row
This brings the features used by `nu_plugin_fetch` and `nu_plugin_post` in line
and drops the default features, reducing the number of pulled-in dependencies
and avoiding a second round of compilations.
Retry of #3777, but with different features; the post and fetch plugins were tested.
Signed-off-by: Daniel Egger <daniel@eggers-club.de>