* Add support for Arrow IPC file format
Add support for the Arrow IPC file format to the dataframe commands. Support
opening Arrow IPC-format files with extension '.arrow' or '.ipc' in
the open-df command. Add a 'to arrow' command to write a dataframe to
Arrow IPC format.
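A quick usage sketch; the output-path argument to `to arrow` is an assumption, mirroring the other dataframe writers such as `to parquet`:

```nushell
# Open an Arrow IPC file as a dataframe ('.arrow' and '.ipc' both work)
open-df data.arrow

# Hypothetical round trip: write the dataframe back out as Arrow IPC
open-df data.ipc | to arrow out.arrow
```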
* Add unit test for open-df on Arrow
* Add -t flag to open-df command
Add a `--type`/`-t` flag to the `open-df` command to explicitly specify
the type of file being opened. Allowed values are the same as the set of
supported file extensions.
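A sketch of the flag in use; the type names shown ('arrow', 'csv') are assumptions based on the supported extensions:

```nushell
# File with no recognized extension: tell open-df to parse it as Arrow IPC
open-df --type arrow exported_data

# Short form
open-df -t csv exported_data
```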
* Start working on source-env
* WIP
* Get most tests working, still one to go
* Fix file-relative paths; Report parser error
* Fix merge conflicts; Restore source as deprecated
* Tests: Use source-env; Remove redundant tests
* Fmt
* Respect hidden env vars
* Fix file-relative eval for source-env
* Add file-relative eval to "overlay use"
* Use FILE_PWD only in source-env and "overlay use"
* Ignore new tests for now
This will be handled in a separate issue
* Throw an error if setting FILE_PWD manually
* Fix source-related test failures
* Fix nu-check to respect FILE_PWD
* Fix corrupted spans in source-env shell errors
* Fix up some references to old source
* Remove deprecation message
* Re-introduce deleted tests
Co-authored-by: kubouch <kubouch@gmail.com>
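A minimal sketch of the file-relative behavior above, with hypothetical file names: while a file runs under `source-env` or `overlay use`, the automatic `$env.FILE_PWD` points at that file's directory, and setting it manually is an error.

```nushell
# spam/foo.nu -- loaded with `source-env spam/foo.nu`
# FILE_PWD is the directory containing this file (./spam), not the caller's cwd
let-env DATA_DIR = ($env.FILE_PWD | path join 'data')

# Setting it by hand is now rejected:
# let-env FILE_PWD = '/tmp'   # => error
```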
* Remove panic from BlockCommands run function
Instead of panicking, the run method now returns an error to prevent
nushell from terminating unexpectedly.
* Add ability to open command to run with blocks
The open command tries to parse the content of a file when there is a
command called 'from (file extension)'. This worked fine when the command
was built in, because the run method doesn't fail in that case, but it
did fail on a BlockCommand.
With this change, open first probes whether the command contains a block
and evaluates it if so. If there is no block, it runs the command the
same way as before.
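A sketch of what this enables, with a hypothetical extension and converter body: a block command named after the file ending is now found and evaluated by `open`.

```nushell
# A user-defined (block) converter for '.xyz' files
def "from xyz" [] {
    lines | split column "|" name value
}

# open probes for `from xyz`, finds a block, and evaluates it on the file's contents
open sample.xyz
```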
* Add test for opening files with BlockCommands
* Update open.rs
* Adjust file type on open with BlockCommand parser
Co-authored-by: JT <547158+jntrnr@users.noreply.github.com>
* Update docs to refer to `length` instead of `count`
* Rename `count` to `length`
* Change all occurrences of `count` to `length` in tests
* Format `length` command
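The rename in practice:

```nushell
# Previously `ls | count`; now:
ls | length
```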
* Initial version of `from-eml`
* Add tests for `from-eml`
* Add eml to prepares_and_decorates_filesystem_source_files
* Sort the file order
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
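Usage sketch with a hypothetical file name; `open` picks the parser from the extension, and the raw content can also be piped through `from-eml` explicitly:

```nushell
# Parse an email file into structured data
open sample.eml

# Equivalent explicit form
open --raw sample.eml | from-eml
```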
* headers plugin
* Remove plugin
* Add non-functioning headers command
* Add ability to extract headers from first row
* Refactor header extraction
* Rebuild indexmap with proper headers
* Rebuild result properly
* Compiling, probably wrapped too much?
* Refactoring
* Deal with case of empty header cell
* Fix formatting
* Fix linting, attempt 2.
* Move whole_stream_command(Headers) to more appropriate section
* ... more linting
* Return Err(ShellError...) instead of panicking; yield each row instead of the entire table
* Insert Column[index] if no header info is found.
* Update error description
* Add initial test
* Add tests for headers command
* Lint test cases in headers
* Change ShellError for headers, Add sample_headers file to utils.rs
* Add empty sheet to test file
* Revert "Add empty sheet to test file"
This reverts commit a4bf38a31d.
* Show error message when given empty table
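A usage sketch of the command: `headers` promotes the first row of a table to its column names (empty header cells fall back to Column[index], as above). The `--headerless` flag is an assumption based on that era's `from-csv`:

```nushell
# Read a CSV without treating row 1 as headers, then promote that row yourself
open --raw data.csv | from-csv --headerless | headers
```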
* Add a sample_data.ods file for testing
This is a copy of the sample_data.xlsx file, but in ODS format
* Add the from-ods command
Most of the work was running `rg xlsx` and then copy/pasting with light editing
* Add tests for the from-ods command
* Fix failing test
The problem was improper filename sorting in the test `prepares_and_decorates_filesystem_source_files`
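Usage mirrors the existing xlsx support, with hypothetical file names:

```nushell
# Parse an OpenDocument spreadsheet, one table per sheet
open sample_data.ods

# Explicit form on raw content
open --raw sample_data.ods | from-ods
```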
* Start playing with ways to use the uniq command
* WIP
* Got uniq working, but still need to figure out the args issue and add tests
* Add some tests for uniq
* Fmt
* remove commented out code
* Add documentation and some additional tests showing uniq values and rows; also remove the args TODO
* Add changes that didn't get committed
* Whoops, I didn't save the docs correctly...
* Fmt
* Add a test for uniq with nested JSON
* Add another test
* Fix uniqueness when JSON keys are out of order and make the test JSON more complicated
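A sketch of the two modes the tests above cover, with hypothetical input files:

```nushell
# Unique rows of a table (rows compare structurally, so JSON key order doesn't matter)
open sample.json | uniq

# Unique values of a single column
open sample.json | get name | uniq
```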