# Description
I found a bunch of issues related to the specialized reimplementation of
`print()` in `nu-cli`, and that extra implementation just didn't seem
necessary, so I tried to unify the behavior reasonably.
`PipelineData::print()` already handles the call to `table`, and it even
has a `no_newline` option.
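
For context, the unified path is essentially "hand the `PipelineData` to
its own `print()` and let it drive `table`". Here is a minimal sketch of
what a call site looks like; the exact parameter list of `print()` is an
assumption from memory and may not match the current API:

```rust
use nu_protocol::engine::{EngineState, Stack};
use nu_protocol::{PipelineData, ShellError};

// Sketch only: let PipelineData::print() handle `table` and trailing-newline
// behavior instead of hand-rolling a value-by-value printing loop in nu-cli.
fn print_pipeline(
    engine_state: &EngineState,
    stack: &mut Stack,
    data: PipelineData,
) -> Result<(), ShellError> {
    // Assumed flags: no_newline = false, to_stderr = false.
    data.print(engine_state, stack, false, false)?;
    Ok(())
}
```
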
One of the biggest issues was that we were iterating over the values,
converting each one to a string, and printing each with a newline. This
doesn't work well for an external stream, because its iterator produces a
`Value::binary()` for each buffer, so we were doing a lossy UTF-8
conversion on each buffer and then printing it with a trailing newline,
which was very weird.

The result is a spurious newline inserted wherever a break between
buffers happens to fall, and it would be even worse if the break landed
in the middle of a multibyte UTF-8 character. You can reproduce this by
writing a large amount of text to a file and then running
`nu -c 'open file.txt'` - in my case I just dumped the output of
`^find .`; it only has to be large enough to trigger a buffer break.
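
To make the failure mode concrete, here is a minimal standalone sketch
(not nushell's actual code) of the old pattern: read a byte stream in
fixed-size chunks, lossily convert each chunk to a string, and print
every chunk with its own trailing newline, so a newline shows up wherever
a chunk boundary happens to fall.

```rust
use std::fs::File;
use std::io::{BufReader, Read};

fn main() -> std::io::Result<()> {
    // Stand-in for an external stream: read a file in fixed-size chunks so
    // a "buffer break" is easy to hit.
    let mut reader = BufReader::new(File::open("file.txt")?);
    let mut buf = [0u8; 8192];

    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        // Lossy conversion can also mangle a multibyte character that
        // straddles two chunks, on top of the stray newline per chunk.
        let chunk = String::from_utf8_lossy(&buf[..n]);
        println!("{chunk}"); // <- inserts a newline at every chunk boundary
    }
    Ok(())
}
```
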
Using `print()` instead introduced a new issue, though, because it
doesn't abort on errors. That's deliberate: certain commands can produce
a stream of errors and have them all printed, and there are tests (e.g.
for `rm`) that depend on this behavior. I assume we want to keep that, so
instead I targeted `BufferedReader` and made it fuse closed once an error
is encountered. I can't imagine we want to keep reading from a wrapped
I/O stream after an error occurs: more often than not the error isn't
going to magically resolve itself, it won't be a different error on each
read, and it would just lead to an infinite stream of the same error.
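
As a rough illustration of what "fuse closed" means here (a simplified
sketch, not the actual `BufferedReader` implementation or its real
item/error types), the iterator remembers that it has already yielded an
error and returns `None` from then on:

```rust
use std::io::{BufRead, BufReader, Read};

/// Simplified stand-in for nu's BufferedReader: yields byte buffers from the
/// wrapped reader, and once an I/O error has been yielded it stops producing
/// items instead of retrying the failing read forever.
struct FusedBufferedReader<R: Read> {
    input: BufReader<R>,
    errored: bool,
}

impl<R: Read> Iterator for FusedBufferedReader<R> {
    type Item = Result<Vec<u8>, std::io::Error>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.errored {
            return None; // fused closed: never touch the reader again
        }
        let chunk = match self.input.fill_buf() {
            Ok(buf) => buf.to_vec(),
            Err(e) => {
                self.errored = true; // report the error once, then stop
                return Some(Err(e));
            }
        };
        if chunk.is_empty() {
            return None; // end of stream
        }
        self.input.consume(chunk.len());
        Some(Ok(chunk))
    }
}

fn main() {
    // Feed the iterator from an in-memory byte slice just to exercise it.
    let bytes: &[u8] = b"some bytes";
    let reader = FusedBufferedReader {
        input: BufReader::new(bytes),
        errored: false,
    };
    for chunk in reader {
        match chunk {
            Ok(buf) => println!("read {} bytes", buf.len()),
            Err(e) => eprintln!("error: {e}"),
        }
    }
}
```
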
The test that broke without that was `open . | lines`, because `lines`
doesn't fuse closed on error itself. I don't know whether it's expected
to do that, so I didn't change it here.
I think this PR makes things better, but I'll keep looking for ways to
improve how errors and streams interact, especially to eliminate cases
where infinite error loops can happen.
# User-Facing Changes
- **Breaking**: `BufferedReader` has changed and no longer has public
fields
- A raw I/O stream from e.g. `open` won't produce an infinite stream of
errors anymore, which I consider a plus
- The implicit `print` of script output now behaves the same as a normal
`print`
# Tests + Formatting
Everything passes, but I didn't add any tests specific to this change.
# Description
Convert these `ShellError` variants to named fields:
* CreateNotPossible
* MoveNotPossibleSingle
* DirectoryNotFoundCustom
* DirectoryNotFound
* NotADirectory
* OutOfMemoryError
* PermissionDeniedError
* IOErrorSpanned
* IOError
* IOInterrupted
Also place the `span` field of `DirectoryNotFound` last to match other
errors.
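
For illustration, this is the shape of the change for one variant; the
before/after shapes and field names below are assumptions for the sketch,
not necessarily the exact ones used in `nu-protocol`:

```rust
// Minimal Span stand-in so the sketch is self-contained.
#[derive(Clone, Copy)]
#[allow(dead_code)]
struct Span {
    start: usize,
    end: usize,
}

// Before: positional (tuple) fields, easy to mix up at call sites.
#[allow(dead_code)]
enum ShellErrorBefore {
    DirectoryNotFound(Span, Option<String>),
}

// After: named fields are self-documenting at the construction site, and
// the `span` field is placed last to match the other variants.
#[allow(dead_code)]
enum ShellErrorAfter {
    DirectoryNotFound { dir: String, span: Span },
}

fn main() {
    let span = Span { start: 0, end: 4 };
    let _old = ShellErrorBefore::DirectoryNotFound(span, Some("custom message".into()));
    let _new = ShellErrorAfter::DirectoryNotFound {
        dir: "/tmp/missing".into(),
        span,
    };
}
```
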
Part of #10700 (almost half done!)
# User-Facing Changes
None
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
N/A
# Description
`src/main.rs` has a dependency on `BufferedReader`, which is currently
located in `nu_command`. I am moving `BufferedReader` to a more relevant
crate, which eliminates main's dependency on `nu_command` in a
benchmark/testing environment. Now that @rgwood has landed benches, I
want to start experimenting with benchmarks related to the parser.

For parser benchmarks you only need a very simple set of commands that
show how well the parser is doing - in other words, just the core
commands, not all of `nu_command`. A smaller nu binary for the benchmark
CI would let nushell build quickly while still showing how well the
parser is performing.

Once this PR lands, the only dependency main will have on `nu_command`
is `create_default_context`, meaning that for benchmark purposes we can
swap in a tiny crate of commands instead of the gigantic `nu_command`,
which has its own `create_default_context`.
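
As a sketch of where this could go, main could then pick its command set
at compile time. The `bench-commands` feature and the
`nu_bench_commands` crate are hypothetical names for illustration (and I
am assuming the no-argument form of `create_default_context`); they are
not part of this PR:

```rust
// Hypothetical: select the command set at build time. Only
// create_default_context is needed from whichever crate is chosen.
#[cfg(not(feature = "bench-commands"))]
use nu_command::create_default_context;

#[cfg(feature = "bench-commands")]
use nu_bench_commands::create_default_context;

fn main() {
    // Both crates expose the same entry point, so the rest of main.rs does
    // not care whether the full or the tiny command set was compiled in.
    let engine_state = create_default_context();
    // ... set up the REPL / evaluate scripts with engine_state ...
    let _ = engine_state;
}
```
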
It will also enable other crates to use `BufferedReader` going forward.
Right now it isn't accessible to lower-level crates because it lives in
a crate at the top of the stack.