// nushell/crates/nu-cli/src/repl.rs

use crate::{
    completions::NuCompleter,
    prompt_update,
    reedline_config::{add_menus, create_keybindings, KeybindingsMode},
    util::{eval_source, get_guaranteed_cwd, report_error, report_error_new},
    NuHighlighter, NuValidator, NushellPrompt,
};
use crossterm::cursor::CursorShape;
use log::{trace, warn};
use miette::{IntoDiagnostic, Result};
use nu_color_config::StyleComputer;
use nu_engine::{convert_env_values, eval_block, eval_block_with_early_return};
use nu_parser::{lex, parse, trim_quotes_str};
use nu_protocol::{
    ast::PathMember,
    config::NuCursorShape,
    engine::{EngineState, ReplOperation, Stack, StateWorkingSet},
    format_duration, BlockId, HistoryFileFormat, PipelineData, PositionalArg, ShellError, Span,
    Spanned, Type, Value, VarId,
};
use nu_utils::utils::perf;
use reedline::{CursorConfig, DefaultHinter, EditCommand, Emacs, SqliteBackedHistory, Vi};
use std::{
    io::{self, Write},
    sync::atomic::Ordering,
    time::Instant,
};
use sysinfo::SystemExt;

// According to Daniel Imms @Tyriar, we need to do these this way:
// <133 A><prompt><133 B><command><133 C><command output>
// These first two have been moved to prompt_update to get as close as possible to the prompt.
// const PRE_PROMPT_MARKER: &str = "\x1b]133;A\x1b\\";
// const POST_PROMPT_MARKER: &str = "\x1b]133;B\x1b\\";
const PRE_EXECUTE_MARKER: &str = "\x1b]133;C\x1b\\";
// This one is in get_command_finished_marker() now so we can capture the exit codes properly.
// const CMD_FINISHED_MARKER: &str = "\x1b]133;D;{}\x1b\\";
const RESET_APPLICATION_MODE: &str = "\x1b[?1l";

pub fn evaluate_repl(
    engine_state: &mut EngineState,
    stack: &mut Stack,
    nushell_path: &str,
    prerun_command: Option<Spanned<String>>,
    entire_start_time: Instant,
) -> Result<()> {
    use reedline::{FileBackedHistory, Reedline, Signal};

    let use_color = engine_state.get_config().use_ansi_coloring;

    // Guard against invocation without a connected terminal.
    // reedline / crossterm event polling will fail without a connected tty
    if !atty::is(atty::Stream::Stdin) {
        return Err(std::io::Error::new(
            std::io::ErrorKind::NotFound,
            "Nushell launched as a REPL, but STDIN is not a TTY; either launch in a valid terminal or provide arguments to invoke a script!",
        ))
        .into_diagnostic();
    }

    let mut entry_num = 0;
    let mut nu_prompt = NushellPrompt::new();

    let start_time = std::time::Instant::now();
    // Translate environment variables from Strings to Values
    if let Some(e) = convert_env_values(engine_state, stack) {
        let working_set = StateWorkingSet::new(engine_state);
        report_error(&working_set, &e);
    }
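
    // The perf() helper (imported from nu_utils) records how long each of these startup steps
    // takes; the timings are emitted at the `info` log level (e.g. `nu --log-level info`).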
    perf(
        "translate env vars",
        start_time,
        file!(),
        line!(),
        column!(),
        use_color,
    );

    // seed env vars
    stack.add_env_var(
        "CMD_DURATION_MS".into(),
        Value::string("0823", Span::unknown()),
    );
    stack.add_env_var("LAST_EXIT_CODE".into(), Value::int(0, Span::unknown()));

    let mut start_time = std::time::Instant::now();
    let mut line_editor = Reedline::create();

    // Now that reedline is created, get the history session id and store it in engine_state
    let hist_sesh = line_editor
        .get_history_session_id()
        .map(i64::from)
        .unwrap_or(0);
    engine_state.history_session_id = hist_sesh;
    perf(
        "setup reedline",
        start_time,
        file!(),
        line!(),
        column!(),
        use_color,
    );

    let config = engine_state.get_config();
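
    // Attach command history to the line editor: plain-text (file-backed) or SQLite-backed,
    // depending on the configured history file format.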
    start_time = std::time::Instant::now();
    let history_path = crate::config_files::get_history_path(
        nushell_path,
        engine_state.config.history_file_format,
    );
    if let Some(history_path) = history_path.as_deref() {
        let history: Box<dyn reedline::History> = match engine_state.config.history_file_format {
            HistoryFileFormat::PlainText => Box::new(
                FileBackedHistory::with_file(
                    config.max_history_size as usize,
                    history_path.to_path_buf(),
                )
                .into_diagnostic()?,
            ),
            HistoryFileFormat::Sqlite => Box::new(
                SqliteBackedHistory::with_file(history_path.to_path_buf()).into_diagnostic()?,
            ),
        };
        line_editor = line_editor.with_history(history);
    };
    perf(
        "setup history",
        start_time,
        file!(),
        line!(),
        column!(),
        use_color,
    );
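
    // Print the startup banner unless it has been disabled in the config, stripping the ANSI
    // escape codes when ANSI coloring is turned off.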
    start_time = std::time::Instant::now();
    let sys = sysinfo::System::new();

    let show_banner = config.show_banner;
    let use_ansi = config.use_ansi_coloring;
    if show_banner {
        let banner = get_banner(engine_state, stack);
        if use_ansi {
            println!("{banner}");
        } else {
            println!("{}", nu_utils::strip_ansi_string_likely(banner));
        }
    }
    perf(
        "get sysinfo/show banner",
        start_time,
        file!(),
        line!(),
        column!(),
        use_color,
    );
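
    // If a pre-run command was supplied (e.g. via nushell's `-e`/`--execute` flag), evaluate it
    // once before entering the interactive loop and merge any environment changes it made.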
    if let Some(s) = prerun_command {
        eval_source(
            engine_state,
            stack,
            s.item.as_bytes(),
            &format!("entry #{entry_num}"),
            PipelineData::empty(),
            false,
        );
        engine_state.merge_env(stack, get_guaranteed_cwd(engine_state, stack))?;
    }
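
    // Main REPL loop: each iteration merges pending environment changes, resets the signal
    // handlers, and rebuilds the reedline editor from the current config before reading and
    // evaluating the next line.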
    loop {
        let loop_start_time = std::time::Instant::now();
        let cwd = get_guaranteed_cwd(engine_state, stack);

        start_time = std::time::Instant::now();
        // Before doing anything, merge the environment from the previous REPL iteration into the
        // permanent state.
        if let Err(err) = engine_state.merge_env(stack, cwd) {
            report_error_new(engine_state, &err);
        }
        perf(
            "merge env",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );

        start_time = std::time::Instant::now();
        // Reset the ctrl-c handler
        if let Some(ctrlc) = &mut engine_state.ctrlc {
            ctrlc.store(false, Ordering::SeqCst);
        }
        perf(
            "reset ctrlc",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );

        start_time = std::time::Instant::now();
        // Reset the SIGQUIT handler
        if let Some(sig_quit) = engine_state.get_sig_quit() {
            sig_quit.store(false, Ordering::SeqCst);
        }
        perf(
            "reset sig_quit",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );
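
        // Refresh the config for this iteration and clone the engine state into an Arc so the
        // highlighter, validator, and completer built below can share it.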
        start_time = std::time::Instant::now();
        let config = engine_state.get_config();
        let engine_reference = std::sync::Arc::new(engine_state.clone());

        // Find the configured cursor shapes for each mode
        let cursor_config = CursorConfig {
            vi_insert: Some(map_nucursorshape_to_cursorshape(
                config.cursor_shape_vi_insert,
            )),
            vi_normal: Some(map_nucursorshape_to_cursorshape(
                config.cursor_shape_vi_normal,
            )),
            emacs: Some(map_nucursorshape_to_cursorshape(config.cursor_shape_emacs)),
        };
        perf(
            "get config/cursor config",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );
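
        // Wire the Nushell highlighter, validator, and completer into reedline, along with the
        // quick/partial completion, ANSI color, and cursor settings from the config.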
        start_time = std::time::Instant::now();
        line_editor = line_editor
            .with_highlighter(Box::new(NuHighlighter {
                engine_state: engine_reference.clone(),
                config: config.clone(),
            }))
            .with_validator(Box::new(NuValidator {
                engine_state: engine_reference.clone(),
            }))
            .with_completer(Box::new(NuCompleter::new(
                engine_reference.clone(),
                stack.clone(),
            )))
            .with_quick_completions(config.quick_completions)
            .with_partial_completions(config.partial_completions)
            .with_ansi_colors(config.use_ansi_coloring)
            .with_cursor_config(cursor_config);
        perf(
            "reedline builder",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );
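
        // StyleComputer resolves `color_config` entries (including closure-valued ones) into
        // concrete styles using the current engine state and stack.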
        let style_computer = StyleComputer::from_config(engine_state, stack);

        start_time = std::time::Instant::now();
        line_editor = if config.use_ansi_coloring {
            line_editor.with_hinter(Box::new({
                // As of Nov 2022, "hints" color_config closures only get `null` passed in.
                let style = style_computer.compute("hints", &Value::nothing(Span::unknown()));
                DefaultHinter::default().with_style(style)
            }))
        } else {
            line_editor.disable_hints()
        };
        perf(
            "reedline coloring/style_computer",
            start_time,
            file!(),
            line!(),
            column!(),
            use_color,
        );
start_time = std::time::Instant::now();
line_editor = add_menus(line_editor, engine_reference, stack, config).unwrap_or_else(|e| {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
Reedline::create()
});
perf(
"reedline menus",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
let buffer_editor = if !config.buffer_editor.is_empty() {
Some(config.buffer_editor.clone())
} else {
stack
.get_env_var(engine_state, "EDITOR")
.map(|v| v.as_string().unwrap_or_default())
.filter(|v| !v.is_empty())
.or_else(|| {
stack
.get_env_var(engine_state, "VISUAL")
.map(|v| v.as_string().unwrap_or_default())
.filter(|v| !v.is_empty())
})
};
line_editor = if let Some(buffer_editor) = buffer_editor {
line_editor.with_buffer_editor(buffer_editor, "nu".into())
} else {
line_editor
};
perf(
"reedline buffer_editor",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
if config.sync_history_on_enter {
if let Err(e) = line_editor.sync_history() {
warn!("Failed to sync history: {}", e);
}
}
perf(
"sync_history",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
// Changing the line editor based on the found keybindings
line_editor = match create_keybindings(config) {
Ok(keybindings) => match keybindings {
KeybindingsMode::Emacs(keybindings) => {
let edit_mode = Box::new(Emacs::new(keybindings));
line_editor.with_edit_mode(edit_mode)
}
KeybindingsMode::Vi {
insert_keybindings,
normal_keybindings,
} => {
let edit_mode = Box::new(Vi::new(insert_keybindings, normal_keybindings));
line_editor.with_edit_mode(edit_mode)
}
},
Err(e) => {
let working_set = StateWorkingSet::new(engine_state);
report_error(&working_set, &e);
line_editor
}
};
perf(
"keybindings",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
// Right before we start our prompt and take input from the user,
// fire the "pre_prompt" hook
if let Some(hook) = config.hooks.pre_prompt.clone() {
if let Err(err) = eval_hook(engine_state, stack, None, vec![], &hook) {
report_error_new(engine_state, &err);
}
}
perf(
"pre-prompt hook",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
// Next, check all the environment variables they ask for and
// fire the "env_change" hook
let config = engine_state.get_config();
if let Err(error) =
eval_env_change_hook(config.hooks.env_change.clone(), engine_state, stack)
{
report_error_new(engine_state, &error)
}
perf(
"env-change hook",
start_time,
file!(),
line!(),
column!(),
use_color,
);
start_time = std::time::Instant::now();
let config = engine_state.get_config();
let prompt = prompt_update::update_prompt(config, engine_state, stack, &mut nu_prompt);
perf(
"update_prompt",
start_time,
file!(),
line!(),
column!(),
use_color,
);
entry_num += 1;
if entry_num == 1 && show_banner {
println!(
"Startup Time: {}",
format_duration(entire_start_time.elapsed().as_nanos() as i64)
);
}
start_time = std::time::Instant::now();
let input = line_editor.read_line(prompt);
let shell_integration = config.shell_integration;
match input {
Ok(Signal::Success(s)) => {
let hostname = sys.host_name();
let history_supports_meta =
matches!(config.history_file_format, HistoryFileFormat::Sqlite);
if history_supports_meta && !s.is_empty() && line_editor.has_last_command_context()
{
line_editor
.update_last_command_context(&|mut c| {
c.start_timestamp = Some(chrono::Utc::now());
c.hostname = hostname.clone();
c.cwd = Some(StateWorkingSet::new(engine_state).get_cwd());
c
})
.into_diagnostic()?; // todo: don't stop repl if error here?
}
engine_state
.repl_buffer_state
.lock()
.expect("repl buffer state mutex")
.replace(line_editor.current_buffer_contents().to_string());
// Right before we start running the code the user gave us,
// fire the "pre_execution" hook
if let Some(hook) = config.hooks.pre_execution.clone() {
if let Err(err) = eval_hook(engine_state, stack, None, vec![], &hook) {
report_error_new(engine_state, &err);
}
}
if shell_integration {
run_ansi_sequence(PRE_EXECUTE_MARKER)?;
}
let start_time = Instant::now();
let tokens = lex(s.as_bytes(), 0, &[], &[], false);
// Check if this is a single call to a directory, if so auto-cd
let cwd = nu_engine::env::current_dir_str(engine_state, stack)?;
let mut orig = s.clone();
if orig.starts_with('`') {
orig = trim_quotes_str(&orig).to_string()
}
let path = nu_path::expand_path_with(&orig, &cwd);
if looks_like_path(&orig) && path.is_dir() && tokens.0.len() == 1 {
// We have an auto-cd
let (path, span) = {
if !path.exists() {
let working_set = StateWorkingSet::new(engine_state);
report_error(
&working_set,
&ShellError::DirectoryNotFound(tokens.0[0].span, None),
);
}
let path = nu_path::canonicalize_with(path, &cwd)
.expect("internal error: cannot canonicalize known path");
(path.to_string_lossy().to_string(), tokens.0[0].span)
};
stack.add_env_var(
"OLDPWD".into(),
Value::String {
val: cwd.clone(),
span: Span::unknown(),
},
);
//FIXME: this only changes the current scope, but instead this environment variable
//should probably be a block that loads the information from the state in the overlay
stack.add_env_var(
"PWD".into(),
Value::String {
val: path.clone(),
span: Span::unknown(),
},
);
let cwd = Value::String { val: cwd, span };
let shells = stack.get_env_var(engine_state, "NUSHELL_SHELLS");
let mut shells = if let Some(v) = shells {
v.as_list()
.map(|x| x.to_vec())
.unwrap_or_else(|_| vec![cwd])
} else {
vec![cwd]
};
let current_shell = stack.get_env_var(engine_state, "NUSHELL_CURRENT_SHELL");
let current_shell = if let Some(v) = current_shell {
v.as_integer().unwrap_or_default() as usize
} else {
0
};
let last_shell = stack.get_env_var(engine_state, "NUSHELL_LAST_SHELL");
let last_shell = if let Some(v) = last_shell {
v.as_integer().unwrap_or_default() as usize
} else {
0
};
shells[current_shell] = Value::String { val: path, span };
stack.add_env_var("NUSHELL_SHELLS".into(), Value::List { vals: shells, span });
stack.add_env_var(
"NUSHELL_LAST_SHELL".into(),
Value::Int {
val: last_shell as i64,
span,
},
);
} else if !s.trim().is_empty() {
trace!("eval source: {}", s);
eval_source(
engine_state,
stack,
s.as_bytes(),
&format!("entry #{entry_num}"),
PipelineData::empty(),
false,
);
}
let cmd_duration = start_time.elapsed();
stack.add_env_var(
"CMD_DURATION_MS".into(),
Value::String {
val: format!("{}", cmd_duration.as_millis()),
span: Span::unknown(),
},
);
if history_supports_meta && !s.is_empty() && line_editor.has_last_command_context()
{
line_editor
.update_last_command_context(&|mut c| {
c.duration = Some(cmd_duration);
c.exit_status = stack
.get_env_var(engine_state, "LAST_EXIT_CODE")
.and_then(|e| e.as_i64().ok());
c
})
.into_diagnostic()?; // todo: don't stop repl if error here?
}
if shell_integration {
run_ansi_sequence(&get_command_finished_marker(stack, engine_state))?;
if let Some(cwd) = stack.get_env_var(engine_state, "PWD") {
let path = cwd.as_string()?;
// Communicate the path as OSC 7 (often used for spawning new tabs in the same dir)
run_ansi_sequence(&format!(
"\x1b]7;file://{}{}{}\x1b\\",
percent_encoding::utf8_percent_encode(
&hostname.unwrap_or_else(|| "localhost".to_string()),
percent_encoding::CONTROLS
),
if path.starts_with('/') { "" } else { "/" },
percent_encoding::utf8_percent_encode(
&path,
percent_encoding::CONTROLS
)
))?;
// Try to abbreviate the path string for the terminal window title
let maybe_abbrev_path = if let Some(p) = nu_path::home_dir() {
path.replace(&p.as_path().display().to_string(), "~")
} else {
path
};
// Set window title too
// https://tldp.org/HOWTO/Xterm-Title-3.html
// ESC]0;stringBEL -- Set icon name and window title to string
// ESC]1;stringBEL -- Set icon name to string
// ESC]2;stringBEL -- Set window title to string
run_ansi_sequence(&format!("\x1b]2;{maybe_abbrev_path}\x07"))?;
}
run_ansi_sequence(RESET_APPLICATION_MODE)?;
}
let mut ops = engine_state
.repl_operation_queue
.lock()
.expect("repl op queue mutex");
while let Some(op) = ops.pop_front() {
match op {
ReplOperation::Append(s) => line_editor.run_edit_commands(&[
EditCommand::MoveToEnd,
EditCommand::InsertString(s),
]),
ReplOperation::Insert(s) => {
line_editor.run_edit_commands(&[EditCommand::InsertString(s)])
}
ReplOperation::Replace(s) => line_editor
.run_edit_commands(&[EditCommand::Clear, EditCommand::InsertString(s)]),
}
}
}
Ok(Signal::CtrlC) => {
// `Reedline` clears the line content and a new prompt is shown
if shell_integration {
run_ansi_sequence(&get_command_finished_marker(stack, engine_state))?;
}
}
Ok(Signal::CtrlD) => {
// When exiting clear to a new line
if shell_integration {
run_ansi_sequence(&get_command_finished_marker(stack, engine_state))?;
}
println!();
break;
}
Err(err) => {
let message = err.to_string();
if !message.contains("duration") {
eprintln!("Error: {err:?}");
// TODO: Identify possible error cases where a hard failure is preferable
// Ignoring and reporting could hide bigger problems
// e.g. https://github.com/nushell/nushell/issues/6452
// Alternatively only allow that expected failures let the REPL loop
}
if shell_integration {
run_ansi_sequence(&get_command_finished_marker(stack, engine_state))?;
}
}
}
perf(
"processing line editor input",
start_time,
file!(),
line!(),
column!(),
use_color,
);
perf(
"finished repl loop",
loop_start_time,
file!(),
line!(),
column!(),
use_color,
);
}
Ok(())
}
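/// Maps Nushell's configured cursor shape onto crossterm's `CursorShape`.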
fn map_nucursorshape_to_cursorshape(shape: NuCursorShape) -> CursorShape {
match shape {
NuCursorShape::Block => CursorShape::Block,
NuCursorShape::UnderScore => CursorShape::UnderScore,
NuCursorShape::Line => CursorShape::Line,
}
}
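/// Builds the ANSI-colored welcome banner shown on startup, including how long it
/// has been since Nushell's first commit (computed from the hard-coded 2019-05-10 date).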
fn get_banner(engine_state: &mut EngineState, stack: &mut Stack) -> String {
let age = match eval_string_with_input(
engine_state,
stack,
None,
"(date now) - ('2019-05-10 09:59:12-0700' | into datetime)",
) {
Ok(Value::Duration { val, .. }) => format_duration(val),
_ => "".to_string(),
};
let banner = format!(
r#"{} __ ,
{} .--()°'.' {}Welcome to {}Nushell{},
{}'|, . ,' {}based on the {}nu{} language,
{} !_-(_\ {}where all data is structured!
Please join our {}Discord{} community at {}https://discord.gg/NtAbbGn{}
Our {}GitHub{} repository is at {}https://github.com/nushell/nushell{}
Our {}Documentation{} is located at {}https://nushell.sh{}
{}Tweet{} us at {}@nu_shell{}
Learn how to remove this at: {}https://nushell.sh/book/configuration.html#remove-welcome-message{}
It's been this long since {}Nushell{}'s first commit:
{}{}
"#,
"\x1b[32m", //start line 1 green
"\x1b[32m", //start line 2
"\x1b[0m", //before welcome
"\x1b[32m", //before nushell
"\x1b[0m", //after nushell
"\x1b[32m", //start line 3
"\x1b[0m", //before based
"\x1b[32m", //before nu
"\x1b[0m", //after nu
"\x1b[32m", //start line 4
"\x1b[0m", //before where
"\x1b[35m", //before Discord purple
"\x1b[0m", //after Discord
"\x1b[35m", //before Discord URL
"\x1b[0m", //after Discord URL
"\x1b[1;32m", //before GitHub green_bold
"\x1b[0m", //after GitHub
"\x1b[1;32m", //before GitHub URL
"\x1b[0m", //after GitHub URL
"\x1b[32m", //before Documentation
"\x1b[0m", //after Documentation
"\x1b[32m", //before Documentation URL
"\x1b[0m", //after Documentation URL
"\x1b[36m", //before Tweet blue
"\x1b[0m", //after Tweet
"\x1b[1;36m", //before @nu_shell cyan_bold
"\x1b[0m", //after @nu_shell
"\x1b[32m", //before Welcome Message
"\x1b[0m", //after Welcome Message
"\x1b[32m", //before Nushell
"\x1b[0m", //after Nushell
age,
"\x1b[0m", //after banner disable
);
banner
}
// Taken from Nana's simple_eval
/// Evaluate a block of Nu code, optionally with input.
/// For example, source="$in * 2" will multiply the value in input by 2.
pub fn eval_string_with_input(
engine_state: &mut EngineState,
stack: &mut Stack,
input: Option<Value>,
source: &str,
) -> Result<Value, ShellError> {
let (block, delta) = {
let mut working_set = StateWorkingSet::new(engine_state);
let (output, _) = parse(&mut working_set, None, source.as_bytes(), false, &[]);
(output, working_set.render())
};
engine_state.merge_delta(delta)?;
let input_as_pipeline_data = match input {
Some(input) => PipelineData::Value(input, None),
None => PipelineData::empty(),
};
eval_block(
engine_state,
stack,
&block,
input_as_pipeline_data,
false,
true,
)
.map(|x| x.into_value(Span::unknown()))
}
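/// Returns the OSC 133 "command finished" marker used for shell integration,
/// embedding `LAST_EXIT_CODE` (or 0 if it is unset) so terminals that understand
/// semantic prompt markers can record the command's exit status.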
pub fn get_command_finished_marker(stack: &Stack, engine_state: &EngineState) -> String {
let exit_code = stack
.get_env_var(engine_state, "LAST_EXIT_CODE")
.and_then(|e| e.as_i64().ok());
format!("\x1b]133;D;{}\x1b\\", exit_code.unwrap_or(0))
}
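/// Runs the "env_change" hooks: for each environment variable named in the hook
/// record, compares the previously recorded value with the current one and, if it
/// changed, evaluates that variable's hook with `$before` and `$after` bound.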
pub fn eval_env_change_hook(
env_change_hook: Option<Value>,
engine_state: &mut EngineState,
stack: &mut Stack,
) -> Result<(), ShellError> {
if let Some(hook) = env_change_hook {
match hook {
Value::Record {
cols: env_names,
vals: hook_values,
..
} => {
for (env_name, hook_value) in env_names.iter().zip(hook_values.iter()) {
let before = engine_state
.previous_env_vars
.get(env_name)
.cloned()
.unwrap_or_default();
let after = stack
.get_env_var(engine_state, env_name)
.unwrap_or_default();
if before != after {
eval_hook(
engine_state,
stack,
None,
vec![("$before".into(), before), ("$after".into(), after.clone())],
hook_value,
)?;
engine_state
.previous_env_vars
.insert(env_name.to_string(), after);
}
}
}
x => {
return Err(ShellError::TypeMismatch {
err_message: "record for the 'env_change' hook".to_string(),
span: x.span()?,
});
}
}
}
Ok(())
}
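/// Evaluates a hook `value`, which may be a block or closure, a record with
/// optional `condition` and `code` fields (where `code` may be a string of Nu
/// source or a block), or a list of such hooks. The `arguments` are exposed to
/// the hook as variables (e.g. `$before`/`$after`).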
pub fn eval_hook(
engine_state: &mut EngineState,
stack: &mut Stack,
input: Option<PipelineData>,
arguments: Vec<(String, Value)>,
value: &Value,
) -> Result<PipelineData, ShellError> {
let value_span = value.span()?;
// Hooks can optionally be a record in this form:
// {
// condition: {|before, after| ... } # block that evaluates to true/false
// code: # block or a string
// }
// The condition block will be run to check whether the main hook (in `code`) should be run.
// If it returns true (the default if a condition block is not specified), the hook should be run.
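// For illustration only, a hook record of this shape might look like this in a
// user's config (hypothetical Nushell snippet):
//
//   {
//     condition: {|before, after| $after == '/tmp' }
//     code: "print 'entered /tmp'"
//   }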
let condition_path = PathMember::String {
val: "condition".to_string(),
span: value_span,
};
let mut output = PipelineData::empty();
let code_path = PathMember::String {
val: "code".to_string(),
span: value_span,
};
match value {
Value::List { vals, .. } => {
for val in vals {
eval_hook(engine_state, stack, None, arguments.clone(), val)?;
}
}
Value::Record { .. } => {
let do_run_hook = if let Ok(condition) =
value
.clone()
.follow_cell_path(&[condition_path], false, false)
{
match condition {
Value::Block {
val: block_id,
span: block_span,
..
}
| Value::Closure {
val: block_id,
span: block_span,
..
} => {
match run_hook_block(
engine_state,
stack,
block_id,
None,
arguments.clone(),
block_span,
) {
Ok(pipeline_data) => {
if let PipelineData::Value(Value::Bool { val, .. }, ..) =
pipeline_data
{
val
} else {
return Err(ShellError::UnsupportedConfigValue(
"boolean output".to_string(),
"other PipelineData variant".to_string(),
block_span,
));
}
}
Err(err) => {
return Err(err);
}
}
}
other => {
return Err(ShellError::UnsupportedConfigValue(
"block".to_string(),
format!("{}", other.get_type()),
other.span()?,
));
}
}
} else {
// always run the hook
true
};
if do_run_hook {
match value.clone().follow_cell_path(&[code_path], false, false)? {
Value::String {
val,
span: source_span,
} => {
let (block, delta, vars) = {
let mut working_set = StateWorkingSet::new(engine_state);
let mut vars: Vec<(VarId, Value)> = vec![];
for (name, val) in arguments {
let var_id = working_set.add_variable(
name.as_bytes().to_vec(),
val.span()?,
Type::Any,
false,
);
vars.push((var_id, val));
}
let (output, err) =
parse(&mut working_set, Some("hook"), val.as_bytes(), false, &[]);
if let Some(err) = err {
report_error(&working_set, &err);
return Err(ShellError::UnsupportedConfigValue(
"valid source code".into(),
"source code with syntax errors".into(),
source_span,
));
}
(output, working_set.render(), vars)
};
engine_state.merge_delta(delta)?;
let input = PipelineData::empty();
let var_ids: Vec<VarId> = vars
.into_iter()
.map(|(var_id, val)| {
stack.add_var(var_id, val);
var_id
})
.collect();
match eval_block(engine_state, stack, &block, input, false, false) {
Ok(pipeline_data) => {
output = pipeline_data;
}
Err(err) => {
report_error_new(engine_state, &err);
}
}
for var_id in var_ids.iter() {
stack.vars.remove(var_id);
}
}
Value::Block {
val: block_id,
span: block_span,
..
} => {
run_hook_block(
engine_state,
stack,
block_id,
input,
arguments,
block_span,
)?;
}
Value::Closure {
val: block_id,
span: block_span,
..
} => {
run_hook_block(
engine_state,
stack,
block_id,
input,
arguments,
block_span,
)?;
}
other => {
return Err(ShellError::UnsupportedConfigValue(
"block or string".to_string(),
format!("{}", other.get_type()),
other.span()?,
));
}
}
}
}
Value::Block {
val: block_id,
span: block_span,
..
} => {
output = run_hook_block(
engine_state,
stack,
*block_id,
input,
arguments,
*block_span,
)?;
}
Value::Closure {
val: block_id,
span: block_span,
..
} => {
output = run_hook_block(
engine_state,
stack,
*block_id,
input,
arguments,
*block_span,
)?;
}
other => {
return Err(ShellError::UnsupportedConfigValue(
"block, record, or list of records".into(),
format!("{}", other.get_type()),
other.span()?,
));
}
}
let cwd = get_guaranteed_cwd(engine_state, stack);
engine_state.merge_env(stack, cwd)?;
Ok(output)
}
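/// Runs a hook given as a block or closure: binds `arguments` to the block's
/// required positional parameters, evaluates it (propagating any error value),
/// and then merges environment changes made by the block back into the caller's stack.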
fn run_hook_block(
engine_state: &EngineState,
stack: &mut Stack,
block_id: BlockId,
optional_input: Option<PipelineData>,
arguments: Vec<(String, Value)>,
span: Span,
) -> Result<PipelineData, ShellError> {
let block = engine_state.get_block(block_id);
let input = optional_input.unwrap_or_else(PipelineData::empty);
let mut callee_stack = stack.gather_captures(&block.captures);
for (idx, PositionalArg { var_id, .. }) in
block.signature.required_positional.iter().enumerate()
{
if let Some(var_id) = var_id {
if let Some(arg) = arguments.get(idx) {
callee_stack.add_var(*var_id, arg.1.clone())
} else {
return Err(ShellError::IncompatibleParametersSingle {
msg: "This hook block has too many parameters".into(),
span,
});
}
}
}
let pipeline_data =
eval_block_with_early_return(engine_state, &mut callee_stack, block, input, false, false)?;
if let PipelineData::Value(Value::Error { error }, _) = pipeline_data {
return Err(error);
}
// If all went fine, preserve the environment of the called block
let caller_env_vars = stack.get_env_var_names(engine_state);
// remove env vars that are present in the caller but not in the callee
// (the callee hid them)
for var in caller_env_vars.iter() {
if !callee_stack.has_env_var(engine_state, var) {
stack.remove_env_var(engine_state, var);
}
}
// add new env vars from callee to caller
for (var, value) in callee_stack.get_stack_env_vars() {
stack.add_env_var(var, value);
}
Ok(pipeline_data)
}
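/// Writes a raw ANSI/OSC escape sequence to stdout and flushes it immediately,
/// converting any I/O error into a `ShellError::GenericError`.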
fn run_ansi_sequence(seq: &str) -> Result<(), ShellError> {
io::stdout().write_all(seq.as_bytes()).map_err(|e| {
ShellError::GenericError(
"Error writing ansi sequence".into(),
e.to_string(),
Some(Span::unknown()),
None,
Vec::new(),
)
})?;
io::stdout().flush().map_err(|e| {
ShellError::GenericError(
"Error flushing stdio".into(),
e.to_string(),
Some(Span::unknown()),
None,
Vec::new(),
)
})
}
// Absolute paths with a drive letter, like 'C:', 'D:\', 'E:\foo'
#[cfg(windows)]
static DRIVE_PATH_REGEX: once_cell::sync::Lazy<fancy_regex::Regex> =
once_cell::sync::Lazy::new(|| {
fancy_regex::Regex::new(r"^[a-zA-Z]:[/\\]?").expect("Internal error: regex creation")
});
// A best-effort "does this string look kinda like a path?" function to determine whether to auto-cd
fn looks_like_path(orig: &str) -> bool {
#[cfg(windows)]
{
if DRIVE_PATH_REGEX.is_match(orig).unwrap_or(false) {
return true;
}
}
orig.starts_with('.')
|| orig.starts_with('~')
|| orig.starts_with('/')
|| orig.starts_with('\\')
}
#[test]
fn looks_like_path_windows_drive_path_works() {
let on_windows = cfg!(windows);
assert_eq!(looks_like_path("C:"), on_windows);
assert_eq!(looks_like_path("D:\\"), on_windows);
assert_eq!(looks_like_path("E:/"), on_windows);
assert_eq!(looks_like_path("F:\\some_dir"), on_windows);
assert_eq!(looks_like_path("G:/some_dir"), on_windows);
}