nushell/crates/nu-test-support/src/macros.rs


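/// Execute a nushell pipeline in a fresh `nu` child process and capture its
/// stdout/stderr as an `Outcome`. A minimal usage sketch (the command and the
/// expected output are illustrative assumptions, not taken from this file):
///
/// ```ignore
/// let actual = nu!(cwd: ".", "echo hello");
/// assert_eq!(actual.out, "hello");
/// ```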
#[macro_export]
macro_rules! nu {
    (cwd: $cwd:expr, $path:expr, $($part:expr),*) => {{
        use $crate::fs::DisplayPath;

        let path = format!($path, $(
            $part.display_path()
        ),*);

        nu!($cwd, &path)
    }};

    (cwd: $cwd:expr, $path:expr) => {{
        nu!($cwd, $path)
    }};

    ($cwd:expr, $path:expr) => {{
        pub use itertools::Itertools;
        pub use std::error::Error;
        pub use std::io::prelude::*;
        pub use std::process::{Command, Stdio};
        pub use $crate::NATIVE_PATH_ENV_VAR;
        pub fn escape_quote_string(input: String) -> String {
            let mut output = String::with_capacity(input.len() + 2);
            output.push('"');

            for c in input.chars() {
                if c == '"' || c == '\\' {
                    output.push('\\');
                }
                output.push(c);
            }

            output.push('"');
            output
        }
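        // Illustration (assumed behavior, not part of the original file): quotes
        // and backslashes are backslash-escaped and the result is wrapped in
        // double quotes, so:
        //
        //     assert_eq!(escape_quote_string(r#"say "hi""#.to_string()), r#""say \"hi\"""#);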
        // let commands = &*format!(
        //     "
        //     {}
        //     exit",
        //     $crate::fs::DisplayPath::display_path(&$path)
        // );
        let test_bins = $crate::fs::binaries();

        let cwd = std::env::current_dir().expect("Could not get current working directory.");

        let test_bins = nu_path::canonicalize_with(&test_bins, cwd).unwrap_or_else(|e| {
            panic!(
                "Couldn't canonicalize dummy binaries path {}: {:?}",
                test_bins.display(),
                e
            )
        });

        let mut paths = $crate::shell_os_paths();
        paths.insert(0, test_bins);

        let path = $path.lines().collect::<Vec<_>>().join("; ");
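        // Illustration (an assumption about intent, not from the original file):
        // a multi-line script such as "ls\nlength" reaches nu as the one-liner
        // "ls; length".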
        let paths_joined = match std::env::join_paths(paths) {
            Ok(all) => all,
            Err(_) => panic!("Couldn't join paths for PATH var."),
        };
        let target_cwd = $crate::fs::in_directory(&$cwd);

        let mut process = match Command::new($crate::fs::executable_path())
            .env("PWD", &target_cwd)
            .current_dir(target_cwd)
            .env(NATIVE_PATH_ENV_VAR, paths_joined)
            // .arg("--skip-plugins")
            // .arg("--no-history")
            // .arg("--config-file")
            // .arg($crate::fs::DisplayPath::display_path(&$crate::fs::fixtures().join("playground/config/default.toml")))
.arg(format!("-c {}", escape_quote_string($crate::fs::DisplayPath::display_path(&path))))
.stdout(Stdio::piped())
// .stdin(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
{
Ok(child) => child,
Err(why) => panic!("Can't run test {:?} {}", $crate::fs::executable_path(), why.to_string()),
};
        // let stdin = process.stdin.as_mut().expect("couldn't open stdin");
        // stdin
        //     .write_all(b"exit\n")
        //     .expect("couldn't write to stdin");

        let output = process
            .wait_with_output()
            .expect("couldn't read from stdout/stderr");

        let out = $crate::macros::read_std(&output.stdout);
        let err = String::from_utf8_lossy(&output.stderr);

        println!("=== stderr\n{}", err);

        $crate::Outcome::new(out, err.into_owned())
    }};
}
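
/// Variant of `nu!` that feeds the script to the child shell over stdin
/// (followed by `exit`) instead of passing it with `-c`, presumably so the
/// plugin-aware startup path is exercised. Usage sketch (the command is an
/// illustrative assumption, not taken from this file):
///
/// ```ignore
/// let actual = nu_with_plugins!(cwd: ".", "echo hello");
/// assert_eq!(actual.out, "hello");
/// ```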
#[macro_export]
macro_rules! nu_with_plugins {
    (cwd: $cwd:expr, $path:expr, $($part:expr),*) => {{
        use $crate::fs::DisplayPath;

        let path = format!($path, $(
            $part.display_path()
        ),*);

        nu_with_plugins!($cwd, &path)
    }};

    (cwd: $cwd:expr, $path:expr) => {{
        nu_with_plugins!($cwd, $path)
    }};
    ($cwd:expr, $path:expr) => {{
        pub use std::error::Error;
        pub use std::io::prelude::*;
        pub use std::process::{Command, Stdio};
        pub use $crate::NATIVE_PATH_ENV_VAR;

        let commands = &*format!(
            "
{}
exit",
            $crate::fs::DisplayPath::display_path(&$path)
        );
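        // Note (inferred from the code below): unlike `nu!`, this script is
        // written to the child's stdin, and the trailing `exit` makes the
        // shell terminate once the commands have run.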
        let test_bins = $crate::fs::binaries();
        let test_bins = nu_path::canonicalize(&test_bins).unwrap_or_else(|e| {
            panic!(
                "Couldn't canonicalize dummy binaries path {}: {:?}",
                test_bins.display(),
                e
            )
        });

        let mut paths = $crate::shell_os_paths();
        paths.insert(0, test_bins);
        let paths_joined = match std::env::join_paths(paths) {
            Ok(all) => all,
            Err(_) => panic!("Couldn't join paths for PATH var."),
        };

        let target_cwd = $crate::fs::in_directory(&$cwd);

        let mut process = match Command::new($crate::fs::executable_path())
            .env("PWD", &target_cwd) // setting PWD is enough to set cwd
            .env(NATIVE_PATH_ENV_VAR, paths_joined)
            .stdout(Stdio::piped())
            .stdin(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()
        {
            Ok(child) => child,
            Err(why) => panic!("Can't run test {}", why.to_string()),
        };
        let stdin = process.stdin.as_mut().expect("couldn't open stdin");
        stdin
            .write_all(commands.as_bytes())
            .expect("couldn't write to stdin");
        // The `?` operator can't be used here: the macro expands to a plain
        // block with no `Result` to propagate into, so fail loudly instead.
        stdin.flush().expect("couldn't flush stdin");

        let output = process
            .wait_with_output()
            .expect("couldn't read from stdout/stderr");
        let out = $crate::macros::read_std(&output.stdout);
        let err = String::from_utf8_lossy(&output.stderr);
        println!("=== stderr\n{}", err);

        $crate::Outcome::new(out, err.into_owned())
    }};
}
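
/// Normalize captured child output for test comparisons: decode the bytes as
/// (lossy) UTF-8 and strip every line break, so multi-line output collapses
/// into a single line.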
pub fn read_std(std: &[u8]) -> String {
    let out = String::from_utf8_lossy(std);
    let out = out.lines().collect::<Vec<_>>().join("\n");
    let out = out.replace("\r\n", "");
    out.replace('\n', "")
}
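
// Usage sketch for the formatted arm of `nu!` (the fixture name is a
// hypothetical example; the extra arguments are assumed to implement
// `DisplayPath`, as the macro's first arm requires):
//
//     let actual = nu!(
//         cwd: "tests/fixtures",
//         "open {} | length",
//         "sample.csv"
//     );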