nushell/crates/nu-plugin-core/src/util/sequence.rs
Devyn Cairns 0c4d5330ee Split the plugin crate (#12563)
# Description

This breaks `nu-plugin` up into four crates:

- `nu-plugin-protocol`: just the type definitions for the protocol, no
I/O. If someone wanted to wire up something more bare metal, maybe for
async I/O, they could use this.
- `nu-plugin-core`: the shared stuff between engine/plugin. Less stable
interface.
- `nu-plugin-engine`: everything required for the engine to talk to
plugins. Less stable interface.
- `nu-plugin`: everything required for the plugin to talk to the engine,
what plugin developers use. Should be the most stable interface.

No changes are made to the interface exposed by `nu-plugin`; everything it
exported before is still exported. Re-exports from `nu-plugin-protocol` or
`nu-plugin-core` are used as required, so plugins shouldn't ever have to
use those crates directly.
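
Here's a rough sketch of the re-export layering described above; the module layout and the `PluginCall` stand-in are illustrative assumptions, not the actual source:

```rust
// Hypothetical layout sketch, not the actual nu-plugin source tree.

// nu-plugin-protocol: pure type definitions, no I/O.
mod nu_plugin_protocol {
    /// Stand-in for a protocol message type.
    #[derive(Debug)]
    pub struct PluginCall;
}

// nu-plugin: re-exports the protocol types it needs, so plugin authors
// only ever depend on this one crate.
mod nu_plugin {
    pub use super::nu_plugin_protocol::PluginCall;
}

fn main() {
    // A plugin author's imports are unchanged by the split.
    let call = nu_plugin::PluginCall;
    println!("{call:?}");
}
```

The point is just the dependency direction: plugin authors keep depending on one crate, and the protocol types reach them through it.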

This should be somewhat faster to compile, since `nu-plugin-engine` and
`nu-plugin` can compile in parallel. The engine doesn't need `nu-plugin`,
and plugins don't need `nu-plugin-engine` (except for test support), which
should further reduce what has to be compiled.

The only significant change beyond splitting things up was to move the
`source` out of `PluginCustomValue` and into a new
`PluginCustomValueWithSource` type that wraps it. One bonus is that we get
rid of the `Option`, making it more type-safe; it also means that the logic
which depends on the source (actually running the plugin for custom value
ops) can live entirely within the `nu-plugin-engine` crate.
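
As an illustration of the type-safety point (field names and shapes below are hypothetical, not the real `nu-plugin-protocol`/`nu-plugin-engine` definitions): the source moves from an optional field every consumer had to check into a wrapper type that only exists when a source is actually attached.

```rust
// Hypothetical sketch only; not the actual nu-plugin types.

/// Which plugin produced a value (an engine-side concern).
#[derive(Clone, Debug)]
struct PluginSource {
    name: String,
}

/// Protocol-level custom value: pure data, no plugin handle.
/// Before the split this carried something like `source: Option<PluginSource>`.
#[derive(Clone, Debug)]
struct PluginCustomValue {
    type_name: String,
    data: Vec<u8>,
}

/// Engine-side wrapper: a source is always present by construction, so the
/// code that runs the plugin for custom value ops can't hit a missing source.
#[derive(Clone, Debug)]
struct PluginCustomValueWithSource {
    value: PluginCustomValue,
    source: PluginSource,
}

fn main() {
    let value = PluginCustomValue {
        type_name: "example".into(),
        data: vec![1, 2, 3],
    };
    let with_source = PluginCustomValueWithSource {
        value,
        source: PluginSource {
            name: "nu_plugin_example".into(),
        },
    };
    println!("{with_source:?}");
}
```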

# User-Facing Changes
- New crates.
- Added a `local-socket` feature for `nu`, to make it possible to compile
without local socket support if needed.

# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
2024-04-27 12:08:12 -05:00

use nu_protocol::ShellError;
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

/// Implements an atomically incrementing sequential series of numbers
#[derive(Debug, Default)]
pub struct Sequence(AtomicUsize);

impl Sequence {
    /// Return the next available id from a sequence, returning an error on overflow
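    ///
    /// Hypothetical usage sketch (marked `ignore`; it assumes `Sequence` is
    /// reachable from the caller's crate, which may not match the real paths):
    ///
    /// ```ignore
    /// let seq = Sequence::default();
    /// assert_eq!(0, seq.next().unwrap());
    /// assert_eq!(1, seq.next().unwrap());
    /// ```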
    #[track_caller]
    pub fn next(&self) -> Result<usize, ShellError> {
        // It's totally safe to use Relaxed ordering here, as there aren't other memory operations
        // that depend on this value having been set for safety
        //
        // We're only not using `fetch_add` so that we can check for overflow, as wrapping with the
        // identifier would lead to a serious bug - however unlikely that is.
        self.0
            .fetch_update(Relaxed, Relaxed, |current| current.checked_add(1))
            .map_err(|_| ShellError::NushellFailedHelp {
                msg: "an accumulator for identifiers overflowed".into(),
                help: format!("see {}", std::panic::Location::caller()),
            })
    }
}

#[test]
fn output_is_sequential() {
    let sequence = Sequence::default();
    for (expected, generated) in (0..1000).zip(std::iter::repeat_with(|| sequence.next())) {
        assert_eq!(expected, generated.expect("error in sequence"));
    }
}

#[test]
fn output_is_unique_even_under_contention() {
    let sequence = Sequence::default();
    std::thread::scope(|scope| {
        // Spawn four threads, all advancing the sequence simultaneously
        let threads = (0..4)
            .map(|_| {
                scope.spawn(|| {
                    (0..100000)
                        .map(|_| sequence.next())
                        .collect::<Result<Vec<_>, _>>()
                })
            })
            .collect::<Vec<_>>();
        // Collect all of the results into a single flat vec
        let mut results = threads
            .into_iter()
            .flat_map(|thread| thread.join().expect("panicked").expect("error"))
            .collect::<Vec<usize>>();
        // Check uniqueness
        results.sort();
        let initial_length = results.len();
        results.dedup();
        let deduplicated_length = results.len();
        assert_eq!(initial_length, deduplicated_length);
    })
}