nushell/crates/nu-command/tests/commands/database/into_sqlite.rs

Fix memory consumption of into sqlite (#10232)

# Description

Currently, the `into sqlite` command collects the entire input stream into a single Value, which soaks up the entire input into memory before it ever tries to write anything to the DB. This is very problematic for large inputs; for example, I tried transforming a multi-gigabyte CSV file into SQLite, and before I knew what was happening, my system's memory was completely exhausted, and I had to hard reboot to recover.

This PR fixes this problem by working directly with the pipeline stream, inserting into the DB as values are read from the stream. In order to facilitate working with the stream directly, I introduced a new `Table` struct to store the connection and a few configuration parameters, as well as to make it easier to lazily create the table on the first read value.

In addition to the purely functional fixes, a few other changes were made to the serialization and user-facing behavior.

### Serialization

Much of the preexisting code was focused on generating the exact text needed for a SQL statement. This is unneeded and less safe than using the `rusqlite` crate's serialization for native Rust types along with prepared statements.

### User-Facing Changes

Currently, the command is very liberal in the input types it accepts. The strategy is basically: if the input is a record, follow its structure and make an analogous SQL row, which is pretty reasonable. However, when it's not a record, the command tries to guess what the user wanted, makes a single-column table, and serializes the value into that one column, whatever type it may be.

This has been changed so that the command only accepts records as input. If the user wants to serialize non-record types into SQL, they must explicitly opt in by constructing a record or table first. For a utility that inserts data into SQL, I think it makes more sense to let the user choose how to convert their data, rather than make a choice for them that may surprise them. However, I understand this may be a controversial change. If the maintainers don't agree, I can change this back.

#### Long switch names

The `file_name` and `table_name` long-form switches are currently snake_case and expected to be entered as such at the command line. These have been changed to kebab-case to be more conventional.

# Tests + Formatting

To test the memory consumption, I used [this publicly available index of all Wikipedia articles](https://dumps.wikimedia.org/enwiki/20230820/), using the first 10,000, 100,000, and 1,000,000 entries, in that order. I ran the following script to benchmark the changes against the current stable release:

```nu
#!/usr/bin/nu

# let shellbin = $"($env.HOME)/src/nushell/target/aarch64-linux-android/release/nu"
let shellbin = "nu"
const dbpath = 'enwiki-index.db'

[10000, 100000, 1000000] | each {|rows|
    rm -f $dbpath;
    do {
        time -f '%M %e %U %S' $shellbin -c (
            $"bzip2 -cdk ~/enwiki-20230820-pages-articles-multistream-index.txt.bz2 | head -n ($rows) | lines | parse '{offset}:{id}:{title}' | update cells -c [offset, id] { into int } | into sqlite ($dbpath)"
        )
    }
    | complete
    | get stderr
    | str trim
    | parse '{rss_max} {real} {user} {kernel}'
    | update cells -c [rss_max] { $"($in)kb" | into filesize }
    | update cells -c [real, user, kernel] { $"($in)sec" | into duration }
    | insert rows $rows
    | roll right
} | flatten | to nuon
```

This yields the following results.

Current stable release:

|rows|rss_max|real|user|kernel|
|-|-|-|-|-|
|10000|53.6 MiB|770ms|460ms|420ms|
|100000|209.6 MiB|6sec 940ms|3sec 740ms|4sec 380ms|
|1000000|1.7 GiB|1min 8sec 810ms|38sec 690ms|42sec 550ms|

This PR:

|rows|rss_max|real|user|kernel|
|-|-|-|-|-|
|10000|38.2 MiB|780ms|440ms|410ms|
|100000|39.8 MiB|6sec 450ms|3sec 530ms|4sec 160ms|
|1000000|39.8 MiB|1min 3sec 230ms|37sec 440ms|40sec 180ms|

# Note

I started this branch kind of at the same time as my others, but I understand the feedback that smaller PRs are preferred. Let me know if it would be better to split this up. I do think the scope of the changes is on the bigger side even without the behavior changes I mentioned, so I'm not sure that will help this particular PR very much, but I'm happy to oblige on request.
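To make the two core ideas above concrete — creating the table lazily from the first streamed record, and letting `rusqlite` bind native values through a prepared statement instead of splicing text into SQL — here is a minimal, self-contained sketch. It is an illustration under assumptions, not the PR's actual `Table` struct: the `LazyTable` type, the TEXT-only schema inference, and the hard-coded table name `main` are simplifications invented for this example.

```rust
use rusqlite::{params_from_iter, Connection, ToSql};

/// Simplified stand-in for the PR's `Table`: it owns the connection and
/// creates the SQL table lazily, from the columns of the first record seen.
struct LazyTable {
    conn: Connection,
    created: bool,
}

impl LazyTable {
    fn new(conn: Connection) -> Self {
        LazyTable { conn, created: false }
    }

    /// Insert one record (column-name/value pairs) as it arrives from the
    /// stream, so only a single row is ever held in memory at a time.
    fn insert(&mut self, row: &[(&str, &dyn ToSql)]) -> rusqlite::Result<()> {
        if !self.created {
            // Toy schema inference: every column becomes TEXT. The real
            // command maps Nushell types to SQL types (see the tests below).
            let cols: Vec<String> = row
                .iter()
                .map(|(name, _)| format!("{name} TEXT"))
                .collect();
            self.conn.execute(
                &format!("CREATE TABLE IF NOT EXISTS main ({})", cols.join(", ")),
                [],
            )?;
            self.created = true;
        }

        // Prepared statement with positional placeholders: rusqlite
        // serializes the native Rust values itself, so no SQL text is
        // ever assembled from user data.
        let placeholders: Vec<String> = (1..=row.len()).map(|i| format!("?{i}")).collect();
        let mut stmt = self.conn.prepare_cached(&format!(
            "INSERT INTO main VALUES ({})",
            placeholders.join(", ")
        ))?;
        stmt.execute(params_from_iter(row.iter().map(|(_, value)| *value)))?;
        Ok(())
    }
}

fn main() -> rusqlite::Result<()> {
    let mut table = LazyTable::new(Connection::open_in_memory()?);
    // Pretend these records arrive one at a time from a pipeline stream.
    for (id, title) in [(1i64, "foo"), (2, "bar")] {
        table.insert(&[("id", &id as &dyn ToSql), ("title", &title as &dyn ToSql)])?;
    }
    Ok(())
}
```

The safety gain described in the Serialization section comes from the same mechanism sketched here: values travel to SQLite as bound parameters, so quoting and escaping of user data never enter the picture.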
use std::{io::Write, path::PathBuf};
use chrono::{DateTime, FixedOffset};
use nu_protocol::{ast::PathMember, record, Span, Value};
use nu_test_support::{
fs::{line_ending, Stub},
nu, pipeline,
playground::{Dirs, Playground},
};
use rand::{
distributions::{Alphanumeric, DistString, Standard},
prelude::Distribution,
rngs::StdRng,
Rng, SeedableRng,
};

#[test]
fn into_sqlite_schema() {
Playground::setup("schema", |dirs, _| {
let testdb = make_sqlite_db(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary];
[true, 1, 2.0, 1kb, 1sec, "2023-09-10 11:30:00", "foo", ("binary" | into binary)],
[false, 2, 3.0, 2mb, 4wk, "2020-09-10 12:30:00", "bar", ("wut" | into binary)],
]"#,
);
let conn = rusqlite::Connection::open(testdb).unwrap();
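        // pragma_table_info returns one row per column of the named table,
        // including its declared name and type — enough to check the inferred schema.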
let mut stmt = conn.prepare("SELECT * FROM pragma_table_info(?1)").unwrap();
let actual_rows: Vec<_> = stmt
.query_and_then(["main"], |row| -> rusqlite::Result<_, rusqlite::Error> {
let name: String = row.get("name").unwrap();
let col_type: String = row.get("type").unwrap();
Ok((name, col_type))
})
.unwrap()
.map(|row| row.unwrap())
.collect();
let expected_rows = vec![
("somebool".into(), "BOOLEAN".into()),
("someint".into(), "INTEGER".into()),
("somefloat".into(), "REAL".into()),
("somefilesize".into(), "INTEGER".into()),
("someduration".into(), "BIGINT".into()),
("somedate".into(), "TEXT".into()),
("somestring".into(), "TEXT".into()),
("somebinary".into(), "BLOB".into()),
];
assert_eq!(expected_rows, actual_rows);
});
}

#[test]
fn into_sqlite_values() {
Playground::setup("values", |dirs, _| {
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[true, 1, 2.0, 1kb, 1sec, "2023-09-10T11:30:00-00:00", "foo", ("binary" | into binary), 1],
[false, 2, 3.0, 2mb, 4wk, "2020-09-10T12:30:00-00:00", "bar", ("wut" | into binary), null],
]"#,
None,
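            // Expected rows; TestRow fields map to the columns above, with
            // filesize stored in bytes and duration in nanoseconds.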
vec![
TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Integer(1),
),
TestRow(
false,
2,
3.0,
2000000,
2419200000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"bar".into(),
b"wut".to_vec(),
rusqlite::types::Value::Null,
),
],
);
});
}

/// When we create a new table, we use the first row to infer the schema of the
/// table. In the event that a column is null, we can't know what type the row
/// should be, so we just assume TEXT.
#[test]
fn into_sqlite_values_first_column_null() {
Playground::setup("values", |dirs, _| {
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[false, 2, 3.0, 2mb, 4wk, "2020-09-10T12:30:00-00:00", "bar", ("wut" | into binary), null],
[true, 1, 2.0, 1kb, 1sec, "2023-09-10T11:30:00-00:00", "foo", ("binary" | into binary), 1],
]"#,
None,
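            // somenull is null in the first row, so the column is created as TEXT;
            // the integer 1 in the second row therefore round-trips as Text("1").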
vec![
TestRow(
false,
2,
3.0,
2000000,
2419200000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"bar".into(),
b"wut".to_vec(),
rusqlite::types::Value::Null,
),
TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Text("1".into()),
),
],
);
});
}

/// If the DB / table already exist, then the insert should end up with the
/// right data types no matter if the first row is null or not.
#[test]
fn into_sqlite_values_first_column_null_preexisting_db() {
Playground::setup("values", |dirs, _| {
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[true, 1, 2.0, 1kb, 1sec, "2023-09-10T11:30:00-00:00", "foo", ("binary" | into binary), 1],
[false, 2, 3.0, 2mb, 4wk, "2020-09-10T12:30:00-00:00", "bar", ("wut" | into binary), null],
]"#,
None,
vec![
TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Integer(1),
),
TestRow(
false,
2,
3.0,
2000000,
2419200000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"bar".into(),
b"wut".to_vec(),
rusqlite::types::Value::Null,
),
],
);
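        // Insert into the now-existing table: its column types are already fixed,
        // so a leading null no longer downgrades somenull to TEXT and Integer(3) survives.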
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[true, 3, 5.0, 3.1mb, 1wk, "2020-09-10T12:30:00-00:00", "baz", ("huh" | into binary), null],
[true, 3, 5.0, 3.1mb, 1wk, "2020-09-10T12:30:00-00:00", "baz", ("huh" | into binary), 3],
]"#,
None,
vec![
TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Integer(1),
),
TestRow(
false,
2,
3.0,
2000000,
2419200000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"bar".into(),
b"wut".to_vec(),
rusqlite::types::Value::Null,
),
TestRow(
true,
3,
5.0,
3100000,
604800000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"baz".into(),
b"huh".to_vec(),
rusqlite::types::Value::Null,
),
TestRow(
true,
3,
5.0,
3100000,
604800000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"baz".into(),
b"huh".to_vec(),
rusqlite::types::Value::Integer(3),
),
],
);
});
}

/// Opening a preexisting database should append to it
#[test]
fn into_sqlite_existing_db_append() {
Playground::setup("existing_db_append", |dirs, _| {
// create a new DB with only one row
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[true, 1, 2.0, 1kb, 1sec, "2023-09-10T11:30:00-00:00", "foo", ("binary" | into binary), null],
]"#,
None,
vec![TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Null,
)],
);
// open the same DB again and write one row
insert_test_rows(
&dirs,
r#"[
[somebool, someint, somefloat, somefilesize, someduration, somedate, somestring, somebinary, somenull];
[false, 2, 3.0, 2mb, 4wk, "2020-09-10T12:30:00-00:00", "bar", ("wut" | into binary), null],
]"#,
None,
// it should have both rows
vec![
TestRow(
true,
1,
2.0,
1000,
1000000000,
DateTime::parse_from_rfc3339("2023-09-10T11:30:00-00:00").unwrap(),
"foo".into(),
b"binary".to_vec(),
rusqlite::types::Value::Null,
),
TestRow(
false,
2,
3.0,
2000000,
2419200000000000,
DateTime::parse_from_rfc3339("2020-09-10T12:30:00-00:00").unwrap(),
"bar".into(),
b"wut".to_vec(),
rusqlite::types::Value::Null,
),
],
);
});
}

/// Test inserting a good number of randomly generated rows to test an actual
/// streaming pipeline instead of a simple value
#[test]
fn into_sqlite_big_insert() {
Playground::setup("big_insert", |dirs, playground| {
const NUM_ROWS: usize = 10_000;
const NUON_FILE_NAME: &str = "data.nuon";
let nuon_path = dirs.test().join(NUON_FILE_NAME);
playground.with_files(vec![Stub::EmptyFile(&nuon_path.to_string_lossy())]);
let mut expected_rows = Vec::new();
let mut nuon_file = std::fs::OpenOptions::new()
.write(true)
.open(&nuon_path)
.unwrap();
        // write the header
        writeln!(&mut nuon_file, "[").unwrap();
for row in std::iter::repeat_with(TestRow::random).take(NUM_ROWS) {
let mut value: Value = row.clone().into();
// HACK: Convert to a string to get around this: https://github.com/nushell/nushell/issues/9186
value
.upsert_cell_path(
&[PathMember::String {
val: "somedate".into(),
span: Span::unknown(),
optional: false,
}],
Box::new(|dateval| {
Value::string(dateval.coerce_string().unwrap(), dateval.span())
}),
)
.unwrap();
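// Serialize each row to a single compact NUON line and append it to the test input file.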
let nuon = nuon::to_nuon(&value, true, None, None, Some(Span::unknown())).unwrap()
+ &line_ending();
nuon_file.write_all(nuon.as_bytes()).unwrap();
expected_rows.push(row);
}
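// Stream the file back line-by-line through `from nuon` and verify the rows
// land in the database intact.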
insert_test_rows(
&dirs,
&format!(
"open --raw {} | lines | each {{ from nuon }}",
nuon_path.to_string_lossy()
),
None,
expected_rows,
);
});
}
/// Empty in, empty out: an empty list input should create no tables at all,
/// so `sqlite_schema` must come back empty.
#[test]
fn into_sqlite_empty() {
Playground::setup("empty", |dirs, _| {
insert_test_rows(&dirs, r#"[]"#, Some("SELECT * FROM sqlite_schema;"), vec![]);
});
}
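/// A single row of test data covering every value type that `into sqlite` is
/// expected to serialize: bool, int, float, filesize, duration, date, string,
/// binary, and null.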
#[derive(Debug, PartialEq, Clone)]
struct TestRow(
bool,
i64,
f64,
i64,
i64,
DateTime<FixedOffset>,
String,
Vec<u8>,
rusqlite::types::Value,
);
impl TestRow {
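/// Generate a row of random test data via the `Distribution<TestRow>` impl below.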
pub fn random() -> Self {
StdRng::from_entropy().sample(Standard)
}
}
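// Convert a test row into the Nushell record that gets piped into `into sqlite`.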
impl From<TestRow> for Value {
fn from(row: TestRow) -> Self {
Value::record(
record! {
"somebool" => Value::bool(row.0, Span::unknown()),
"someint" => Value::int(row.1, Span::unknown()),
"somefloat" => Value::float(row.2, Span::unknown()),
"somefilesize" => Value::filesize(row.3, Span::unknown()),
"someduration" => Value::duration(row.4, Span::unknown()),
"somedate" => Value::date(row.5, Span::unknown()),
"somestring" => Value::string(row.6, Span::unknown()),
"somebinary" => Value::binary(row.7, Span::unknown()),
"somenull" => Value::nothing(Span::unknown()),
},
Span::unknown(),
)
}
}
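// Read a test row back out of a SQLite result row, one named column per field.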
impl<'r> TryFrom<&rusqlite::Row<'r>> for TestRow {
type Error = rusqlite::Error;
fn try_from(row: &rusqlite::Row) -> Result<Self, Self::Error> {
let somebool: bool = row.get("somebool").unwrap();
let someint: i64 = row.get("someint").unwrap();
let somefloat: f64 = row.get("somefloat").unwrap();
let somefilesize: i64 = row.get("somefilesize").unwrap();
let someduration: i64 = row.get("someduration").unwrap();
let somedate: DateTime<FixedOffset> = row.get("somedate").unwrap();
let somestring: String = row.get("somestring").unwrap();
let somebinary: Vec<u8> = row.get("somebinary").unwrap();
let somenull: rusqlite::types::Value = row.get("somenull").unwrap();
Ok(TestRow(
somebool,
someint,
somefloat,
somefilesize,
someduration,
somedate,
somestring,
somebinary,
somenull,
))
}
}
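// Allow `rng.sample(Standard)` to produce a fully random `TestRow`.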
impl Distribution<TestRow> for Standard {
fn sample<R>(&self, rng: &mut R) -> TestRow
where
R: rand::Rng + ?Sized,
{
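// Pick a random timestamp between the Unix epoch and roughly the year 2043,
// then pin it to a fixed offset so it round-trips through SQLite unambiguously.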
let dt = DateTime::from_timestamp_millis(rng.gen_range(0..2324252554000))
.unwrap()
.fixed_offset();
let rand_string = Alphanumeric.sample_string(rng, 10);
// limit the size of the numbers to work around
// https://github.com/nushell/nushell/issues/10612
let filesize = rng.gen_range(-1024..=1024);
let duration = rng.gen_range(-1024..=1024);
TestRow(
rng.gen(),
rng.gen(),
rng.gen(),
filesize,
duration,
dt,
rand_string,
rng.gen::<u64>().to_be_bytes().to_vec(),
rusqlite::types::Value::Null,
)
}
}
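/// Run `<nu_table> | into sqlite <db>` inside the test directory and return
/// the path of the database file that was created.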
fn make_sqlite_db(dirs: &Dirs, nu_table: &str) -> PathBuf {
let testdir = dirs.test();
let testdb_path =
testdir.join(testdir.file_name().unwrap().to_str().unwrap().to_owned() + ".db");
let testdb = testdb_path.to_str().unwrap();
let nucmd = nu!(
cwd: testdir,
pipeline(&format!("{nu_table} | into sqlite {testdb}"))
);
assert!(nucmd.status.success());
testdb_path
}
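/// Insert the rows produced by the `nu_table` pipeline into a fresh SQLite
/// database, then run `sql_query` (defaulting to `SELECT * FROM main;`) and
/// assert that the queried rows equal `expected`.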
fn insert_test_rows(dirs: &Dirs, nu_table: &str, sql_query: Option<&str>, expected: Vec<TestRow>) {
let sql_query = sql_query.unwrap_or("SELECT * FROM main;");
let testdb = make_sqlite_db(dirs, nu_table);
let conn = rusqlite::Connection::open(testdb).unwrap();
let mut stmt = conn.prepare(sql_query).unwrap();
let actual_rows: Vec<_> = stmt
.query_and_then([], |row| TestRow::try_from(row))
.unwrap()
.map(|row| row.unwrap())
.collect();
assert_eq!(expected, actual_rows);
}