feat: rework record sync for improved reliability

So, to tell a story:

1. We introduced the record sync, intended to be the new algorithm for
   syncing history.
2. On top of this, I added the KV store. This was intended as a simple
   test of the record sync, and to see if people wanted that sort of
   functionality.
3. History remained syncing via the old means: while it had issues, it
   worked more-or-less OK, and we are aware of its flaws.
4. If KV syncing worked OK, history would be moved across.

KV syncing ran OK for six months or so, so I started to move history
across. For several weeks, I ran a local fork of Atuin + the server that
synced via records instead.

The record store maintained ordering via a linked list, which was a
mistake. It performed well in testing, but was really difficult to debug
and reason about. So when a few small sync issues occurred, they took an
extremely long time to track down.

This PR is huge, which I regret. It replaces the "parent" relationship
that records once had (each record pointing to the previous one) with a
simple index (generally referred to as idx). This also meant changing
the record index, which referenced "tails": the last item in each chain.
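Roughly, the shape of the change looks like this. This is a simplified
sketch only, not the actual atuin-common types (the real Record carries
more fields, such as host, tag, version, and data):

```rust
// Simplified sketch only, not the actual atuin-common types.

// Before: each record pointed at the previous one, forming a linked list.
#[allow(dead_code)]
struct OldRecord {
    id: String,
    parent: Option<String>, // id of the previous record; None at the head
}

// After: a plain sequence number gives the ordering directly.
struct NewRecord {
    id: String,
    idx: u64, // 0, 1, 2, ... within its store
}

fn main() {
    let mut records = vec![
        NewRecord { id: "c".into(), idx: 2 },
        NewRecord { id: "a".into(), idx: 0 },
        NewRecord { id: "b".into(), idx: 1 },
    ];
    // Ordering is recovered with a sort, not a chain traversal.
    records.sort_by_key(|r| r.idx);
    let ids: Vec<&str> = records.iter().map(|r| r.id.as_str()).collect();
    assert_eq!(ids, ["a", "b", "c"]);
}
```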

Now that we use an "array" instead of a linked list, that logic has also
been replaced, and is much simpler :D

Same for the queries that act on this data.
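To illustrate why the queries got simpler, here is a hypothetical
in-memory stand-in (the real code issues SQL via sqlx, not this): with a
linked list, finding the tail means scanning for the record that no other
record names as its parent, whereas with idx the latest record is a
single max lookup (in SQL, roughly an ORDER BY idx DESC LIMIT 1).

```rust
use std::collections::HashSet;

// Hypothetical in-memory stand-ins; the real code runs SQL via sqlx.

struct ChainRecord {
    id: &'static str,
    parent: Option<&'static str>, // previous record in the chain
}

struct IdxRecord {
    id: &'static str,
    idx: u64,
}

// Linked-list scheme: the tail is the record that no other record names
// as its parent - a full scan, and easy to get subtly wrong.
fn tail<'a>(records: &'a [ChainRecord]) -> Option<&'a ChainRecord> {
    let parents: HashSet<&str> = records.iter().filter_map(|r| r.parent).collect();
    records.iter().find(|r| !parents.contains(r.id))
}

// idx scheme: the latest record is simply the one with the largest idx.
fn latest<'a>(records: &'a [IdxRecord]) -> Option<&'a IdxRecord> {
    records.iter().max_by_key(|r| r.idx)
}

fn main() {
    let chain = [
        ChainRecord { id: "a", parent: None },
        ChainRecord { id: "b", parent: Some("a") },
        ChainRecord { id: "c", parent: Some("b") },
    ];
    assert_eq!(tail(&chain).unwrap().id, "c");

    let flat = [
        IdxRecord { id: "a", idx: 0 },
        IdxRecord { id: "b", idx: 1 },
        IdxRecord { id: "c", idx: 2 },
    ];
    assert_eq!(latest(&flat).unwrap().id, "c");
}
```

Even the simple "what's the tail?" question touches every record in the
old scheme; the idx version maps directly onto an index-backed query.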

----

This isn't final - we still need to add:

1. Proper server/client error handling, which has been lacking for a
   while
2. The actual history implementation on top. This exists in a branch,
   just without deletions. It won't be much work to add that; I just
   don't want to make this any larger than it already is.

The _only_ caveat here is that we effectively lose data synced via the
old record store - that is, the KV data from before.

It hasn't been deleted or anything, just no longer hooked up, so it's
totally possible to write a migration script. I just need to do that.
Ellie Huxtable 2024-01-01 14:12:20 +00:00
parent e2a4e9cf13
commit 29f4a93e30
50 changed files with 1513 additions and 1208 deletions


@@ -1,84 +0,0 @@
-body:
-  - type: input
-    attributes:
-      label: Operating System
-      description: What operating system are you using?
-      placeholder: "Example: macOS Big Sur"
-    validations:
-      required: true
-  - type: input
-    attributes:
-      label: Shell
-      description: What shell are you using?
-      placeholder: "Example: zsh 5.8.1"
-    validations:
-      required: true
-  - type: dropdown
-    attributes:
-      label: Version
-      description: What version of atuin are you running?
-      multiple: false
-      options: # how often will I forget to update this? a lot.
-        - v17.0.0 (Default)
-        - v16.0.0
-        - v15.0.0
-        - v14.0.1
-        - v14.0.0
-        - v13.0.1
-        - v13.0.0
-        - v12.0.0
-        - v11.0.0
-        - v0.10.0
-        - v0.9.1
-        - v0.9.0
-        - v0.8.1
-        - v0.8.0
-        - v0.7.2
-        - v0.7.1
-        - v0.7.0
-        - v0.6.4
-        - v0.6.3
-      default: 0
-    validations:
-      required: true
-  - type: checkboxes
-    attributes:
-      label: Self hosted
-      description: Are you self hosting atuin server?
-      options:
-        - label: I am self hosting atuin server
-  - type: checkboxes
-    attributes:
-      label: Search the issues
-      description: Did you search the issues and discussions for your problem?
-      options:
-        - label: I checked that someone hasn't already asked about the same issue
-          required: true
-  - type: textarea
-    attributes:
-      label: Behaviour
-      description: "Please describe the issue - what you expected to happen, what actually happened"
-  - type: textarea
-    attributes:
-      label: Logs
-      description: "If possible, please include logs from atuin, especially if you self host the server - ATUIN_LOG=debug"
-  - type: textarea
-    attributes:
-      label: Extra information
-      description: "Anything else you'd like to add?"
-  - type: checkboxes
-    attributes:
-      label: Code of Conduct
-      description: The Code of Conduct helps create a safe space for everyone. We require
-        that everyone agrees to it.
-      options:
-        - label: I agree to follow this project's [Code of Conduct](https://github.com/atuinsh/atuin/blob/main/CODE_OF_CONDUCT.md)
-          required: true

.gitignore (vendored, 1 change)

@@ -1,3 +1,4 @@
+.DS_Store
 /target
 */target
 .env

Cargo.lock (generated, 324 changes)

[Generated lockfile diff omitted: dependency version bumps plus added and removed entries.]
] ]
[[package]] [[package]]
@ -3711,9 +3658,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
[[package]] [[package]]
name = "unicode-bidi" name = "unicode-bidi"
version = "0.3.14" version = "0.3.13"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f2528f27a9eb2b21e69c95319b30bd0efd85d09c379741b0f78ea1d86be2416" checksum = "92888ba5573ff080736b3648696b70cafad7d250551175acbaa4e0385b3e1460"
[[package]] [[package]]
name = "unicode-ident" name = "unicode-ident"
@ -3799,7 +3746,6 @@ version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e395fcf16a7a3d8127ec99782007af141946b4795001f876d54fb0d55978560" checksum = "5e395fcf16a7a3d8127ec99782007af141946b4795001f876d54fb0d55978560"
dependencies = [ dependencies = [
"atomic",
"getrandom", "getrandom",
"serde", "serde",
] ]
@ -3858,7 +3804,7 @@ dependencies = [
"once_cell", "once_cell",
"proc-macro2", "proc-macro2",
"quote", "quote",
"syn 2.0.41", "syn 2.0.39",
"wasm-bindgen-shared", "wasm-bindgen-shared",
] ]
@ -3892,7 +3838,7 @@ checksum = "f0eb82fcb7930ae6219a7ecfd55b217f5f0893484b7a13022ebb2b2bf20b5283"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
"syn 2.0.41", "syn 2.0.39",
"wasm-bindgen-backend", "wasm-bindgen-backend",
"wasm-bindgen-shared", "wasm-bindgen-shared",
] ]
@ -4299,22 +4245,22 @@ checksum = "09041cd90cf85f7f8b2df60c646f853b7f535ce68f85244eb6731cf89fa498ec"
[[package]] [[package]]
name = "zerocopy" name = "zerocopy"
version = "0.7.31" version = "0.7.28"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c4061bedbb353041c12f413700357bec76df2c7e2ca8e4df8bac24c6bf68e3d" checksum = "7d6f15f7ade05d2a4935e34a457b936c23dc70a05cc1d97133dc99e7a3fe0f0e"
dependencies = [ dependencies = [
"zerocopy-derive", "zerocopy-derive",
] ]
[[package]] [[package]]
name = "zerocopy-derive" name = "zerocopy-derive"
version = "0.7.31" version = "0.7.28"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b3c129550b3e6de3fd0ba67ba5c81818f9805e58b8d7fee80a3a59d2c9fc601a" checksum = "dbbad221e3f78500350ecbd7dfa4e63ef945c05f4c61cb7f4d3f84cd0bba649b"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
"syn 2.0.41", "syn 2.0.39",
] ]
[[package]] [[package]]
@ -4334,5 +4280,5 @@ checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
"syn 2.0.41", "syn 2.0.39",
] ]

@ -29,20 +29,24 @@ time = { version = "0.3", features = [
] } ] }
clap = { version = "4.0.18", features = ["derive"] } clap = { version = "4.0.18", features = ["derive"] }
config = { version = "0.13", default-features = false, features = ["toml"] } config = { version = "0.13", default-features = false, features = ["toml"] }
directories = "5.0.1" directories = "4"
eyre = "0.6" eyre = "0.6"
fs-err = "2.9" fs-err = "2.9"
interim = { version = "0.1.0", features = ["time"] } interim = { version = "0.1.0", features = ["time"] }
itertools = "0.12.0" itertools = "0.11.0"
rand = { version = "0.8.5", features = ["std"] } rand = { version = "0.8.5", features = ["std"] }
semver = "1.0.20" semver = "1.0.20"
serde = { version = "1.0.193", features = ["derive"] } # https://github.com/serde-rs/serde/issues/2538
# I don't trust dtolnay with our user's builds. especially as we
# have things like encryption keys
serde = { version = "1.0.145, <=1.0.171", features = ["derive"] }
serde_json = "1.0.108" serde_json = "1.0.108"
tokio = { version = "1", features = ["full"] } tokio = { version = "1", features = ["full"] }
uuid = { version = "1.3", features = ["v4", "v7", "serde"] } uuid = { version = "1.3", features = ["v4", "serde"] }
whoami = "1.1.2" whoami = "1.1.2"
typed-builder = "0.18.0" typed-builder = "0.15.0"
pretty_assertions = "1.3.0" pretty_assertions = "1.3.0"
thiserror = "1.0"
[workspace.dependencies.reqwest] [workspace.dependencies.reqwest]
version = "0.11" version = "0.11"

@ -1,4 +1,4 @@
FROM lukemathwalker/cargo-chef:latest-rust-1.74.1-buster AS chef FROM lukemathwalker/cargo-chef:latest-rust-1.74.0-buster AS chef
WORKDIR app WORKDIR app
FROM chef AS planner FROM chef AS planner

@ -1,7 +1,7 @@
<p align="center"> <p align="center">
<picture> <picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/atuinsh/atuin/assets/53315310/13216a1d-1ac0-4c99-b0eb-d88290fe0efd"> <source media="(prefers-color-scheme: dark)" srcset="https://github.com/atuinsh/atuin/assets/53315310/13216a1d-1ac0-4c99-b0eb-d88290fe0efd">
<img alt="Text changing depending on mode. Light: 'So light!' Dark: 'So dark!'" src="https://github.com/atuinsh/atuin/assets/53315310/08bc86d4-a781-4aaa-8d7e-478ae6bcd129"> <img alt="Atuin logo" src="https://github.com/atuinsh/atuin/assets/53315310/08bc86d4-a781-4aaa-8d7e-478ae6bcd129">
</picture> </picture>
</p> </p>
@ -19,7 +19,6 @@
<a href="https://discord.gg/Fq8bJSKPHh"><img src="https://img.shields.io/discord/954121165239115808" /></a> <a href="https://discord.gg/Fq8bJSKPHh"><img src="https://img.shields.io/discord/954121165239115808" /></a>
<a rel="me" href="https://hachyderm.io/@atuin"><img src="https://img.shields.io/mastodon/follow/109944632283122560?domain=https%3A%2F%2Fhachyderm.io&style=social"/></a> <a rel="me" href="https://hachyderm.io/@atuin"><img src="https://img.shields.io/mastodon/follow/109944632283122560?domain=https%3A%2F%2Fhachyderm.io&style=social"/></a>
<a href="https://twitter.com/atuinsh"><img src="https://img.shields.io/twitter/follow/atuinsh?style=social" /></a> <a href="https://twitter.com/atuinsh"><img src="https://img.shields.io/twitter/follow/atuinsh?style=social" /></a>
<a href="https://actuated.dev/"><img alt="Arm CI sponsored by Actuated" src="https://docs.actuated.dev/images/actuated-badge.png" width="120px"></img></a>
</p> </p>
@ -108,11 +107,7 @@ This will sign you up for the default sync server, hosted by me. Everything is e
Read more below for offline-only usage, or for hosting your own server. Read more below for offline-only usage, or for hosting your own server.
``` ```
# bash/zsh/etc bash <(curl https://raw.githubusercontent.com/atuinsh/atuin/main/install.sh)
bash <(curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh)
# fish
bash (curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh | psub)
atuin register -u <USERNAME> -e <EMAIL> atuin register -u <USERNAME> -e <EMAIL>
atuin import auto atuin import auto
@ -217,16 +212,6 @@ pacman -S atuin
And then follow [the shell setup](#shell-plugin) And then follow [the shell setup](#shell-plugin)
### Xbps
Atuin is available in the Void Linux [repository](https://github.com/void-linux/void-packages/tree/master/srcpkgs/atuin):
```
sudo xbps-install atuin
```
And then follow [the shell setup](#shell-plugin)
### Termux ### Termux
Atuin is available in the Termux package repository: Atuin is available in the Termux package repository:

@ -48,6 +48,7 @@ rmp = { version = "0.8.11" }
typed-builder = { workspace = true } typed-builder = { workspace = true }
tokio = { workspace = true } tokio = { workspace = true }
semver = { workspace = true } semver = { workspace = true }
thiserror = { workspace = true }
futures = "0.3" futures = "0.3"
crypto_secretbox = "0.1.1" crypto_secretbox = "0.1.1"
generic-array = { version = "0.14", features = ["serde"] } generic-array = { version = "0.14", features = ["serde"] }

@ -39,7 +39,7 @@
# filter_mode = "global" # filter_mode = "global"
## With workspace filtering enabled, Atuin will filter for commands executed ## With workspace filtering enabled, Atuin will filter for commands executed
## in any directory within a git repository tree (default: false) ## in any directory within a git repositiry tree (default: false)
# workspaces = false # workspaces = false
## which filter mode to use when atuin is invoked from a shell up-key binding ## which filter mode to use when atuin is invoked from a shell up-key binding

@ -0,0 +1,15 @@
-- Add migration script here
create table if not exists store (
id text primary key, -- globally unique ID
idx integer, -- incrementing integer ID unique per (host, tag)
host text not null, -- references the host row
tag text not null,
timestamp integer not null,
version text not null,
data blob not null,
cek blob not null
);
create unique index record_uniq ON store(host, tag, idx);
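
The `idx` column is what replaces the old parent-pointer linked list: each (host, tag) pair carries its own monotonically increasing index, so ordering and pagination become plain range queries instead of pointer chasing. A minimal sketch in Python's `sqlite3`, mirroring the migration above (the `append`/`next_after` helpers and their query shapes are illustrative assumptions, not the actual Atuin queries):

```python
import sqlite3

# In-memory store mirroring the migration above.
con = sqlite3.connect(":memory:")
con.executescript("""
create table if not exists store (
    id text primary key,
    idx integer,
    host text not null,
    tag text not null,
    timestamp integer not null,
    version text not null,
    data blob not null,
    cek blob not null
);
create unique index record_uniq on store(host, tag, idx);
""")

def append(con, rid, host, tag, data):
    # idx is scoped per (host, tag): one more than the current maximum.
    (last,) = con.execute(
        "select coalesce(max(idx), -1) from store where host = ? and tag = ?",
        (host, tag),
    ).fetchone()
    con.execute(
        "insert into store(id, idx, host, tag, timestamp, version, data, cek) "
        "values (?, ?, ?, ?, 0, 'v0', ?, x'00')",
        (rid, last + 1, host, tag, data),
    )

def next_after(con, host, tag, start, count):
    # "Give me `count` records with idx >= start" -- a plain range scan.
    return con.execute(
        "select idx, data from store where host = ? and tag = ? and idx >= ? "
        "order by idx limit ?",
        (host, tag, start, count),
    ).fetchall()

for i in range(5):
    append(con, f"rec-{i}", "host-a", "kv", b"payload")

print(next_after(con, "host-a", "kv", 2, 2))  # -> [(2, b'payload'), (3, b'payload')]
```

The unique index on `(host, tag, idx)` also means a sync conflict (two records claiming the same slot) fails loudly at insert time rather than silently forking a chain.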

@ -5,19 +5,16 @@ use std::time::Duration;
use eyre::{bail, Result}; use eyre::{bail, Result};
use reqwest::{ use reqwest::{
header::{HeaderMap, AUTHORIZATION, USER_AGENT}, header::{HeaderMap, AUTHORIZATION, USER_AGENT},
Response, StatusCode, Url, StatusCode, Url,
}; };
use atuin_common::record::{EncryptedData, HostId, Record, RecordIdx};
use atuin_common::{ use atuin_common::{
api::{ api::{
AddHistoryRequest, CountResponse, DeleteHistoryRequest, ErrorResponse, IndexResponse, AddHistoryRequest, CountResponse, DeleteHistoryRequest, ErrorResponse, IndexResponse,
LoginRequest, LoginResponse, RegisterResponse, StatusResponse, SyncHistoryResponse, LoginRequest, LoginResponse, RegisterResponse, StatusResponse, SyncHistoryResponse,
}, },
record::RecordIndex, record::RecordStatus,
};
use atuin_common::{
api::{ATUIN_CARGO_VERSION, ATUIN_HEADER_VERSION, ATUIN_VERSION},
record::{EncryptedData, HostId, Record, RecordId},
}; };
use semver::Version; use semver::Version;
use time::format_description::well_known::Rfc3339; use time::format_description::well_known::Rfc3339;
@ -55,15 +52,10 @@ pub async fn register(
let resp = client let resp = client
.post(url) .post(url)
.header(USER_AGENT, APP_USER_AGENT) .header(USER_AGENT, APP_USER_AGENT)
.header(ATUIN_HEADER_VERSION, ATUIN_CARGO_VERSION)
.json(&map) .json(&map)
.send() .send()
.await?; .await?;
if !ensure_version(&resp)? {
bail!("could not register user due to version mismatch");
}
if !resp.status().is_success() { if !resp.status().is_success() {
let error = resp.json::<ErrorResponse>().await?; let error = resp.json::<ErrorResponse>().await?;
bail!("failed to register user: {}", error.reason); bail!("failed to register user: {}", error.reason);
@ -84,10 +76,6 @@ pub async fn login(address: &str, req: LoginRequest) -> Result<LoginResponse> {
.send() .send()
.await?; .await?;
if !ensure_version(&resp)? {
bail!("could not login due to version mismatch");
}
if resp.status() != reqwest::StatusCode::OK { if resp.status() != reqwest::StatusCode::OK {
let error = resp.json::<ErrorResponse>().await?; let error = resp.json::<ErrorResponse>().await?;
bail!("invalid login details: {}", error.reason); bail!("invalid login details: {}", error.reason);
@ -118,31 +106,6 @@ pub async fn latest_version() -> Result<Version> {
Ok(version) Ok(version)
} }
pub fn ensure_version(response: &Response) -> Result<bool> {
let version = response.headers().get(ATUIN_HEADER_VERSION);
let version = if let Some(version) = version {
match version.to_str() {
Ok(v) => Version::parse(v),
Err(e) => bail!("failed to parse server version: {:?}", e),
}
} else {
// if there is no version header, then the newest this server can possibly be is 17.1.0
Version::parse("17.1.0")
}?;
// If the client is newer than the server
if version.major < ATUIN_VERSION.major {
println!("Atuin version mismatch! In order to successfully sync, the server needs to run a newer version of Atuin");
println!("Client: {}", ATUIN_CARGO_VERSION);
println!("Server: {}", version);
return Ok(false);
}
Ok(true)
}
impl<'a> Client<'a> { impl<'a> Client<'a> {
pub fn new( pub fn new(
sync_addr: &'a str, sync_addr: &'a str,
@ -153,9 +116,6 @@ impl<'a> Client<'a> {
let mut headers = HeaderMap::new(); let mut headers = HeaderMap::new();
headers.insert(AUTHORIZATION, format!("Token {session_token}").parse()?); headers.insert(AUTHORIZATION, format!("Token {session_token}").parse()?);
// used for semver server check
headers.insert(ATUIN_HEADER_VERSION, ATUIN_CARGO_VERSION.parse()?);
Ok(Client { Ok(Client {
sync_addr, sync_addr,
client: reqwest::Client::builder() client: reqwest::Client::builder()
@ -173,10 +133,6 @@ impl<'a> Client<'a> {
let resp = self.client.get(url).send().await?; let resp = self.client.get(url).send().await?;
if !ensure_version(&resp)? {
bail!("could not sync due to version mismatch");
}
if resp.status() != StatusCode::OK { if resp.status() != StatusCode::OK {
bail!("failed to get count (are you logged in?)"); bail!("failed to get count (are you logged in?)");
} }
@ -192,10 +148,6 @@ impl<'a> Client<'a> {
let resp = self.client.get(url).send().await?; let resp = self.client.get(url).send().await?;
if !ensure_version(&resp)? {
bail!("could not sync due to version mismatch");
}
if resp.status() != StatusCode::OK { if resp.status() != StatusCode::OK {
bail!("failed to get status (are you logged in?)"); bail!("failed to get status (are you logged in?)");
} }
@ -279,24 +231,22 @@ impl<'a> Client<'a> {
&self, &self,
host: HostId, host: HostId,
tag: String, tag: String,
start: Option<RecordId>, start: RecordIdx,
count: u64, count: u64,
) -> Result<Vec<Record<EncryptedData>>> { ) -> Result<Vec<Record<EncryptedData>>> {
let url = format!( debug!(
"{}/record/next?host={}&tag={}&count={}", "fetching record/s from host {}/{}/{}",
self.sync_addr, host.0, tag, count host.0.to_string(),
tag,
start
); );
let mut url = Url::parse(url.as_str())?;
if let Some(start) = start { let url = format!(
url.set_query(Some( "{}/record/next?host={}&tag={}&count={}&start={}",
format!( self.sync_addr, host.0, tag, count, start
"host={}&tag={}&count={}&start={}", );
host.0, tag, count, start.0
) let url = Url::parse(url.as_str())?;
.as_str(),
));
}
let resp = self.client.get(url).send().await?; let resp = self.client.get(url).send().await?;
@ -305,18 +255,15 @@ impl<'a> Client<'a> {
Ok(records) Ok(records)
} }
pub async fn record_index(&self) -> Result<RecordIndex> { pub async fn record_status(&self) -> Result<RecordStatus> {
let url = format!("{}/record", self.sync_addr); let url = format!("{}/record", self.sync_addr);
let url = Url::parse(url.as_str())?; let url = Url::parse(url.as_str())?;
let resp = self.client.get(url).send().await?; let resp = self.client.get(url).send().await?;
if !ensure_version(&resp)? {
bail!("could not sync records due to version mismatch");
}
let index = resp.json().await?; let index = resp.json().await?;
debug!("got remote index {:?}", index);
Ok(index) Ok(index)
} }

@ -1,12 +1,21 @@
use rmp::decode::ValueReadError;
use rmp::{decode::Bytes, Marker};
use std::env; use std::env;
use atuin_common::record::DecryptedData;
use atuin_common::utils::uuid_v7; use atuin_common::utils::uuid_v7;
use eyre::{bail, eyre, Result};
use regex::RegexSet; use regex::RegexSet;
use crate::{secrets::SECRET_PATTERNS, settings::Settings}; use crate::{secrets::SECRET_PATTERNS, settings::Settings};
use time::OffsetDateTime; use time::OffsetDateTime;
mod builder; mod builder;
pub mod store;
const HISTORY_VERSION: &str = "v0";
const _HISTORY_TAG: &str = "history";
/// Client-side history entry. /// Client-side history entry.
/// ///
@ -81,6 +90,108 @@ impl History {
} }
} }
pub fn serialize(&self) -> Result<DecryptedData> {
// This is pretty much the same as what we used for the old history, with one difference -
// it uses integers for timestamps rather than a string format.
use rmp::encode;
let mut output = vec![];
// write the version
encode::write_u16(&mut output, 0)?;
// INFO: ensure this is updated when adding new fields
encode::write_array_len(&mut output, 9)?;
encode::write_str(&mut output, &self.id)?;
encode::write_u64(&mut output, self.timestamp.unix_timestamp_nanos() as u64)?;
encode::write_sint(&mut output, self.duration)?;
encode::write_sint(&mut output, self.exit)?;
encode::write_str(&mut output, &self.command)?;
encode::write_str(&mut output, &self.cwd)?;
encode::write_str(&mut output, &self.session)?;
encode::write_str(&mut output, &self.hostname)?;
match self.deleted_at {
Some(d) => encode::write_u64(&mut output, d.unix_timestamp_nanos() as u64)?,
None => encode::write_nil(&mut output)?,
}
Ok(DecryptedData(output))
}
fn deserialize_v0(bytes: &[u8]) -> Result<History> {
use rmp::decode;
fn error_report<E: std::fmt::Debug>(err: E) -> eyre::Report {
eyre!("{err:?}")
}
let mut bytes = Bytes::new(bytes);
let version = decode::read_u16(&mut bytes).map_err(error_report)?;
if version != 0 {
bail!("expected decoding v0 record, found v{version}");
}
let nfields = decode::read_array_len(&mut bytes).map_err(error_report)?;
if nfields != 9 {
bail!("cannot decrypt history from a different version of Atuin");
}
let bytes = bytes.remaining_slice();
let (id, bytes) = decode::read_str_from_slice(bytes).map_err(error_report)?;
let mut bytes = Bytes::new(bytes);
let timestamp = decode::read_u64(&mut bytes).map_err(error_report)?;
let duration = decode::read_int(&mut bytes).map_err(error_report)?;
let exit = decode::read_int(&mut bytes).map_err(error_report)?;
let bytes = bytes.remaining_slice();
let (command, bytes) = decode::read_str_from_slice(bytes).map_err(error_report)?;
let (cwd, bytes) = decode::read_str_from_slice(bytes).map_err(error_report)?;
let (session, bytes) = decode::read_str_from_slice(bytes).map_err(error_report)?;
let (hostname, bytes) = decode::read_str_from_slice(bytes).map_err(error_report)?;
// if we have more fields, try and get the deleted_at
let mut bytes = Bytes::new(bytes);
let (deleted_at, bytes) = match decode::read_u64(&mut bytes) {
Ok(unix) => (Some(unix), bytes.remaining_slice()),
// we accept null here
Err(ValueReadError::TypeMismatch(Marker::Null)) => (None, bytes.remaining_slice()),
Err(err) => return Err(error_report(err)),
};
if !bytes.is_empty() {
bail!("trailing bytes in encoded history. malformed")
}
Ok(History {
id: id.to_owned(),
timestamp: OffsetDateTime::from_unix_timestamp_nanos(timestamp as i128)?,
duration,
exit,
command: command.to_owned(),
cwd: cwd.to_owned(),
session: session.to_owned(),
hostname: hostname.to_owned(),
deleted_at: deleted_at
.map(|t| OffsetDateTime::from_unix_timestamp_nanos(t as i128))
.transpose()?,
})
}
pub fn deserialize(bytes: &[u8], version: &str) -> Result<History> {
match version {
HISTORY_VERSION => Self::deserialize_v0(bytes),
_ => bail!("unknown version {version:?}"),
}
}
/// Builder for a history entry that is imported from shell history. /// Builder for a history entry that is imported from shell history.
/// ///
/// The only two required fields are `timestamp` and `command`. /// The only two required fields are `timestamp` and `command`.
@ -202,8 +313,9 @@ impl History {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use regex::RegexSet; use regex::RegexSet;
use time::macros::datetime;
use crate::settings::Settings; use crate::{history::HISTORY_VERSION, settings::Settings};
use super::History; use super::History;
@ -274,4 +386,100 @@ mod tests {
assert!(stripe_key.should_save(&settings)); assert!(stripe_key.should_save(&settings));
} }
#[test]
fn test_serialize_deserialize() {
let bytes = [
205, 0, 0, 153, 217, 32, 54, 54, 100, 49, 54, 99, 98, 101, 101, 55, 99, 100, 52, 55,
53, 51, 56, 101, 53, 99, 53, 98, 56, 98, 52, 52, 101, 57, 48, 48, 54, 101, 207, 23, 99,
98, 117, 24, 210, 246, 128, 206, 2, 238, 210, 240, 0, 170, 103, 105, 116, 32, 115, 116,
97, 116, 117, 115, 217, 42, 47, 85, 115, 101, 114, 115, 47, 99, 111, 110, 114, 97, 100,
46, 108, 117, 100, 103, 97, 116, 101, 47, 68, 111, 99, 117, 109, 101, 110, 116, 115,
47, 99, 111, 100, 101, 47, 97, 116, 117, 105, 110, 217, 32, 98, 57, 55, 100, 57, 97,
51, 48, 54, 102, 50, 55, 52, 52, 55, 51, 97, 50, 48, 51, 100, 50, 101, 98, 97, 52, 49,
102, 57, 52, 53, 55, 187, 102, 118, 102, 103, 57, 51, 54, 99, 48, 107, 112, 102, 58,
99, 111, 110, 114, 97, 100, 46, 108, 117, 100, 103, 97, 116, 101, 192,
];
let history = History {
id: "66d16cbee7cd47538e5c5b8b44e9006e".to_owned(),
timestamp: datetime!(2023-05-28 18:35:40.633872 +00:00),
duration: 49206000,
exit: 0,
command: "git status".to_owned(),
cwd: "/Users/conrad.ludgate/Documents/code/atuin".to_owned(),
session: "b97d9a306f274473a203d2eba41f9457".to_owned(),
hostname: "fvfg936c0kpf:conrad.ludgate".to_owned(),
deleted_at: None,
};
let serialized = history.serialize().expect("failed to serialize history");
assert_eq!(serialized.0, bytes);
let deserialized = History::deserialize(&serialized.0, HISTORY_VERSION)
.expect("failed to deserialize history");
assert_eq!(history, deserialized);
// test the snapshot too
let deserialized =
History::deserialize(&bytes, HISTORY_VERSION).expect("failed to deserialize history");
assert_eq!(history, deserialized);
}
#[test]
fn test_serialize_deserialize_deleted() {
let history = History {
id: "66d16cbee7cd47538e5c5b8b44e9006e".to_owned(),
timestamp: datetime!(2023-05-28 18:35:40.633872 +00:00),
duration: 49206000,
exit: 0,
command: "git status".to_owned(),
cwd: "/Users/conrad.ludgate/Documents/code/atuin".to_owned(),
session: "b97d9a306f274473a203d2eba41f9457".to_owned(),
hostname: "fvfg936c0kpf:conrad.ludgate".to_owned(),
deleted_at: Some(datetime!(2023-11-19 20:18 +00:00)),
};
let serialized = history.serialize().expect("failed to serialize history");
let deserialized = History::deserialize(&serialized.0, HISTORY_VERSION)
.expect("failed to deserialize history");
assert_eq!(history, deserialized);
}
#[test]
fn test_serialize_deserialize_version() {
// v0
let bytes_v0 = [
205, 0, 0, 153, 217, 32, 54, 54, 100, 49, 54, 99, 98, 101, 101, 55, 99, 100, 52, 55,
53, 51, 56, 101, 53, 99, 53, 98, 56, 98, 52, 52, 101, 57, 48, 48, 54, 101, 207, 23, 99,
98, 117, 24, 210, 246, 128, 206, 2, 238, 210, 240, 0, 170, 103, 105, 116, 32, 115, 116,
97, 116, 117, 115, 217, 42, 47, 85, 115, 101, 114, 115, 47, 99, 111, 110, 114, 97, 100,
46, 108, 117, 100, 103, 97, 116, 101, 47, 68, 111, 99, 117, 109, 101, 110, 116, 115,
47, 99, 111, 100, 101, 47, 97, 116, 117, 105, 110, 217, 32, 98, 57, 55, 100, 57, 97,
51, 48, 54, 102, 50, 55, 52, 52, 55, 51, 97, 50, 48, 51, 100, 50, 101, 98, 97, 52, 49,
102, 57, 52, 53, 55, 187, 102, 118, 102, 103, 57, 51, 54, 99, 48, 107, 112, 102, 58,
99, 111, 110, 114, 97, 100, 46, 108, 117, 100, 103, 97, 116, 101, 192,
];
// some other version
let bytes_v1 = [
205, 1, 0, 153, 217, 32, 54, 54, 100, 49, 54, 99, 98, 101, 101, 55, 99, 100, 52, 55,
53, 51, 56, 101, 53, 99, 53, 98, 56, 98, 52, 52, 101, 57, 48, 48, 54, 101, 207, 23, 99,
98, 117, 24, 210, 246, 128, 206, 2, 238, 210, 240, 0, 170, 103, 105, 116, 32, 115, 116,
97, 116, 117, 115, 217, 42, 47, 85, 115, 101, 114, 115, 47, 99, 111, 110, 114, 97, 100,
46, 108, 117, 100, 103, 97, 116, 101, 47, 68, 111, 99, 117, 109, 101, 110, 116, 115,
47, 99, 111, 100, 101, 47, 97, 116, 117, 105, 110, 217, 32, 98, 57, 55, 100, 57, 97,
51, 48, 54, 102, 50, 55, 52, 52, 55, 51, 97, 50, 48, 51, 100, 50, 101, 98, 97, 52, 49,
102, 57, 52, 53, 55, 187, 102, 118, 102, 103, 57, 51, 54, 99, 48, 107, 112, 102, 58,
99, 111, 110, 114, 97, 100, 46, 108, 117, 100, 103, 97, 116, 101, 192,
];
let deserialized = History::deserialize(&bytes_v0, HISTORY_VERSION);
assert!(deserialized.is_ok());
let deserialized = History::deserialize(&bytes_v1, HISTORY_VERSION);
assert!(deserialized.is_err());
}
} }

@ -0,0 +1,52 @@
use eyre::Result;
use crate::record::sqlite_store::SqliteStore;
use atuin_common::record::HostId;
use super::History;
#[derive(Debug)]
pub struct HistoryStore {
pub store: SqliteStore,
pub host_id: HostId,
pub encryption_key: [u8; 32],
}
impl HistoryStore {
pub fn new(store: SqliteStore, host_id: HostId, encryption_key: [u8; 32]) -> Self {
HistoryStore {
store,
host_id,
encryption_key,
}
}
pub async fn push(&self, _history: &History) -> Result<()> {
Ok(())
/*
* will continue this in another PR
*
*
let bytes = history.serialize()?;
let id = self
.store
.last(self.host_id, HISTORY_TAG)
.await?
.map_or(0, |p| p.idx + 1);
let record = Record::builder()
.host(Host::new(self.host_id))
.version(HISTORY_VERSION.to_string())
.tag(HISTORY_TAG.to_string())
.idx(id)
.data(bytes)
.build();
self.store
.push(&record.encrypt::<PASETO_V4>(&self.encryption_key))
.await?;
Ok(())
*/
}
}

@ -1,6 +1,6 @@
use std::collections::BTreeMap; use std::collections::BTreeMap;
use atuin_common::record::{DecryptedData, HostId}; use atuin_common::record::{DecryptedData, Host, HostId};
use eyre::{bail, ensure, eyre, Result}; use eyre::{bail, ensure, eyre, Result};
use serde::Deserialize; use serde::Deserialize;
@ -111,13 +111,16 @@ impl KvStore {
let bytes = record.serialize()?; let bytes = record.serialize()?;
let parent = store.tail(host_id, KV_TAG).await?.map(|entry| entry.id); let idx = store
.last(host_id, KV_TAG)
.await?
.map_or(0, |entry| entry.idx + 1);
let record = atuin_common::record::Record::builder() let record = atuin_common::record::Record::builder()
.host(host_id) .host(Host::new(host_id))
.version(KV_VERSION.to_string()) .version(KV_VERSION.to_string())
.tag(KV_TAG.to_string()) .tag(KV_TAG.to_string())
.parent(parent) .idx(idx)
.data(bytes) .data(bytes)
.build(); .build();
@ -137,43 +140,18 @@ impl KvStore {
namespace: &str, namespace: &str,
key: &str, key: &str,
) -> Result<Option<KvRecord>> { ) -> Result<Option<KvRecord>> {
// Currently, this is O(n). When we have an actual KV store, it can be better // TODO: don't rebuild every time...
// Just a poc for now! let map = self.build_kv(store, encryption_key).await?;
// iterate records to find the value we want let res = map.get(namespace);
// start at the end, so we get the most recent version
let tails = store.tag_tails(KV_TAG).await?;
if tails.is_empty() { if let Some(ns) = res {
return Ok(None); let value = ns.get(key);
Ok(value.cloned())
} else {
Ok(None)
} }
// first, decide on a record.
// try getting the newest first
// we always need a way of deciding the "winner" of a write
// TODO(ellie): something better than last-write-wins, what if two write at the same time?
let mut record = tails.iter().max_by_key(|r| r.timestamp).unwrap().clone();
loop {
let decrypted = match record.version.as_str() {
KV_VERSION => record.decrypt::<PASETO_V4>(encryption_key)?,
version => bail!("unknown version {version:?}"),
};
let kv = KvRecord::deserialize(&decrypted.data, &decrypted.version)?;
if kv.key == key && kv.namespace == namespace {
return Ok(Some(kv));
}
if let Some(parent) = decrypted.parent {
record = store.get(parent).await?;
} else {
break;
}
}
// if we get here, then... we didn't find the record with that key :(
Ok(None)
} }
// Build a kv map out of the linked list kv store // Build a kv map out of the linked list kv store
@ -184,32 +162,31 @@ impl KvStore {
&self, &self,
store: &impl Store, store: &impl Store,
encryption_key: &[u8; 32], encryption_key: &[u8; 32],
) -> Result<BTreeMap<String, BTreeMap<String, String>>> { ) -> Result<BTreeMap<String, BTreeMap<String, KvRecord>>> {
let mut map = BTreeMap::new(); let mut map = BTreeMap::new();
let tails = store.tag_tails(KV_TAG).await?;
if tails.is_empty() { // TODO: maybe don't load the entire tag into memory to build the kv
return Ok(map); // we can be smart about it and only load values since the last build
} // or, iterate/paginate
let tagged = store.all_tagged(KV_TAG).await?;
let mut record = tails.iter().max_by_key(|r| r.timestamp).unwrap().clone(); // iterate through all tags and play each KV record at a time
// this is "last write wins"
loop { // probably good enough for now, but revisit in future
for record in tagged {
let decrypted = match record.version.as_str() { let decrypted = match record.version.as_str() {
KV_VERSION => record.decrypt::<PASETO_V4>(encryption_key)?, KV_VERSION => record.decrypt::<PASETO_V4>(encryption_key)?,
version => bail!("unknown version {version:?}"), version => bail!("unknown version {version:?}"),
}; };
let kv = KvRecord::deserialize(&decrypted.data, &decrypted.version)?; let kv = KvRecord::deserialize(&decrypted.data, KV_VERSION)?;
println!("{:?}", kv);
let ns = map.entry(kv.namespace).or_insert_with(BTreeMap::new); let ns = map
ns.entry(kv.key).or_insert_with(|| kv.value); .entry(kv.namespace.clone())
.or_insert_with(BTreeMap::new);
if let Some(parent) = decrypted.parent { ns.insert(kv.key.clone(), kv);
record = store.get(parent).await?;
} else {
break;
}
} }
Ok(map) Ok(map)
@ -261,19 +238,27 @@ mod tests {
let map = kv.build_kv(&store, &key).await.unwrap(); let map = kv.build_kv(&store, &key).await.unwrap();
assert_eq!( assert_eq!(
map.get("test-kv") *map.get("test-kv")
.expect("map namespace not set") .expect("map namespace not set")
.get("foo") .get("foo")
.expect("map key not set"), .expect("map key not set"),
"bar" KvRecord {
namespace: String::from("test-kv"),
key: String::from("foo"),
value: String::from("bar")
}
); );
assert_eq!( assert_eq!(
map.get("test-kv") *map.get("test-kv")
.expect("map namespace not set") .expect("map namespace not set")
.get("1") .get("1")
.expect("map key not set"), .expect("map key not set"),
"2" KvRecord {
namespace: String::from("test-kv"),
key: String::from("1"),
value: String::from("2")
}
); );
} }
} }
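
The new `build_kv` replays every tagged record in `idx` order and lets later writes overwrite earlier ones, so the merge is just a fold into nested maps with last-write-wins semantics. A hedged sketch of that fold (record shape simplified to plaintext, names hypothetical):

```python
from typing import NamedTuple

class KvRecord(NamedTuple):
    namespace: str
    key: str
    value: str

def build_kv(records):
    # Records arrive already sorted by idx; replaying them in sequence
    # means the last write for a (namespace, key) pair wins.
    kv = {}
    for rec in records:
        kv.setdefault(rec.namespace, {})[rec.key] = rec
    return kv

records = [
    KvRecord("test-kv", "foo", "bar"),
    KvRecord("test-kv", "1", "2"),
    KvRecord("test-kv", "foo", "baz"),  # overwrites the first write to "foo"
]
kv = build_kv(records)
print(kv["test-kv"]["foo"].value)  # -> baz
```

As the TODOs in the diff note, rebuilding from scratch on every read is O(n) over the whole tag; an incremental build (resume from the last applied `idx`) is the obvious follow-up once the store API settles.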

@ -1,5 +1,5 @@
use atuin_common::record::{ use atuin_common::record::{
AdditionalData, DecryptedData, EncryptedData, Encryption, HostId, RecordId, AdditionalData, DecryptedData, EncryptedData, Encryption, HostId, RecordId, RecordIdx,
}; };
use base64::{engine::general_purpose, Engine}; use base64::{engine::general_purpose, Engine};
use eyre::{ensure, Context, Result}; use eyre::{ensure, Context, Result};
@@ -170,10 +170,10 @@ struct AtuinFooter {
 #[derive(Debug, Copy, Clone, Serialize)]
 struct Assertions<'a> {
     id: &'a RecordId,
+    idx: &'a RecordIdx,
     version: &'a str,
     tag: &'a str,
     host: &'a HostId,
-    parent: Option<&'a RecordId>,
 }

 impl<'a> From<AdditionalData<'a>> for Assertions<'a> {
@@ -183,7 +183,7 @@ impl<'a> From<AdditionalData<'a>> for Assertions<'a> {
             version: ad.version,
             tag: ad.tag,
             host: ad.host,
-            parent: ad.parent,
+            idx: ad.idx,
         }
     }
 }
@@ -196,7 +196,10 @@ impl Assertions<'_> {
 #[cfg(test)]
 mod tests {
-    use atuin_common::{record::Record, utils::uuid_v7};
+    use atuin_common::{
+        record::{Host, Record},
+        utils::uuid_v7,
+    };

     use super::*;

@@ -209,7 +212,7 @@ mod tests {
             version: "v0",
             tag: "kv",
             host: &HostId(uuid_v7()),
-            parent: None,
+            idx: &0,
         };
         let data = DecryptedData(vec![1, 2, 3, 4]);

@@ -228,7 +231,7 @@ mod tests {
             version: "v0",
             tag: "kv",
             host: &HostId(uuid_v7()),
-            parent: None,
+            idx: &0,
         };
         let data = DecryptedData(vec![1, 2, 3, 4]);

@@ -252,7 +255,7 @@ mod tests {
             version: "v0",
             tag: "kv",
             host: &HostId(uuid_v7()),
-            parent: None,
+            idx: &0,
         };
         let data = DecryptedData(vec![1, 2, 3, 4]);

@@ -270,7 +273,7 @@ mod tests {
             version: "v0",
             tag: "kv",
             host: &HostId(uuid_v7()),
-            parent: None,
+            idx: &0,
         };
         let data = DecryptedData(vec![1, 2, 3, 4]);

@@ -294,7 +297,7 @@ mod tests {
             version: "v0",
             tag: "kv",
             host: &HostId(uuid_v7()),
-            parent: None,
+            idx: &0,
         };
         let data = DecryptedData(vec![1, 2, 3, 4]);

@@ -323,9 +326,10 @@ mod tests {
             .id(RecordId(uuid_v7()))
             .version("v0".to_owned())
             .tag("kv".to_owned())
-            .host(HostId(uuid_v7()))
+            .host(Host::new(HostId(uuid_v7())))
             .timestamp(1687244806000000)
             .data(DecryptedData(vec![1, 2, 3, 4]))
+            .idx(0)
             .build();

         let encrypted = record.encrypt::<PASETO_V4>(&key);

@@ -345,15 +349,16 @@ mod tests {
             .id(RecordId(uuid_v7()))
             .version("v0".to_owned())
             .tag("kv".to_owned())
-            .host(HostId(uuid_v7()))
+            .host(Host::new(HostId(uuid_v7())))
             .timestamp(1687244806000000)
             .data(DecryptedData(vec![1, 2, 3, 4]))
+            .idx(0)
             .build();

         let encrypted = record.encrypt::<PASETO_V4>(&key);

         let mut enc1 = encrypted.clone();
-        enc1.host = HostId(uuid_v7());
+        enc1.host = Host::new(HostId(uuid_v7()));
         let _ = enc1
             .decrypt::<PASETO_V4>(&key)
             .expect_err("tampering with the host should result in auth failure");


@@ -8,17 +8,20 @@ use std::str::FromStr;
 use async_trait::async_trait;
 use eyre::{eyre, Result};
 use fs_err as fs;
-use futures::TryStreamExt;
 use sqlx::{
     sqlite::{SqliteConnectOptions, SqliteJournalMode, SqlitePool, SqlitePoolOptions, SqliteRow},
     Row,
 };

-use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIndex};
+use atuin_common::record::{
+    EncryptedData, Host, HostId, Record, RecordId, RecordIdx, RecordStatus,
+};
 use uuid::Uuid;

 use super::store::Store;

+#[derive(Debug)]
 pub struct SqliteStore {
     pool: SqlitePool,
 }

@@ -38,6 +41,7 @@ impl SqliteStore {
         let opts = SqliteConnectOptions::from_str(path.as_os_str().to_str().unwrap())?
             .journal_mode(SqliteJournalMode::Wal)
+            .foreign_keys(true)
             .create_if_missing(true);

         let pool = SqlitePoolOptions::new().connect_with(opts).await?;
@@ -61,14 +65,14 @@ impl SqliteStore {
     ) -> Result<()> {
         // In sqlite, we are "limited" to i64. But that is still fine, until 2262.
         sqlx::query(
-            "insert or ignore into records(id, host, tag, timestamp, parent, version, data, cek)
+            "insert or ignore into store(id, idx, host, tag, timestamp, version, data, cek)
                 values(?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)",
         )
-        .bind(r.id.0.as_simple().to_string())
-        .bind(r.host.0.as_simple().to_string())
+        .bind(r.id.0.as_hyphenated().to_string())
+        .bind(r.idx as i64)
+        .bind(r.host.id.0.as_hyphenated().to_string())
         .bind(r.tag.as_str())
         .bind(r.timestamp as i64)
-        .bind(r.parent.map(|p| p.0.as_simple().to_string()))
         .bind(r.version.as_str())
         .bind(r.data.data.as_str())
         .bind(r.data.content_encryption_key.as_str())

@@ -79,20 +83,17 @@ impl SqliteStore {
     }

     fn query_row(row: SqliteRow) -> Record<EncryptedData> {
+        let idx: i64 = row.get("idx");
         let timestamp: i64 = row.get("timestamp");

         // tbh at this point things are pretty fucked so just panic
         let id = Uuid::from_str(row.get("id")).expect("invalid id UUID format in sqlite DB");
         let host = Uuid::from_str(row.get("host")).expect("invalid host UUID format in sqlite DB");
-        let parent: Option<&str> = row.get("parent");
-        let parent = parent
-            .map(|parent| Uuid::from_str(parent).expect("invalid parent UUID format in sqlite DB"));

         Record {
             id: RecordId(id),
-            host: HostId(host),
-            parent: parent.map(RecordId),
+            idx: idx as u64,
+            host: Host::new(HostId(host)),
             timestamp: timestamp as u64,
             tag: row.get("tag"),
             version: row.get("version"),
@@ -122,8 +123,8 @@ impl Store for SqliteStore {
     }

     async fn get(&self, id: RecordId) -> Result<Record<EncryptedData>> {
-        let res = sqlx::query("select * from records where id = ?1")
-            .bind(id.0.as_simple().to_string())
+        let res = sqlx::query("select * from store where store.id = ?1")
+            .bind(id.0.as_hyphenated().to_string())
             .map(Self::query_row)
             .fetch_one(&self.pool)
             .await?;

@@ -131,20 +132,66 @@ impl Store for SqliteStore {
         Ok(res)
     }

-    async fn len(&self, host: HostId, tag: &str) -> Result<u64> {
-        let res: (i64,) =
-            sqlx::query_as("select count(1) from records where host = ?1 and tag = ?2")
-                .bind(host.0.as_simple().to_string())
-                .bind(tag)
-                .fetch_one(&self.pool)
-                .await?;
-
-        Ok(res.0 as u64)
-    }
-
-    async fn next(&self, record: &Record<EncryptedData>) -> Result<Option<Record<EncryptedData>>> {
-        let res = sqlx::query("select * from records where parent = ?1")
-            .bind(record.id.0.as_simple().to_string())
-            .map(Self::query_row)
-            .fetch_one(&self.pool)
-            .await;
+    async fn last(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>> {
+        let res =
+            sqlx::query("select * from store where host=?1 and tag=?2 order by idx desc limit 1")
+                .bind(host.0.as_hyphenated().to_string())
+                .bind(tag)
+                .map(Self::query_row)
+                .fetch_one(&self.pool)
+                .await;
+
+        match res {
+            Err(sqlx::Error::RowNotFound) => Ok(None),
+            Err(e) => Err(eyre!("an error occured: {}", e)),
+            Ok(record) => Ok(Some(record)),
+        }
+    }
+
+    async fn first(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>> {
+        self.idx(host, tag, 0).await
+    }
+
+    async fn len(&self, host: HostId, tag: &str) -> Result<Option<u64>> {
+        let last = self.last(host, tag).await?;
+
+        if let Some(last) = last {
+            return Ok(Some(last.idx + 1));
+        }
+
+        return Ok(None);
+    }
+
+    async fn next(
+        &self,
+        host: HostId,
+        tag: &str,
+        idx: RecordIdx,
+        limit: u64,
+    ) -> Result<Vec<Record<EncryptedData>>> {
+        let res =
+            sqlx::query("select * from store where idx >= ?1 and host = ?2 and tag = ?3 limit ?4")
+                .bind(idx as i64)
+                .bind(host.0.as_hyphenated().to_string())
+                .bind(tag)
+                .bind(limit as i64)
+                .map(Self::query_row)
+                .fetch_all(&self.pool)
+                .await?;
+
+        Ok(res)
+    }
+
+    async fn idx(
+        &self,
+        host: HostId,
+        tag: &str,
+        idx: RecordIdx,
+    ) -> Result<Option<Record<EncryptedData>>> {
+        let res = sqlx::query("select * from store where idx = ?1 and host = ?2 and tag = ?3")
+            .bind(idx as i64)
+            .bind(host.0.as_hyphenated().to_string())
+            .bind(tag)
+            .map(Self::query_row)
+            .fetch_one(&self.pool)
+            .await;
@@ -156,58 +203,36 @@ impl Store for SqliteStore {
         }
     }

-    async fn head(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>> {
-        let res = sqlx::query(
-            "select * from records where host = ?1 and tag = ?2 and parent is null limit 1",
-        )
-        .bind(host.0.as_simple().to_string())
-        .bind(tag)
-        .map(Self::query_row)
-        .fetch_optional(&self.pool)
-        .await?;
-
-        Ok(res)
-    }
-
-    async fn tail(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>> {
-        let res = sqlx::query(
-            "select * from records rp where tag=?1 and host=?2 and (select count(1) from records where parent=rp.id) = 0;",
-        )
-        .bind(tag)
-        .bind(host.0.as_simple().to_string())
-        .map(Self::query_row)
-        .fetch_optional(&self.pool)
-        .await?;
-
-        Ok(res)
-    }
-
-    async fn tag_tails(&self, tag: &str) -> Result<Vec<Record<EncryptedData>>> {
-        let res = sqlx::query(
-            "select * from records rp where tag=?1 and (select count(1) from records where parent=rp.id) = 0;",
-        )
-        .bind(tag)
-        .map(Self::query_row)
-        .fetch_all(&self.pool)
-        .await?;
-
-        Ok(res)
-    }
-
-    async fn tail_records(&self) -> Result<RecordIndex> {
-        let res = sqlx::query(
-            "select host, tag, id from records rp where (select count(1) from records where parent=rp.id) = 0;",
-        )
-        .map(|row: SqliteRow| {
-            let host: Uuid = Uuid::from_str(row.get("host")).expect("invalid uuid in db host");
-            let tag: String = row.get("tag");
-            let id: Uuid = Uuid::from_str(row.get("id")).expect("invalid uuid in db id");
-
-            (HostId(host), tag, RecordId(id))
-        })
-        .fetch(&self.pool)
-        .try_collect()
-        .await?;
+    async fn status(&self) -> Result<RecordStatus> {
+        let mut status = RecordStatus::new();
+
+        let res: Result<Vec<(String, String, i64)>, sqlx::Error> =
+            sqlx::query_as("select host, tag, max(idx) from store group by host, tag")
+                .fetch_all(&self.pool)
+                .await;
+
+        let res = match res {
+            Err(e) => return Err(eyre!("failed to fetch local store status: {}", e)),
+            Ok(v) => v,
+        };
+
+        for i in res {
+            let host = HostId(
+                Uuid::from_str(i.0.as_str()).expect("failed to parse uuid for local store status"),
+            );
+            status.set_raw(host, i.1, i.2 as u64);
+        }
+
+        Ok(status)
+    }
+
+    async fn all_tagged(&self, tag: &str) -> Result<Vec<Record<EncryptedData>>> {
+        let res = sqlx::query("select * from store where tag = ?1 order by timestamp asc")
+            .bind(tag)
+            .map(Self::query_row)
+            .fetch_all(&self.pool)
+            .await?;

         Ok(res)
     }
@@ -215,7 +240,7 @@ impl Store for SqliteStore {
 #[cfg(test)]
 mod tests {
-    use atuin_common::record::{EncryptedData, HostId, Record};
+    use atuin_common::record::{EncryptedData, Host, HostId, Record};

     use crate::record::{encryption::PASETO_V4, store::Store};

@@ -223,13 +248,14 @@ mod tests {
     fn test_record() -> Record<EncryptedData> {
         Record::builder()
-            .host(HostId(atuin_common::utils::uuid_v7()))
+            .host(Host::new(HostId(atuin_common::utils::uuid_v7())))
             .version("v1".into())
             .tag(atuin_common::utils::uuid_v7().simple().to_string())
             .data(EncryptedData {
                 data: "1234".into(),
                 content_encryption_key: "1234".into(),
             })
+            .idx(0)
             .build()
     }
@@ -263,6 +289,42 @@ mod tests {
         assert_eq!(record, new_record, "records are not equal");
     }

+    #[tokio::test]
+    async fn last() {
+        let db = SqliteStore::new(":memory:").await.unwrap();
+        let record = test_record();
+        db.push(&record).await.unwrap();
+
+        let last = db
+            .last(record.host.id, record.tag.as_str())
+            .await
+            .expect("failed to get last record");
+
+        assert_eq!(
+            last.unwrap().id,
+            record.id,
+            "expected to get back the same record that was inserted"
+        );
+    }
+
+    #[tokio::test]
+    async fn first() {
+        let db = SqliteStore::new(":memory:").await.unwrap();
+        let record = test_record();
+        db.push(&record).await.unwrap();
+
+        let first = db
+            .first(record.host.id, record.tag.as_str())
+            .await
+            .expect("failed to get first record");
+
+        assert_eq!(
+            first.unwrap().id,
+            record.id,
+            "expected to get back the same record that was inserted"
+        );
+    }
+
     #[tokio::test]
     async fn len() {
         let db = SqliteStore::new(":memory:").await.unwrap();
@@ -270,11 +332,11 @@ mod tests {
         db.push(&record).await.unwrap();

         let len = db
-            .len(record.host, record.tag.as_str())
+            .len(record.host.id, record.tag.as_str())
             .await
             .expect("failed to get store len");

-        assert_eq!(len, 1, "expected length of 1 after insert");
+        assert_eq!(len, Some(1), "expected length of 1 after insert");
     }

     #[tokio::test]
@@ -290,11 +352,11 @@ mod tests {
         db.push(&first).await.unwrap();
         db.push(&second).await.unwrap();

-        let first_len = db.len(first.host, first.tag.as_str()).await.unwrap();
-        let second_len = db.len(second.host, second.tag.as_str()).await.unwrap();
+        let first_len = db.len(first.host.id, first.tag.as_str()).await.unwrap();
+        let second_len = db.len(second.host.id, second.tag.as_str()).await.unwrap();

-        assert_eq!(first_len, 1, "expected length of 1 after insert");
-        assert_eq!(second_len, 1, "expected length of 1 after insert");
+        assert_eq!(first_len, Some(1), "expected length of 1 after insert");
+        assert_eq!(second_len, Some(1), "expected length of 1 after insert");
     }

     #[tokio::test]
@@ -305,15 +367,13 @@ mod tests {
         db.push(&tail).await.expect("failed to push record");

         for _ in 1..100 {
-            tail = tail
-                .new_child(vec![1, 2, 3, 4])
-                .encrypt::<PASETO_V4>(&[0; 32]);
+            tail = tail.append(vec![1, 2, 3, 4]).encrypt::<PASETO_V4>(&[0; 32]);
             db.push(&tail).await.unwrap();
         }

         assert_eq!(
-            db.len(tail.host, tail.tag.as_str()).await.unwrap(),
-            100,
+            db.len(tail.host.id, tail.tag.as_str()).await.unwrap(),
+            Some(100),
             "failed to insert 100 records"
         );
     }

@@ -328,50 +388,16 @@ mod tests {
         records.push(tail.clone());

         for _ in 1..10000 {
-            tail = tail.new_child(vec![1, 2, 3]).encrypt::<PASETO_V4>(&[0; 32]);
+            tail = tail.append(vec![1, 2, 3]).encrypt::<PASETO_V4>(&[0; 32]);
             records.push(tail.clone());
         }

         db.push_batch(records.iter()).await.unwrap();

         assert_eq!(
-            db.len(tail.host, tail.tag.as_str()).await.unwrap(),
-            10000,
+            db.len(tail.host.id, tail.tag.as_str()).await.unwrap(),
+            Some(10000),
             "failed to insert 10k records"
         );
     }
-
-    #[tokio::test]
-    async fn test_chain() {
-        let db = SqliteStore::new(":memory:").await.unwrap();
-
-        let mut records: Vec<Record<EncryptedData>> = Vec::with_capacity(1000);
-
-        let mut tail = test_record();
-        records.push(tail.clone());
-
-        for _ in 1..1000 {
-            tail = tail.new_child(vec![1, 2, 3]).encrypt::<PASETO_V4>(&[0; 32]);
-            records.push(tail.clone());
-        }
-
-        db.push_batch(records.iter()).await.unwrap();
-
-        let mut record = db
-            .head(tail.host, tail.tag.as_str())
-            .await
-            .expect("in memory sqlite should not fail")
-            .expect("entry exists");
-
-        let mut count = 1;
-        while let Some(next) = db.next(&record).await.unwrap() {
-            assert_eq!(record.id, next.clone().parent.unwrap());
-            record = next;
-            count += 1;
-        }
-
-        assert_eq!(count, 1000);
-    }
 }


@@ -1,7 +1,7 @@
 use async_trait::async_trait;
 use eyre::Result;

-use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIndex};
+use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIdx, RecordStatus};

 /// A record store stores records
 /// In more detail - we tend to need to process this into _another_ format to actually query it.
@@ -21,21 +21,32 @@ pub trait Store {
     ) -> Result<()>;

     async fn get(&self, id: RecordId) -> Result<Record<EncryptedData>>;
-    async fn len(&self, host: HostId, tag: &str) -> Result<u64>;
+    async fn len(&self, host: HostId, tag: &str) -> Result<Option<u64>>;
+
+    async fn last(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>>;
+    async fn first(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>>;

     /// Get the record that follows this record
-    async fn next(&self, record: &Record<EncryptedData>) -> Result<Option<Record<EncryptedData>>>;
+    async fn next(
+        &self,
+        host: HostId,
+        tag: &str,
+        idx: RecordIdx,
+        limit: u64,
+    ) -> Result<Vec<Record<EncryptedData>>>;

     /// Get the first record for a given host and tag
-    async fn head(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>>;
+    async fn idx(
+        &self,
+        host: HostId,
+        tag: &str,
+        idx: RecordIdx,
+    ) -> Result<Option<Record<EncryptedData>>>;

-    /// Get the last record for a given host and tag
-    async fn tail(&self, host: HostId, tag: &str) -> Result<Option<Record<EncryptedData>>>;
+    async fn status(&self) -> Result<RecordStatus>;

-    // Get the last record for all hosts for a given tag, useful for the read path of apps.
-    async fn tag_tails(&self, tag: &str) -> Result<Vec<Record<EncryptedData>>>;
-
-    // Get the latest host/tag/record tuple for every set in the store. useful for building an
-    // index
-    async fn tail_records(&self) -> Result<RecordIndex>;
+    /// Get every start record for a given tag, regardless of host.
+    /// Useful when actually operating on synchronized data, and will often have conflict
+    /// resolution applied.
+    async fn all_tagged(&self, tag: &str) -> Result<Vec<Record<EncryptedData>>>;
 }


@@ -1,27 +1,54 @@
 // do a sync :O
+use std::cmp::Ordering;
+
 use eyre::Result;
+use thiserror::Error;

 use super::store::Store;
 use crate::{api_client::Client, settings::Settings};

-use atuin_common::record::{Diff, HostId, RecordId, RecordIndex};
+use atuin_common::record::{Diff, HostId, RecordIdx, RecordStatus};
+
+#[derive(Error, Debug)]
+pub enum SyncError {
+    #[error("the local store is ahead of the remote, but for another host. has remote lost data?")]
+    LocalAheadOtherHost,
+
+    #[error("an issue with the local database occured")]
+    LocalStoreError,
+
+    #[error("something has gone wrong with the sync logic: {msg:?}")]
+    SyncLogicError { msg: String },
+
+    #[error("a request to the sync server failed")]
+    RemoteRequestError,
+}

 #[derive(Debug, Eq, PartialEq)]
 pub enum Operation {
-    // Either upload or download until the tail matches the below
+    // Either upload or download until the state matches the below
     Upload {
-        tail: RecordId,
+        local: RecordIdx,
+        remote: Option<RecordIdx>,
         host: HostId,
         tag: String,
     },
     Download {
-        tail: RecordId,
+        local: Option<RecordIdx>,
+        remote: RecordIdx,
+        host: HostId,
+        tag: String,
+    },
+    Noop {
         host: HostId,
         tag: String,
     },
 }
-pub async fn diff(settings: &Settings, store: &mut impl Store) -> Result<(Vec<Diff>, RecordIndex)> {
+pub async fn diff(
+    settings: &Settings,
+    store: &mut impl Store,
+) -> Result<(Vec<Diff>, RecordStatus)> {
     let client = Client::new(
         &settings.sync_address,
         &settings.session_token,
@@ -29,8 +56,8 @@ pub async fn diff(settings: &Settings, store: &mut impl Store) -> Result<(Vec<Di
         settings.network_timeout,
     )?;

-    let local_index = store.tail_records().await?;
-    let remote_index = client.record_index().await?;
+    let local_index = store.status().await?;
+    let remote_index = client.record_status().await?;

     let diff = local_index.diff(&remote_index);
@@ -41,39 +68,58 @@ pub async fn diff(
 // With the store as context, we can determine if a tail exists locally or not and therefore if it needs uploading or download.
 // In theory this could be done as a part of the diffing stage, but it's easier to reason
 // about and test this way
-pub async fn operations(diffs: Vec<Diff>, store: &impl Store) -> Result<Vec<Operation>> {
+pub async fn operations(
+    diffs: Vec<Diff>,
+    _store: &impl Store,
+) -> Result<Vec<Operation>, SyncError> {
     let mut operations = Vec::with_capacity(diffs.len());
+    let _host = Settings::host_id().expect("got to record sync without a host id; abort");

     for diff in diffs {
-        // First, try to fetch the tail
-        // If it exists locally, then that means we need to update the remote
-        // host until it has the same tail. Ie, upload.
-        // If it does not exist locally, that means remote is ahead of us.
-        // Therefore, we need to download until our local tail matches
-        let record = store.get(diff.tail).await;
-
-        let op = if record.is_ok() {
-            // if local has the ID, then we should find the actual tail of this
-            // store, so we know what we need to update the remote to.
-            let tail = store
-                .tail(diff.host, diff.tag.as_str())
-                .await?
-                .expect("failed to fetch last record, expected tag/host to exist");
-
-            // TODO(ellie) update the diffing so that it stores the context of the current tail
-            // that way, we can determine how much we need to upload.
-            // For now just keep uploading until tails match
-            Operation::Upload {
-                tail: tail.id,
-                host: diff.host,
-                tag: diff.tag,
-            }
-        } else {
-            Operation::Download {
-                tail: diff.tail,
-                host: diff.host,
-                tag: diff.tag,
-            }
-        };
+        let op = match (diff.local, diff.remote) {
+            // We both have it! Could be either. Compare.
+            (Some(local), Some(remote)) => match local.cmp(&remote) {
+                Ordering::Equal => Operation::Noop {
+                    host: diff.host,
+                    tag: diff.tag,
+                },
+                Ordering::Greater => Operation::Upload {
+                    local,
+                    remote: Some(remote),
+                    host: diff.host,
+                    tag: diff.tag,
+                },
+                Ordering::Less => Operation::Download {
+                    local: Some(local),
+                    remote,
+                    host: diff.host,
+                    tag: diff.tag,
+                },
+            },
+
+            // Remote has it, we don't. Gotta be download
+            (None, Some(remote)) => Operation::Download {
+                local: None,
+                remote,
+                host: diff.host,
+                tag: diff.tag,
+            },
+
+            // We have it, remote doesn't. Gotta be upload.
+            (Some(local), None) => Operation::Upload {
+                local,
+                remote: None,
+                host: diff.host,
+                tag: diff.tag,
+            },
+
+            // something is pretty fucked.
+            (None, None) => {
+                return Err(SyncError::SyncLogicError {
+                    msg: String::from(
+                        "diff has nothing for local or remote - (host, tag) does not exist",
+                    ),
+                })
+            }
+        };

@@ -86,8 +132,11 @@ pub async fn operations(
     // with the same properties
     operations.sort_by_key(|op| match op {
-        Operation::Upload { tail, host, .. } => ("upload", *host, *tail),
-        Operation::Download { tail, host, .. } => ("download", *host, *tail),
+        Operation::Noop { host, tag } => (0, *host, tag.clone()),
+        Operation::Upload { host, tag, .. } => (1, *host, tag.clone()),
+        Operation::Download { host, tag, .. } => (2, *host, tag.clone()),
     });

     Ok(operations)
@@ -95,140 +144,105 @@ pub async fn operations(
 async fn sync_upload(
     store: &mut impl Store,
-    remote_index: &RecordIndex,
     client: &Client<'_>,
-    op: (HostId, String, RecordId),
-) -> Result<i64> {
+    host: HostId,
+    tag: String,
+    local: RecordIdx,
+    remote: Option<RecordIdx>,
+) -> Result<i64, SyncError> {
+    let remote = remote.unwrap_or(0);
+    let expected = local - remote;
     let upload_page_size = 100;
-    let mut total = 0;
-
-    // so. we have an upload operation, with the tail representing the state
-    // we want to get the remote to
-    let current_tail = remote_index.get(op.0, op.1.clone());
+    let mut progress = 0;

     println!(
-        "Syncing local {:?}/{}/{:?}, remote has {:?}",
-        op.0, op.1, op.2, current_tail
+        "Uploading {} records to {}/{}",
+        expected,
+        host.0.as_simple(),
+        tag
     );

-    let start = if let Some(current_tail) = current_tail {
-        current_tail
-    } else {
-        store
-            .head(op.0, op.1.as_str())
-            .await
-            .expect("failed to fetch host/tag head")
-            .expect("host/tag not in current index")
-            .id
-    };
-
-    debug!("starting push to remote from: {:?}", start);
-
-    // we have the start point for sync. it is either the head of the store if
-    // the remote has no data for it, or the tail that the remote has
-    // we need to iterate from the remote tail, and keep going until
-    // remote tail = current local tail
-
-    let mut record = if current_tail.is_some() {
-        let r = store.get(start).await.unwrap();
-        store.next(&r).await?
-    } else {
-        Some(store.get(start).await.unwrap())
-    };
-
-    let mut buf = Vec::with_capacity(upload_page_size);
-
-    while let Some(r) = record {
-        if buf.len() < upload_page_size {
-            buf.push(r.clone());
-        } else {
-            client.post_records(&buf).await?;
-
-            // can we reset what we have? len = 0 but keep capacity
-            buf = Vec::with_capacity(upload_page_size);
-        }
-        record = store.next(&r).await?;
-
-        total += 1;
-    }
-
-    if !buf.is_empty() {
-        client.post_records(&buf).await?;
-    }
-
-    Ok(total)
+    // preload with the first entry if remote does not know of this store
+    while progress < expected {
+        let page = store
+            .next(host, tag.as_str(), remote + progress, upload_page_size)
+            .await
+            .map_err(|_| SyncError::LocalStoreError)?;
+
+        client
+            .post_records(&page)
+            .await
+            .map_err(|_| SyncError::RemoteRequestError)?;
+
+        println!(
+            "uploaded {} to remote, progress {}/{}",
+            page.len(),
+            progress,
+            expected
+        );
+        progress += page.len() as u64;
+    }
+
+    Ok(progress as i64)
 }
 async fn sync_download(
     store: &mut impl Store,
-    remote_index: &RecordIndex,
     client: &Client<'_>,
-    op: (HostId, String, RecordId),
-) -> Result<i64> {
-    // TODO(ellie): implement variable page sizing like on history sync
-    let download_page_size = 1000;
-
-    let mut total = 0;
-
-    // We know that the remote is ahead of us, so let's keep downloading until both
-    // 1) The remote stops returning full pages
-    // 2) The tail equals what we expect
-    //
-    // If (1) occurs without (2), then something is wrong with our index calculation
-    // and we should bail.
-    let remote_tail = remote_index
-        .get(op.0, op.1.clone())
-        .expect("remote index does not contain expected tail during download");
-    let local_tail = store.tail(op.0, op.1.as_str()).await?;
-    //
-    // We expect that the operations diff will represent the desired state
-    // In this case, that contains the remote tail.
-    assert_eq!(remote_tail, op.2);
-
-    println!("Downloading {:?}/{}/{:?} to local", op.0, op.1, op.2);
-
-    let mut records = client
-        .next_records(
-            op.0,
-            op.1.clone(),
-            local_tail.map(|r| r.id),
-            download_page_size,
-        )
-        .await?;
-
-    while !records.is_empty() {
-        total += std::cmp::min(download_page_size, records.len() as u64);
-        store.push_batch(records.iter()).await?;
-
-        if records.last().unwrap().id == remote_tail {
-            break;
-        }
-
-        records = client
-            .next_records(
-                op.0,
-                op.1.clone(),
-                records.last().map(|r| r.id),
-                download_page_size,
-            )
-            .await?;
-    }
-
-    Ok(total as i64)
+    host: HostId,
+    tag: String,
+    local: Option<RecordIdx>,
+    remote: RecordIdx,
+) -> Result<i64, SyncError> {
+    let local = local.unwrap_or(0);
+    let expected = remote - local;
+    let download_page_size = 100;
+    let mut progress = 0;
+
+    println!(
+        "Downloading {} records from {}/{}",
+        expected,
+        host.0.as_simple(),
+        tag
+    );
+
+    // preload with the first entry if remote does not know of this store
+    while progress < expected {
+        let page = client
+            .next_records(host, tag.clone(), local + progress, download_page_size)
+            .await
+            .map_err(|_| SyncError::RemoteRequestError)?;
+
+        store
+            .push_batch(page.iter())
+            .await
+            .map_err(|_| SyncError::LocalStoreError)?;
+
+        println!(
+            "downloaded {} records from remote, progress {}/{}",
+            page.len(),
+            progress,
+            expected
+        );
+
+        progress += page.len() as u64;
+    }
+
+    Ok(progress as i64)
 }
 pub async fn sync_remote(
     operations: Vec<Operation>,
-    remote_index: &RecordIndex,
     local_store: &mut impl Store,
     settings: &Settings,
-) -> Result<(i64, i64)> {
+) -> Result<(i64, i64), SyncError> {
     let client = Client::new(
         &settings.sync_address,
         &settings.session_token,
         settings.network_connect_timeout,
         settings.network_timeout,
-    )?;
+    )
+    .expect("failed to create client");

     let mut uploaded = 0;
     let mut downloaded = 0;

@@ -236,14 +250,23 @@ pub async fn sync_remote(
     // this can totally run in parallel, but lets get it working first
     for i in operations {
         match i {
-            Operation::Upload { tail, host, tag } => {
-                uploaded +=
-                    sync_upload(local_store, remote_index, &client, (host, tag, tail)).await?
-            }
-            Operation::Download { tail, host, tag } => {
-                downloaded +=
-                    sync_download(local_store, remote_index, &client, (host, tag, tail)).await?
-            }
+            Operation::Upload {
+                host,
+                tag,
+                local,
+                remote,
+            } => uploaded += sync_upload(local_store, &client, host, tag, local, remote).await?,
+            Operation::Download {
+                host,
+                tag,
+                local,
+                remote,
+            } => {
+                downloaded += sync_download(local_store, &client, host, tag, local, remote).await?
+            }
+            Operation::Noop { .. } => continue,
         }
     }
@@ -264,13 +287,16 @@ mod tests {
     fn test_record() -> Record<EncryptedData> {
         Record::builder()
-            .host(HostId(atuin_common::utils::uuid_v7()))
+            .host(atuin_common::record::Host::new(HostId(
+                atuin_common::utils::uuid_v7(),
+            )))
             .version("v1".into())
             .tag(atuin_common::utils::uuid_v7().simple().to_string())
             .data(EncryptedData {
                 data: String::new(),
                 content_encryption_key: String::new(),
             })
+            .idx(0)
             .build()
     }

@@ -296,8 +322,8 @@ mod tests {
             remote_store.push(&i).await.unwrap();
         }

-        let local_index = local_store.tail_records().await.unwrap();
-        let remote_index = remote_store.tail_records().await.unwrap();
+        let local_index = local_store.status().await.unwrap();
+        let remote_index = remote_store.status().await.unwrap();

         let diff = local_index.diff(&remote_index);

@@ -320,9 +346,10 @@ mod tests {
         assert_eq!(
             operations[0],
             Operation::Upload {
-                host: record.host,
+                host: record.host.id,
                 tag: record.tag,
-                tail: record.id
+                local: record.idx,
+                remote: None,
             }
         );
     }
@@ -333,12 +360,14 @@ mod tests {
     // another. One upload, one download
     let shared_record = test_record();
     let remote_ahead = test_record();
+
     let local_ahead = shared_record
-        .new_child(vec![1, 2, 3])
+        .append(vec![1, 2, 3])
         .encrypt::<PASETO_V4>(&[0; 32]);
+    assert_eq!(local_ahead.idx, 1);

     let local = vec![shared_record.clone(), local_ahead.clone()]; // local knows about the already synced, and something newer in the same store
     let remote = vec![shared_record.clone(), remote_ahead.clone()]; // remote knows about the already-synced, and one new record in a new store
@@ -350,15 +379,19 @@ mod tests {
     assert_eq!(
         operations,
         vec![
-            Operation::Download {
-                tail: remote_ahead.id,
-                host: remote_ahead.host,
-                tag: remote_ahead.tag,
-            },
+            // Or in other words, local is ahead by one
             Operation::Upload {
-                tail: local_ahead.id,
-                host: local_ahead.host,
+                host: local_ahead.host.id,
                 tag: local_ahead.tag,
+                local: 1,
+                remote: Some(0),
+            },
+            // Or in other words, remote knows of a record in an entirely new store (tag)
+            Operation::Download {
+                host: remote_ahead.host.id,
+                tag: remote_ahead.tag,
+                local: None,
+                remote: 0,
             },
         ]
     );
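The decision rule these tests encode — compare per-store indices, upload when local is ahead, download when remote is, do nothing when they match — can be sketched independently of Atuin's types. A minimal model; the `Op` enum and `decide` function here are illustrative stand-ins, not Atuin's real `Operation` API:

```rust
// Hypothetical stand-in for the sync `Operation` enum, carrying the same
// local/remote index fields the tests above assert on.
#[derive(Debug, PartialEq)]
enum Op {
    Upload { local: u64, remote: Option<u64> },
    Download { local: Option<u64>, remote: u64 },
    Noop,
}

// Given the max idx each side holds for one (host, tag) store, decide
// the operation. `None` means that side has never seen the store.
fn decide(local: Option<u64>, remote: Option<u64>) -> Op {
    match (local, remote) {
        (Some(l), None) => Op::Upload { local: l, remote: None },
        (None, Some(r)) => Op::Download { local: None, remote: r },
        (Some(l), Some(r)) if l > r => Op::Upload { local: l, remote: Some(r) },
        (Some(l), Some(r)) if l < r => Op::Download { local: Some(l), remote: r },
        _ => Op::Noop, // equal indices, or neither side has the store
    }
}

fn main() {
    // local is ahead by one: upload, telling the server it holds idx 0
    assert_eq!(decide(Some(1), Some(0)), Op::Upload { local: 1, remote: Some(0) });
    // remote has a store local knows nothing about: download from idx 0
    assert_eq!(decide(None, Some(0)), Op::Download { local: None, remote: 0 });
    // both at the same idx: nothing to do
    assert_eq!(decide(Some(3), Some(3)), Op::Noop);
}
```

Because an idx is a dense counter rather than a tail pointer, this comparison also tells each side exactly how many records to transfer, which the old linked-list tails could not.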
@@ -371,66 +404,160 @@ mod tests {
     // One known only by remote
     let shared_record = test_record();

-    let remote_known = test_record();
-    let local_known = test_record();
+    let local_only = test_record();
+
+    let local_only_20 = test_record();
+    let local_only_21 = local_only_20
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let local_only_22 = local_only_21
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let local_only_23 = local_only_22
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+
+    let remote_only = test_record();
+
+    let remote_only_20 = test_record();
+    let remote_only_21 = remote_only_20
+        .append(vec![2, 3, 2])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let remote_only_22 = remote_only_21
+        .append(vec![2, 3, 2])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let remote_only_23 = remote_only_22
+        .append(vec![2, 3, 2])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let remote_only_24 = remote_only_23
+        .append(vec![2, 3, 2])
+        .encrypt::<PASETO_V4>(&[0; 32]);

     let second_shared = test_record();
     let second_shared_remote_ahead = second_shared
-        .new_child(vec![1, 2, 3])
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let second_shared_remote_ahead2 = second_shared_remote_ahead
+        .append(vec![1, 2, 3])
         .encrypt::<PASETO_V4>(&[0; 32]);

-    let local_ahead = shared_record
-        .new_child(vec![1, 2, 3])
+    let third_shared = test_record();
+    let third_shared_local_ahead = third_shared
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let third_shared_local_ahead2 = third_shared_local_ahead
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+
+    let fourth_shared = test_record();
+    let fourth_shared_remote_ahead = fourth_shared
+        .append(vec![1, 2, 3])
+        .encrypt::<PASETO_V4>(&[0; 32]);
+    let fourth_shared_remote_ahead2 = fourth_shared_remote_ahead
+        .append(vec![1, 2, 3])
         .encrypt::<PASETO_V4>(&[0; 32]);

     let local = vec![
         shared_record.clone(),
         second_shared.clone(),
-        local_known.clone(),
-        local_ahead.clone(),
+        third_shared.clone(),
+        fourth_shared.clone(),
+        fourth_shared_remote_ahead.clone(),
+        // single store, only local has it
+        local_only.clone(),
+        // bigger store, also only known by local
+        local_only_20.clone(),
+        local_only_21.clone(),
+        local_only_22.clone(),
+        local_only_23.clone(),
+        // another shared store, but local is ahead on this one
+        third_shared_local_ahead.clone(),
+        third_shared_local_ahead2.clone(),
     ];

     let remote = vec![
+        remote_only.clone(),
+        remote_only_20.clone(),
+        remote_only_21.clone(),
+        remote_only_22.clone(),
+        remote_only_23.clone(),
+        remote_only_24.clone(),
         shared_record.clone(),
         second_shared.clone(),
+        third_shared.clone(),
         second_shared_remote_ahead.clone(),
-        remote_known.clone(),
+        second_shared_remote_ahead2.clone(),
+        fourth_shared.clone(),
+        fourth_shared_remote_ahead.clone(),
+        fourth_shared_remote_ahead2.clone(),
     ]; // remote knows about the already-synced, and one new record in a new store

     let (store, diff) = build_test_diff(local, remote).await;
     let operations = sync::operations(diff, &store).await.unwrap();

-    assert_eq!(operations.len(), 4);
+    assert_eq!(operations.len(), 7);

     let mut result_ops = vec![
+        // We started with a shared record, but the remote knows of two newer records in the
+        // same store
         Operation::Download {
-            tail: remote_known.id,
-            host: remote_known.host,
-            tag: remote_known.tag,
+            local: Some(0),
+            remote: 2,
+            host: second_shared_remote_ahead.host.id,
+            tag: second_shared_remote_ahead.tag,
         },
+        // We have a shared record, local knows of the first two but not the last
         Operation::Download {
-            tail: second_shared_remote_ahead.id,
-            host: second_shared.host,
-            tag: second_shared.tag,
+            local: Some(1),
+            remote: 2,
+            host: fourth_shared_remote_ahead2.host.id,
+            tag: fourth_shared_remote_ahead2.tag,
         },
-        Operation::Upload {
-            tail: local_ahead.id,
-            host: local_ahead.host,
-            tag: local_ahead.tag,
+        // Remote knows of a store with a single record that local does not have
+        Operation::Download {
+            local: None,
+            remote: 0,
+            host: remote_only.host.id,
+            tag: remote_only.tag,
         },
+        // Remote knows of a store with a bunch of records that local does not have
+        Operation::Download {
+            local: None,
+            remote: 4,
+            host: remote_only_20.host.id,
+            tag: remote_only_20.tag,
+        },
+        // Local knows of a record in a store that remote does not have
         Operation::Upload {
-            tail: local_known.id,
-            host: local_known.host,
-            tag: local_known.tag,
+            local: 0,
+            remote: None,
+            host: local_only.host.id,
+            tag: local_only.tag,
+        },
+        // Local knows of 4 records in a store that remote does not have
+        Operation::Upload {
+            local: 3,
+            remote: None,
+            host: local_only_20.host.id,
+            tag: local_only_20.tag,
+        },
+        // Local knows of 2 more records in a shared store that remote only has one of
+        Operation::Upload {
+            local: 2,
+            remote: Some(0),
+            host: third_shared.host.id,
+            tag: third_shared.tag,
         },
     ];

     result_ops.sort_by_key(|op| match op {
-        Operation::Upload { tail, host, .. } => ("upload", *host, *tail),
-        Operation::Download { tail, host, .. } => ("download", *host, *tail),
+        Operation::Noop { host, tag } => (0, *host, tag.clone()),
+        Operation::Upload { host, tag, .. } => (1, *host, tag.clone()),
+        Operation::Download { host, tag, .. } => (2, *host, tag.clone()),
     });

-    assert_eq!(operations, result_ops);
+    assert_eq!(result_ops, operations);
 }
 }


@@ -145,30 +145,18 @@ pub enum WordJumpMode {
 #[derive(Clone, Debug, Deserialize)]
 pub struct Stats {
-    #[serde(default = "Stats::common_prefix_default")]
     pub common_prefix: Vec<String>, // sudo, etc. commands we want to strip off
-    #[serde(default = "Stats::common_subcommands_default")]
     pub common_subcommands: Vec<String>, // kubectl, commands we should consider subcommands for
 }

-impl Stats {
-    fn common_prefix_default() -> Vec<String> {
-        vec!["sudo", "doas"].into_iter().map(String::from).collect()
-    }
-
-    fn common_subcommands_default() -> Vec<String> {
-        vec!["cargo", "go", "git", "npm", "yarn", "pnpm", "kubectl"]
-            .into_iter()
-            .map(String::from)
-            .collect()
-    }
-}
-
 impl Default for Stats {
     fn default() -> Self {
         Self {
-            common_prefix: Self::common_prefix_default(),
-            common_subcommands: Self::common_subcommands_default(),
+            common_prefix: vec!["sudo", "doas"].into_iter().map(String::from).collect(),
+            common_subcommands: vec!["cargo", "go", "git", "npm", "yarn", "pnpm", "kubectl"]
+                .into_iter()
+                .map(String::from)
+                .collect(),
         }
     }
 }


@@ -20,9 +20,6 @@ rand = { workspace = true }
 typed-builder = { workspace = true }
 eyre = { workspace = true }
 sqlx = { workspace = true }
-semver = { workspace = true }
-
-lazy_static = "1.4.0"

 [dev-dependencies]
 pretty_assertions = { workspace = true }


@@ -1,18 +1,7 @@
-use lazy_static::lazy_static;
-use semver::Version;
 use serde::{Deserialize, Serialize};
 use std::borrow::Cow;
 use time::OffsetDateTime;

-// the usage of X- has been deprecated for quite along time, it turns out
-pub static ATUIN_HEADER_VERSION: &str = "Atuin-Version";
-pub static ATUIN_CARGO_VERSION: &str = env!("CARGO_PKG_VERSION");
-
-lazy_static! {
-    pub static ref ATUIN_VERSION: Version =
-        Version::parse(ATUIN_CARGO_VERSION).expect("failed to parse self semver");
-}
-
 #[derive(Debug, Serialize, Deserialize)]
 pub struct UserResponse {
     pub username: String,


@@ -18,9 +18,30 @@ pub struct EncryptedData {
 pub struct Diff {
     pub host: HostId,
     pub tag: String,
-    pub tail: RecordId,
+    pub local: Option<RecordIdx>,
+    pub remote: Option<RecordIdx>,
 }

+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
+pub struct Host {
+    pub id: HostId,
+    pub name: String,
+}
+
+impl Host {
+    pub fn new(id: HostId) -> Self {
+        Host {
+            id,
+            name: String::new(),
+        }
+    }
+}
+
+new_uuid!(RecordId);
+new_uuid!(HostId);
+
+pub type RecordIdx = u64;
+
 /// A single record stored inside of our local database
 #[derive(Debug, Clone, PartialEq, Serialize, Deserialize, TypedBuilder)]
 pub struct Record<Data> {
@@ -28,18 +49,14 @@ pub struct Record<Data> {
     #[builder(default = RecordId(crate::utils::uuid_v7()))]
     pub id: RecordId,

+    /// The integer record ID. This is only unique per (host, tag).
+    pub idx: RecordIdx,
+
     /// The unique ID of the host.
     // TODO(ellie): Optimize the storage here. We use a bunch of IDs, and currently store
     // as strings. I would rather avoid normalization, so store as UUID binary instead of
     // encoding to a string and wasting much more storage.
-    pub host: HostId,
-
-    /// The ID of the parent entry
-    // A store is technically just a double linked list
-    // We can do some cheating with the timestamps, but should not rely upon them.
-    // Clocks are tricksy.
-    #[builder(default)]
-    pub parent: Option<RecordId>,
+    pub host: Host,

     /// The creation time in nanoseconds since unix epoch
     #[builder(default = time::OffsetDateTime::now_utc().unix_timestamp_nanos() as u64)]
@@ -56,25 +73,22 @@ pub struct Record<Data> {
     pub data: Data,
 }

-new_uuid!(RecordId);
-new_uuid!(HostId);
-
 /// Extra data from the record that should be encoded in the data
 #[derive(Debug, Copy, Clone)]
 pub struct AdditionalData<'a> {
     pub id: &'a RecordId,
+    pub idx: &'a u64,
     pub version: &'a str,
     pub tag: &'a str,
     pub host: &'a HostId,
-    pub parent: Option<&'a RecordId>,
 }

 impl<Data> Record<Data> {
-    pub fn new_child(&self, data: Vec<u8>) -> Record<DecryptedData> {
+    pub fn append(&self, data: Vec<u8>) -> Record<DecryptedData> {
         Record::builder()
-            .host(self.host)
+            .host(self.host.clone())
             .version(self.version.clone())
-            .parent(Some(self.id))
+            .idx(self.idx + 1)
             .tag(self.tag.clone())
             .data(DecryptedData(data))
             .build()
@@ -84,74 +98,76 @@ impl<Data> Record<Data> {
 /// An index representing the current state of the record stores
 /// This can be both remote, or local, and compared in either direction
 #[derive(Debug, Serialize, Deserialize)]
-pub struct RecordIndex {
-    // A map of host -> tag -> tail
-    pub hosts: HashMap<HostId, HashMap<String, RecordId>>,
+pub struct RecordStatus {
+    // A map of host -> tag -> max(idx)
+    pub hosts: HashMap<HostId, HashMap<String, RecordIdx>>,
 }

-impl Default for RecordIndex {
+impl Default for RecordStatus {
     fn default() -> Self {
         Self::new()
     }
 }

-impl Extend<(HostId, String, RecordId)> for RecordIndex {
-    fn extend<T: IntoIterator<Item = (HostId, String, RecordId)>>(&mut self, iter: T) {
-        for (host, tag, tail_id) in iter {
-            self.set_raw(host, tag, tail_id);
+impl Extend<(HostId, String, RecordIdx)> for RecordStatus {
+    fn extend<T: IntoIterator<Item = (HostId, String, RecordIdx)>>(&mut self, iter: T) {
+        for (host, tag, tail_idx) in iter {
+            self.set_raw(host, tag, tail_idx);
         }
     }
 }

-impl RecordIndex {
-    pub fn new() -> RecordIndex {
-        RecordIndex {
+impl RecordStatus {
+    pub fn new() -> RecordStatus {
+        RecordStatus {
             hosts: HashMap::new(),
         }
     }

     /// Insert a new tail record into the store
     pub fn set(&mut self, tail: Record<DecryptedData>) {
-        self.set_raw(tail.host, tail.tag, tail.id)
+        self.set_raw(tail.host.id, tail.tag, tail.idx)
     }

-    pub fn set_raw(&mut self, host: HostId, tag: String, tail_id: RecordId) {
+    pub fn set_raw(&mut self, host: HostId, tag: String, tail_id: RecordIdx) {
         self.hosts.entry(host).or_default().insert(tag, tail_id);
     }

-    pub fn get(&self, host: HostId, tag: String) -> Option<RecordId> {
+    pub fn get(&self, host: HostId, tag: String) -> Option<RecordIdx> {
         self.hosts.get(&host).and_then(|v| v.get(&tag)).cloned()
     }

     /// Diff this index with another, likely remote index.
     /// The two diffs can then be reconciled, and the optimal change set calculated
     /// Returns a tuple, with (host, tag, Option(OTHER))
-    /// OTHER is set to the value of the tail on the other machine. For example, if the
-    /// other machine has a different tail, it will be the differing tail. This is useful to
-    /// check if the other index is ahead of us, or behind.
-    /// If the other index does not have the (host, tag) pair, then the other value will be None.
+    /// OTHER is set to the value of the idx on the other machine. If it is greater than our index,
+    /// then we need to do some downloading. If it is smaller, then we need to do some uploading
+    /// Note that we cannot upload if we are not the owner of the record store - hosts can only
+    /// write to their own store.
     pub fn diff(&self, other: &Self) -> Vec<Diff> {
         let mut ret = Vec::new();

         // First, we check if other has everything that self has
         for (host, tag_map) in self.hosts.iter() {
-            for (tag, tail) in tag_map.iter() {
+            for (tag, idx) in tag_map.iter() {
                 match other.get(*host, tag.clone()) {
                     // The other store is all up to date! No diff.
-                    Some(t) if t.eq(tail) => continue,
+                    Some(t) if t.eq(idx) => continue,

-                    // The other store does exist, but it is either ahead or behind us. A diff regardless
+                    // The other store does exist, and it is either ahead or behind us. A diff regardless
                     Some(t) => ret.push(Diff {
                         host: *host,
                         tag: tag.clone(),
-                        tail: t,
+                        local: Some(*idx),
+                        remote: Some(t),
                     }),

                     // The other store does not exist :O
                     None => ret.push(Diff {
                         host: *host,
                         tag: tag.clone(),
-                        tail: *tail,
+                        local: Some(*idx),
+                        remote: None,
                     }),
                 };
             }
@@ -162,7 +178,7 @@ impl RecordIndex {
         // account for that!
         for (host, tag_map) in other.hosts.iter() {
-            for (tag, tail) in tag_map.iter() {
+            for (tag, idx) in tag_map.iter() {
                 match self.get(*host, tag.clone()) {
                     // If we have this host/tag combo, the comparison and diff will have already happened above
                     Some(_) => continue,
@@ -170,13 +186,22 @@ impl RecordIndex {
                     None => ret.push(Diff {
                         host: *host,
                         tag: tag.clone(),
-                        tail: *tail,
+                        remote: Some(*idx),
+                        local: None,
                     }),
                 };
             }
         }

-        ret.sort_by(|a, b| (a.host, a.tag.clone(), a.tail).cmp(&(b.host, b.tag.clone(), b.tail)));
+        // Stability is a nice property to have
+        ret.sort_by(|a, b| {
+            (a.host, a.tag.clone(), a.local, a.remote).cmp(&(
+                b.host,
+                b.tag.clone(),
+                b.local,
+                b.remote,
+            ))
+        });

         ret
     }
 }
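The two-pass shape of `RecordStatus::diff` can be sketched with plain maps: one pass over the stores we know about, one pass over stores only the other side knows about. This is a stripped-down illustrative model, assuming each (host, tag) store is fully summarized by its max index; the `(u8, &str)` keys and result tuples are stand-ins for the real `HostId`/`Diff` types:

```rust
use std::collections::HashMap;

// (host, tag) -> max idx, standing in for RecordStatus.hosts
type Status = HashMap<(u8, &'static str), u64>;

// Returns (store key, our idx, their idx), with None meaning "never seen"
fn diff(ours: &Status, theirs: &Status) -> Vec<((u8, &'static str), Option<u64>, Option<u64>)> {
    let mut ret = Vec::new();

    // Pass 1: everything we have, compared against the other side
    for (key, idx) in ours {
        match theirs.get(key) {
            Some(t) if t == idx => continue,                   // in sync, no diff
            Some(t) => ret.push((*key, Some(*idx), Some(*t))), // ahead or behind
            None => ret.push((*key, Some(*idx), None)),        // they lack the store
        }
    }

    // Pass 2: stores only the other side has
    for (key, idx) in theirs {
        if !ours.contains_key(key) {
            ret.push((*key, None, Some(*idx)));
        }
    }

    ret.sort(); // stability is a nice property to have
    ret
}

fn main() {
    let ours = Status::from([((1, "history"), 5), ((1, "kv"), 2)]);
    let theirs = Status::from([((1, "history"), 5), ((2, "history"), 0)]);

    let d = diff(&ours, &theirs);
    // "kv" only exists locally; host 2's history only exists remotely
    assert_eq!(
        d,
        vec![((1, "kv"), Some(2), None), ((2, "history"), None, Some(0))]
    );
}
```

Because indices are totally ordered per store, a single `max(idx)` on each side is enough to derive the whole change set, where the linked-list version had to walk parent pointers to find divergence.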
@@ -201,14 +226,14 @@ impl Record<DecryptedData> {
             id: &self.id,
             version: &self.version,
             tag: &self.tag,
-            host: &self.host,
-            parent: self.parent.as_ref(),
+            host: &self.host.id,
+            idx: &self.idx,
         };
         Record {
             data: E::encrypt(self.data, ad, key),
             id: self.id,
             host: self.host,
-            parent: self.parent,
+            idx: self.idx,
             timestamp: self.timestamp,
             version: self.version,
             tag: self.tag,
@@ -222,14 +247,14 @@ impl Record<EncryptedData> {
             id: &self.id,
             version: &self.version,
             tag: &self.tag,
-            host: &self.host,
-            parent: self.parent.as_ref(),
+            host: &self.host.id,
+            idx: &self.idx,
         };
         Ok(Record {
             data: E::decrypt(self.data, ad, key)?,
             id: self.id,
             host: self.host,
-            parent: self.parent,
+            idx: self.idx,
             timestamp: self.timestamp,
             version: self.version,
             tag: self.tag,
@@ -245,14 +270,14 @@ impl Record<EncryptedData> {
             id: &self.id,
             version: &self.version,
             tag: &self.tag,
-            host: &self.host,
-            parent: self.parent.as_ref(),
+            host: &self.host.id,
+            idx: &self.idx,
         };
         Ok(Record {
             data: E::re_encrypt(self.data, ad, old_key, new_key)?,
             id: self.id,
             host: self.host,
-            parent: self.parent,
+            idx: self.idx,
             timestamp: self.timestamp,
             version: self.version,
             tag: self.tag,
@@ -262,31 +287,32 @@ impl Record<EncryptedData> {
 #[cfg(test)]
 mod tests {
-    use crate::record::HostId;
+    use crate::record::{Host, HostId};

-    use super::{DecryptedData, Diff, Record, RecordIndex};
+    use super::{DecryptedData, Diff, Record, RecordStatus};
     use pretty_assertions::assert_eq;

     fn test_record() -> Record<DecryptedData> {
         Record::builder()
-            .host(HostId(crate::utils::uuid_v7()))
+            .host(Host::new(HostId(crate::utils::uuid_v7())))
             .version("v1".into())
             .tag(crate::utils::uuid_v7().simple().to_string())
             .data(DecryptedData(vec![0, 1, 2, 3]))
+            .idx(0)
             .build()
     }

     #[test]
     fn record_index() {
-        let mut index = RecordIndex::new();
+        let mut index = RecordStatus::new();
         let record = test_record();
         index.set(record.clone());

-        let tail = index.get(record.host, record.tag);
+        let tail = index.get(record.host.id, record.tag);

         assert_eq!(
-            record.id,
+            record.idx,
             tail.expect("tail not in store"),
             "tail in store did not match"
         );
@@ -294,17 +320,17 @@ mod tests {
     #[test]
     fn record_index_overwrite() {
-        let mut index = RecordIndex::new();
+        let mut index = RecordStatus::new();
         let record = test_record();
-        let child = record.new_child(vec![1, 2, 3]);
+        let child = record.append(vec![1, 2, 3]);

         index.set(record.clone());
         index.set(child.clone());

-        let tail = index.get(record.host, record.tag);
+        let tail = index.get(record.host.id, record.tag);

         assert_eq!(
-            child.id,
+            child.idx,
             tail.expect("tail not in store"),
             "tail in store did not match"
         );
@@ -314,8 +340,8 @@ mod tests {
     fn record_index_no_diff() {
         // Here, they both have the same version and should have no diff

-        let mut index1 = RecordIndex::new();
-        let mut index2 = RecordIndex::new();
+        let mut index1 = RecordStatus::new();
+        let mut index2 = RecordStatus::new();

         let record1 = test_record();
@@ -331,11 +357,11 @@ mod tests {
     fn record_index_single_diff() {
         // Here, they both have the same stores, but one is ahead by a single record

-        let mut index1 = RecordIndex::new();
-        let mut index2 = RecordIndex::new();
+        let mut index1 = RecordStatus::new();
+        let mut index2 = RecordStatus::new();

         let record1 = test_record();
-        let record2 = record1.new_child(vec![1, 2, 3]);
+        let record2 = record1.append(vec![1, 2, 3]);

         index1.set(record1);
         index2.set(record2.clone());
@@ -346,9 +372,10 @@ mod tests {
     assert_eq!(
         diff[0],
         Diff {
-            host: record2.host,
+            host: record2.host.id,
             tag: record2.tag,
-            tail: record2.id
+            remote: Some(1),
+            local: Some(0)
         }
     );
 }
@@ -356,14 +383,14 @@ mod tests {
     #[test]
     fn record_index_multi_diff() {
         // A much more complex case, with a bunch more checks
-        let mut index1 = RecordIndex::new();
-        let mut index2 = RecordIndex::new();
+        let mut index1 = RecordStatus::new();
+        let mut index2 = RecordStatus::new();

         let store1record1 = test_record();
-        let store1record2 = store1record1.new_child(vec![1, 2, 3]);
+        let store1record2 = store1record1.append(vec![1, 2, 3]);

         let store2record1 = test_record();
-        let store2record2 = store2record1.new_child(vec![1, 2, 3]);
+        let store2record2 = store2record1.append(vec![1, 2, 3]);

         let store3record1 = test_record();


@@ -12,8 +12,36 @@ pub fn random_bytes<const N: usize>() -> [u8; N] {
     ret
 }

+// basically just ripped from the uuid crate. they have it as unstable, but we can use it fine.
+const fn encode_unix_timestamp_millis(millis: u64, random_bytes: &[u8; 10]) -> Uuid {
+    let millis_high = ((millis >> 16) & 0xFFFF_FFFF) as u32;
+    let millis_low = (millis & 0xFFFF) as u16;
+
+    let random_and_version =
+        (random_bytes[0] as u16 | ((random_bytes[1] as u16) << 8) & 0x0FFF) | (0x7 << 12);
+
+    let mut d4 = [0; 8];
+
+    d4[0] = (random_bytes[2] & 0x3F) | 0x80;
+    d4[1] = random_bytes[3];
+    d4[2] = random_bytes[4];
+    d4[3] = random_bytes[5];
+    d4[4] = random_bytes[6];
+    d4[5] = random_bytes[7];
+    d4[6] = random_bytes[8];
+    d4[7] = random_bytes[9];
+
+    Uuid::from_fields(millis_high, millis_low, random_and_version, &d4)
+}
+
 pub fn uuid_v7() -> Uuid {
-    Uuid::now_v7()
+    let bytes = random_bytes();
+    let now: u64 = u64::try_from(
+        time::OffsetDateTime::now_utc().unix_timestamp_nanos() / 1_000_000,
+    )
+    .expect("Either you're in the past (1970) - or you're in the far future (2554). Good for you");
+
+    encode_unix_timestamp_millis(now, &bytes)
 }

 pub fn uuid_v4() -> String {
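The layout `encode_unix_timestamp_millis` builds can be sketched without the `uuid` crate: a 48-bit big-endian millisecond timestamp, a version nibble of 7, and the RFC 4122 variant bits, with the rest random. This is an illustrative byte-level sketch (the exact placement of the random bits differs slightly from the code above), showing why v7 ids make for "happy indices": later timestamps compare greater, so a btree primary key stays append-mostly.

```rust
// Illustrative UUIDv7-style layout as raw bytes; not byte-for-byte identical
// to the encode_unix_timestamp_millis above, but the same field structure.
fn uuid_v7_bytes(millis: u64, random: &[u8; 10]) -> [u8; 16] {
    let mut b = [0u8; 16];
    // bytes 0..6: 48-bit big-endian unix timestamp in milliseconds
    b[..6].copy_from_slice(&millis.to_be_bytes()[2..]);
    // byte 6: version nibble (0x7) in the high bits, plus random bits
    b[6] = 0x70 | (random[0] & 0x0F);
    b[7] = random[1];
    // byte 8: RFC 4122 variant (0b10) in the top two bits, plus random bits
    b[8] = 0x80 | (random[2] & 0x3F);
    b[9..].copy_from_slice(&random[3..]);
    b
}

fn main() {
    let a = uuid_v7_bytes(1_700_000_000_000, &[0xAB; 10]);
    let b = uuid_v7_bytes(1_700_000_000_001, &[0x01; 10]);

    assert_eq!(a[6] >> 4, 0x7); // version 7
    assert_eq!(a[8] >> 6, 0b10); // RFC 4122 variant
    // later timestamp sorts after the earlier one, regardless of the random bits
    assert!(a < b);
}
```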


@@ -14,7 +14,7 @@ use self::{
     models::{History, NewHistory, NewSession, NewUser, Session, User},
 };
 use async_trait::async_trait;
-use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIndex};
+use atuin_common::record::{EncryptedData, HostId, Record, RecordIdx, RecordStatus};
 use serde::{de::DeserializeOwned, Serialize};
 use time::{Date, Duration, Month, OffsetDateTime, Time, UtcOffset};
 use tracing::instrument;
@@ -68,12 +68,12 @@ pub trait Database: Sized + Clone + Send + Sync + 'static {
         user: &User,
         host: HostId,
         tag: String,
-        start: Option<RecordId>,
+        start: Option<RecordIdx>,
         count: u64,
     ) -> DbResult<Vec<Record<EncryptedData>>>;

-    // Return the tail record ID for each store, so (HostID, Tag, TailRecordID)
-    async fn tail_records(&self, user: &User) -> DbResult<RecordIndex>;
+    async fn status(&self, user: &User) -> DbResult<RecordStatus>;

     async fn count_history_range(&self, user: &User, range: Range<OffsetDateTime>)
         -> DbResult<i64>;


@@ -0,0 +1,15 @@
+-- Add migration script here
+create table store (
+    id uuid primary key,       -- remember to use uuidv7 for happy indices <3
+    client_id uuid not null,   -- I am too uncomfortable with the idea of a client-generated primary key, even though it's fine mathematically
+    host uuid not null,        -- a unique identifier for the host
+    idx bigint not null,       -- the index of the record in this store, identified by (host, tag)
+    timestamp bigint not null, -- not a timestamp type, as those do not have nanosecond precision
+    version text not null,
+    tag text not null,         -- what is this? history, kv, whatever. Remember clients get a log per tag per host
+    data text not null,        -- store the actual history data, encrypted. I don't wanna know!
+    cek text not null,
+
+    user_id bigint not null,   -- allow multiple users
+    created_at timestamp not null default current_timestamp
+);


@@ -0,0 +1,2 @@
+-- Add migration script here
+create unique index record_uniq ON store(user_id, host, tag, idx);
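Together with the `on conflict do nothing` in the insert above, this unique index makes pushes idempotent: a client that retries after a flaky connection can re-send a record with an already-seen (user, host, tag, idx) and the server silently drops the duplicate. A tiny in-memory model of that property, with a `HashSet` standing in for the index (the `Store` type here is purely illustrative, not the server's actual storage):

```rust
use std::collections::HashSet;

// In-memory stand-in for the `store` table plus its unique index.
#[derive(Default)]
struct Store {
    seen: HashSet<(i64, u128, String, u64)>, // (user_id, host, tag, idx)
    rows: Vec<String>,
}

impl Store {
    // Returns true if the row was actually inserted.
    fn push(&mut self, user: i64, host: u128, tag: &str, idx: u64, data: &str) -> bool {
        // HashSet::insert returns false for duplicates, mirroring the
        // unique index rejecting the row and `on conflict do nothing`.
        if self.seen.insert((user, host, tag.to_string(), idx)) {
            self.rows.push(data.to_string());
            true
        } else {
            false // conflict: do nothing
        }
    }
}

fn main() {
    let mut store = Store::default();
    assert!(store.push(1, 42, "history", 0, "echo hello"));
    // a client retrying after a dropped connection re-sends the same record
    assert!(!store.push(1, 42, "history", 0, "echo hello"));
    assert_eq!(store.rows.len(), 1);
}
```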


@@ -1,7 +1,7 @@
 use std::ops::Range;

 use async_trait::async_trait;
-use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIndex};
+use atuin_common::record::{EncryptedData, HostId, Record, RecordIdx, RecordStatus};
 use atuin_server_database::models::{History, NewHistory, NewSession, NewUser, Session, User};
 use atuin_server_database::{Database, DbError, DbResult};
 use futures_util::TryStreamExt;
@@ -11,6 +11,7 @@ use sqlx::Row;
 use time::{OffsetDateTime, PrimitiveDateTime, UtcOffset};
 use tracing::instrument;
+use uuid::Uuid;
 use wrappers::{DbHistory, DbRecord, DbSession, DbUser};

 mod wrappers;
@@ -361,16 +362,16 @@ impl Database for Postgres {
         let id = atuin_common::utils::uuid_v7();

         sqlx::query(
-            "insert into records
-                (id, client_id, host, parent, timestamp, version, tag, data, cek, user_id)
+            "insert into store
+                (id, client_id, host, idx, timestamp, version, tag, data, cek, user_id)
             values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
             on conflict do nothing
             ",
         )
         .bind(id)
         .bind(i.id)
-        .bind(i.host)
-        .bind(i.parent)
+        .bind(i.host.id)
+        .bind(i.idx as i64)
         .bind(i.timestamp as i64) // throwing away some data, but i64 is still big in terms of time
         .bind(&i.version)
         .bind(&i.tag)
@ -393,62 +394,69 @@ impl Database for Postgres {
user: &User, user: &User,
host: HostId, host: HostId,
tag: String, tag: String,
start: Option<RecordId>, start: Option<RecordIdx>,
count: u64, count: u64,
) -> DbResult<Vec<Record<EncryptedData>>> { ) -> DbResult<Vec<Record<EncryptedData>>> {
tracing::debug!("{:?} - {:?} - {:?}", host, tag, start); tracing::debug!("{:?} - {:?} - {:?}", host, tag, start);
let mut ret = Vec::with_capacity(count as usize); let start = start.unwrap_or(0);
let mut parent = start;
// yeah let's do something better let records: Result<Vec<DbRecord>, DbError> = sqlx::query_as(
for _ in 0..count { "select client_id, host, idx, timestamp, version, tag, data, cek from store
// a very much not ideal query. but it's simple at least?
// we are basically using postgres as a kv store here, so... maybe consider using an actual
// kv store?
let record: Result<DbRecord, DbError> = sqlx::query_as(
"select client_id, host, parent, timestamp, version, tag, data, cek from records
where user_id = $1 where user_id = $1
and tag = $2 and tag = $2
and host = $3 and host = $3
and parent is not distinct from $4", and idx >= $4
) order by idx asc
.bind(user.id) limit $5",
.bind(tag.clone()) )
.bind(host) .bind(user.id)
.bind(parent) .bind(tag.clone())
.fetch_one(&self.pool) .bind(host)
.await .bind(start as i64)
.map_err(fix_error); .bind(count as i64)
.fetch_all(&self.pool)
.await
.map_err(fix_error);
match record { let ret = match records {
Ok(record) => { Ok(records) => {
let record: Record<EncryptedData> = record.into(); let records: Vec<Record<EncryptedData>> = records
ret.push(record.clone()); .into_iter()
.map(|f| {
let record: Record<EncryptedData> = f.into();
record
})
.collect();
parent = Some(record.id); records
}
Err(DbError::NotFound) => {
tracing::debug!("hit tail of store: {:?}/{}", host, tag);
return Ok(ret);
}
Err(e) => return Err(e),
} }
} Err(DbError::NotFound) => {
tracing::debug!("no records found in store: {:?}/{}", host, tag);
return Ok(vec![]);
}
Err(e) => return Err(e),
};
Ok(ret) Ok(ret)
} }
async fn tail_records(&self, user: &User) -> DbResult<RecordIndex> { async fn status(&self, user: &User) -> DbResult<RecordStatus> {
const TAIL_RECORDS_SQL: &str = "select host, tag, client_id from records rp where (select count(1) from records where parent=rp.client_id and user_id = $1) = 0 and user_id = $1;"; const STATUS_SQL: &str =
"select host, tag, max(idx) from store where user_id = $1 group by host, tag";
let res = sqlx::query_as(TAIL_RECORDS_SQL) let res: Vec<(Uuid, String, i64)> = sqlx::query_as(STATUS_SQL)
.bind(user.id) .bind(user.id)
.fetch(&self.pool) .fetch_all(&self.pool)
.try_collect()
.await .await
.map_err(fix_error)?; .map_err(fix_error)?;
Ok(res) let mut status = RecordStatus::new();
for i in res {
status.set_raw(HostId(i.0), i.1, i.2 as u64);
}
Ok(status)
} }
} }
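
Roughly, the two new queries behave like this in-memory model: paging becomes a range scan over a monotonic index instead of a pointer chase through `parent` links, and status is just a per-store max. The types below are illustrative stand-ins, not the real `DbRecord`/`RecordStatus`:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Record {
    host: &'static str,
    tag: &'static str,
    idx: u64,
    data: &'static str,
}

// Mirrors `select ... where idx >= $4 order by idx asc limit $5`.
fn next_records(store: &[Record], host: &str, tag: &str, start: u64, count: usize) -> Vec<Record> {
    let mut page: Vec<Record> = store
        .iter()
        .filter(|r| r.host == host && r.tag == tag && r.idx >= start)
        .cloned()
        .collect();
    page.sort_by_key(|r| r.idx);
    page.truncate(count);
    page
}

// Mirrors `select host, tag, max(idx) ... group by host, tag`.
fn status(store: &[Record]) -> HashMap<(String, String), u64> {
    let mut out = HashMap::new();
    for r in store {
        let e = out.entry((r.host.to_string(), r.tag.to_string())).or_insert(r.idx);
        *e = (*e).max(r.idx);
    }
    out
}

fn main() {
    let store = vec![
        Record { host: "a", tag: "history", idx: 0, data: "zero" },
        Record { host: "a", tag: "history", idx: 1, data: "one" },
        Record { host: "a", tag: "history", idx: 2, data: "two" },
        Record { host: "b", tag: "kv", idx: 0, data: "other" },
    ];

    // page of two, starting at idx 1
    let page = next_records(&store, "a", "history", 1, 2);
    assert_eq!(page.len(), 2);
    assert_eq!(page[0].idx, 1);

    // highest idx per (host, tag)
    let st = status(&store);
    assert_eq!(st[&("a".to_string(), "history".to_string())], 2);
}
```

The old linked-list version needed one round trip per record; this shape serves a whole page per query and can be satisfied by a single btree index on `(user_id, host, tag, idx)`.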


@@ -1,5 +1,5 @@
use ::sqlx::{FromRow, Result};
-use atuin_common::record::{EncryptedData, Record};
+use atuin_common::record::{EncryptedData, Host, Record};
use atuin_server_database::models::{History, Session, User};
use sqlx::{postgres::PgRow, Row};
use time::PrimitiveDateTime;

@@ -51,6 +51,7 @@ impl<'a> ::sqlx::FromRow<'a, PgRow> for DbHistory {
impl<'a> ::sqlx::FromRow<'a, PgRow> for DbRecord {
    fn from_row(row: &'a PgRow) -> ::sqlx::Result<Self> {
        let timestamp: i64 = row.try_get("timestamp")?;
+        let idx: i64 = row.try_get("idx")?;

        let data = EncryptedData {
            data: row.try_get("data")?,

@@ -59,8 +60,8 @@ impl<'a> ::sqlx::FromRow<'a, PgRow> for DbRecord {
        Ok(Self(Record {
            id: row.try_get("client_id")?,
-            host: row.try_get("host")?,
-            parent: row.try_get("parent")?,
+            host: Host::new(row.try_get("host")?),
+            idx: idx as u64,
            timestamp: timestamp as u64,
            version: row.try_get("version")?,
            tag: row.try_get("tag")?,


@@ -26,16 +26,11 @@ rand = { workspace = true }
tokio = { workspace = true }
async-trait = { workspace = true }
axum = "0.6.4"
-axum-server = { version = "0.5.1", features = ["tls-rustls"] }
http = "0.2"
-hyper = "0.14"
-hyper-rustls = "0.24"
fs-err = { workspace = true }
tower = "0.4"
tower-http = { version = "0.4", features = ["trace"] }
reqwest = { workspace = true }
-rustls = "0.21"
-rustls-pemfile = "1.0"
argon2 = "0.5.0"
semver = { workspace = true }
metrics-exporter-prometheus = "0.12.1"


@@ -27,8 +27,3 @@
# enable = false
# host = 127.0.0.1
# port = 9001
-
-# [tls]
-# enable = false
-# cert_path = ""
-# pkey_path = ""


@@ -8,6 +8,7 @@ pub mod history;
pub mod record;
pub mod status;
pub mod user;
+pub mod v0;

const VERSION: &str = env!("CARGO_PKG_VERSION");


@@ -1,109 +1,47 @@
-use axum::{extract::Query, extract::State, Json};
+use axum::{extract::State, response::IntoResponse, Json};
use http::StatusCode;
-use metrics::counter;
-use serde::Deserialize;
-use tracing::{error, instrument};
+use serde_json::json;
+use tracing::instrument;

use super::{ErrorResponse, ErrorResponseStatus, RespExt};
use crate::router::{AppState, UserAuth};
use atuin_server_database::Database;

-use atuin_common::record::{EncryptedData, HostId, Record, RecordId, RecordIndex};
+use atuin_common::record::{EncryptedData, Record};

#[instrument(skip_all, fields(user.id = user.id))]
pub async fn post<DB: Database>(
    UserAuth(user): UserAuth,
-    state: State<AppState<DB>>,
-    Json(records): Json<Vec<Record<EncryptedData>>>,
) -> Result<(), ErrorResponseStatus<'static>> {
-    let State(AppState { database, settings }) = state;
-
-    tracing::debug!(
-        count = records.len(),
-        user = user.username,
-        "request to add records"
-    );
-
-    counter!("atuin_record_uploaded", records.len() as u64);
-
-    let too_big = records
-        .iter()
-        .any(|r| r.data.data.len() >= settings.max_record_size || settings.max_record_size == 0);
-
-    if too_big {
-        counter!("atuin_record_too_large", 1);
-
-        return Err(
-            ErrorResponse::reply("could not add records; record too large")
-                .with_status(StatusCode::BAD_REQUEST),
-        );
-    }
-
-    if let Err(e) = database.add_records(&user, &records).await {
-        error!("failed to add record: {}", e);
-
-        return Err(ErrorResponse::reply("failed to add record")
-            .with_status(StatusCode::INTERNAL_SERVER_ERROR));
-    };
-
-    Ok(())
+    // anyone who has actually used the old record store (a very small number) will see this error
+    // upon trying to sync.
+    // 1. The status endpoint will say that the server has nothing
+    // 2. The client will try to upload local records
+    // 3. Sync will fail with this error
+
+    // If the client has no local records, they will see the empty index and do nothing. For the
+    // vast majority of users, this is the case.
+    return Err(
+        ErrorResponse::reply("record store deprecated; please upgrade")
+            .with_status(StatusCode::BAD_REQUEST),
+    );
}

#[instrument(skip_all, fields(user.id = user.id))]
-pub async fn index<DB: Database>(
-    UserAuth(user): UserAuth,
-    state: State<AppState<DB>>,
-) -> Result<Json<RecordIndex>, ErrorResponseStatus<'static>> {
-    let State(AppState {
-        database,
-        settings: _,
-    }) = state;
-
-    let record_index = match database.tail_records(&user).await {
-        Ok(index) => index,
-        Err(e) => {
-            error!("failed to get record index: {}", e);
-
-            return Err(ErrorResponse::reply("failed to calculate record index")
-                .with_status(StatusCode::INTERNAL_SERVER_ERROR));
-        }
-    };
-
-    Ok(Json(record_index))
-}
-
-#[derive(Deserialize)]
-pub struct NextParams {
-    host: HostId,
-    tag: String,
-    start: Option<RecordId>,
-    count: u64,
+pub async fn index<DB: Database>(UserAuth(user): UserAuth) -> axum::response::Response {
+    let ret = json!({
+        "hosts": {}
+    });
+
+    ret.to_string().into_response()
}

#[instrument(skip_all, fields(user.id = user.id))]
pub async fn next<DB: Database>(
-    params: Query<NextParams>,
    UserAuth(user): UserAuth,
-    state: State<AppState<DB>>,
+    _state: State<AppState<DB>>,
) -> Result<Json<Vec<Record<EncryptedData>>>, ErrorResponseStatus<'static>> {
-    let State(AppState {
-        database,
-        settings: _,
-    }) = state;
-
-    let params = params.0;
-
-    let records = match database
-        .next_records(&user, params.host, params.tag, params.start, params.count)
-        .await
-    {
-        Ok(records) => records,
-        Err(e) => {
-            error!("failed to get record index: {}", e);
-
-            return Err(ErrorResponse::reply("failed to calculate record index")
-                .with_status(StatusCode::INTERNAL_SERVER_ERROR));
-        }
-    };
+    let records = Vec::new();

    Ok(Json(records))
}
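
The empty `{"hosts": {}}` index above works because the new protocol derives everything it needs from per-store indices: seeing nothing on the remote, an up-to-date client simply has nothing to download. A hedged sketch of what an idx-based diff could look like (hypothetical `Status`/`Op` types, not the real `atuin_client::record::sync` code, and ignoring how each side learns about brand-new stores' contents):

```rust
use std::collections::HashMap;

// (host, tag) -> highest idx present in that store
type Status = HashMap<(String, String), u64>;

#[derive(Debug, PartialEq)]
enum Op {
    Upload { host: String, tag: String, start: u64 },
    Download { host: String, tag: String, start: u64 },
}

// With the old linked list, working out who is ahead meant walking records
// one parent at a time. With a plain index it's an integer comparison.
fn diff(local: &Status, remote: &Status) -> Vec<Op> {
    let mut ops = Vec::new();

    for ((host, tag), l_idx) in local {
        match remote.get(&(host.clone(), tag.clone())) {
            // remote is behind: upload everything past its head
            Some(r_idx) if r_idx < l_idx => ops.push(Op::Upload {
                host: host.clone(),
                tag: tag.clone(),
                start: r_idx + 1,
            }),
            // remote is ahead: download everything past our head
            Some(r_idx) if r_idx > l_idx => ops.push(Op::Download {
                host: host.clone(),
                tag: tag.clone(),
                start: l_idx + 1,
            }),
            Some(_) => {} // already in sync
            // remote has never seen this store
            None => ops.push(Op::Upload {
                host: host.clone(),
                tag: tag.clone(),
                start: 0,
            }),
        }
    }

    // stores that only exist remotely
    for (host, tag) in remote.keys() {
        if !local.contains_key(&(host.clone(), tag.clone())) {
            ops.push(Op::Download {
                host: host.clone(),
                tag: tag.clone(),
                start: 0,
            });
        }
    }

    ops
}

fn main() {
    let mut local = Status::new();
    let mut remote = Status::new();
    local.insert(("a".into(), "history".into()), 5);
    remote.insert(("a".into(), "history".into()), 2);

    let ops = diff(&local, &remote);
    assert_eq!(
        ops,
        vec![Op::Upload { host: "a".into(), tag: "history".into(), start: 3 }]
    );
}
```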


@@ -0,0 +1 @@
+pub(crate) mod record;


@@ -0,0 +1,111 @@
use axum::{extract::Query, extract::State, Json};
use http::StatusCode;
use metrics::counter;
use serde::Deserialize;
use tracing::{error, instrument};
use crate::{
handlers::{ErrorResponse, ErrorResponseStatus, RespExt},
router::{AppState, UserAuth},
};
use atuin_server_database::Database;
use atuin_common::record::{EncryptedData, HostId, Record, RecordIdx, RecordStatus};
#[instrument(skip_all, fields(user.id = user.id))]
pub async fn post<DB: Database>(
UserAuth(user): UserAuth,
state: State<AppState<DB>>,
Json(records): Json<Vec<Record<EncryptedData>>>,
) -> Result<(), ErrorResponseStatus<'static>> {
let State(AppState { database, settings }) = state;
tracing::debug!(
count = records.len(),
user = user.username,
"request to add records"
);
counter!("atuin_record_uploaded", records.len() as u64);
let too_big = records
.iter()
.any(|r| r.data.data.len() >= settings.max_record_size || settings.max_record_size == 0);
if too_big {
counter!("atuin_record_too_large", 1);
return Err(
ErrorResponse::reply("could not add records; record too large")
.with_status(StatusCode::BAD_REQUEST),
);
}
if let Err(e) = database.add_records(&user, &records).await {
error!("failed to add record: {}", e);
return Err(ErrorResponse::reply("failed to add record")
.with_status(StatusCode::INTERNAL_SERVER_ERROR));
};
Ok(())
}
#[instrument(skip_all, fields(user.id = user.id))]
pub async fn index<DB: Database>(
UserAuth(user): UserAuth,
state: State<AppState<DB>>,
) -> Result<Json<RecordStatus>, ErrorResponseStatus<'static>> {
let State(AppState {
database,
settings: _,
}) = state;
let record_index = match database.status(&user).await {
Ok(index) => index,
Err(e) => {
error!("failed to get record index: {}", e);
return Err(ErrorResponse::reply("failed to calculate record index")
.with_status(StatusCode::INTERNAL_SERVER_ERROR));
}
};
Ok(Json(record_index))
}
#[derive(Deserialize)]
pub struct NextParams {
host: HostId,
tag: String,
start: Option<RecordIdx>,
count: u64,
}
#[instrument(skip_all, fields(user.id = user.id))]
pub async fn next<DB: Database>(
params: Query<NextParams>,
UserAuth(user): UserAuth,
state: State<AppState<DB>>,
) -> Result<Json<Vec<Record<EncryptedData>>>, ErrorResponseStatus<'static>> {
let State(AppState {
database,
settings: _,
}) = state;
let params = params.0;
let records = match database
.next_records(&user, params.host, params.tag, params.start, params.count)
.await
{
Ok(records) => records,
Err(e) => {
error!("failed to get record index: {}", e);
return Err(ErrorResponse::reply("failed to calculate record index")
.with_status(StatusCode::INTERNAL_SERVER_ERROR));
}
};
Ok(Json(records))
}


@@ -1,13 +1,10 @@
#![forbid(unsafe_code)]

-use std::net::SocketAddr;
-use std::sync::Arc;
use std::{future::Future, net::TcpListener};

use atuin_server_database::Database;
use axum::Router;
use axum::Server;
-use axum_server::Handle;
use eyre::{Context, Result};

mod handlers;

@@ -15,7 +12,6 @@ mod metrics;
mod router;
mod utils;

-use rustls::ServerConfig;
pub use settings::example_config;
pub use settings::Settings;
@@ -48,26 +44,27 @@ async fn shutdown_signal() {
pub async fn launch<Db: Database>(
    settings: Settings<Db::Settings>,
-    addr: SocketAddr,
+    host: &str,
+    port: u16,
) -> Result<()> {
-    if settings.tls.enable {
-        launch_with_tls::<Db>(settings, addr, shutdown_signal()).await
-    } else {
-        launch_with_tcp_listener::<Db>(
-            settings,
-            TcpListener::bind(addr).context("could not connect to socket")?,
-            shutdown_signal(),
-        )
-        .await
-    }
+    launch_with_listener::<Db>(
+        settings,
+        TcpListener::bind((host, port)).context("could not connect to socket")?,
+        shutdown_signal(),
+    )
+    .await
}

-pub async fn launch_with_tcp_listener<Db: Database>(
+pub async fn launch_with_listener<Db: Database>(
    settings: Settings<Db::Settings>,
    listener: TcpListener,
    shutdown: impl Future<Output = ()>,
) -> Result<()> {
-    let r = make_router::<Db>(settings).await?;
+    let db = Db::new(&settings.db_settings)
+        .await
+        .wrap_err_with(|| format!("failed to connect to db: {:?}", settings.db_settings))?;
+    let r = router::router(db, settings);

    Server::from_tcp(listener)
        .context("could not launch server")?

@@ -78,40 +75,6 @@ pub async fn launch_with_tcp_listener<Db: Database>(
    Ok(())
}

-async fn launch_with_tls<Db: Database>(
-    settings: Settings<Db::Settings>,
-    addr: SocketAddr,
-    shutdown: impl Future<Output = ()>,
-) -> Result<()> {
-    let certificates = settings.tls.certificates()?;
-    let pkey = settings.tls.private_key()?;
-
-    let server_config = ServerConfig::builder()
-        .with_safe_defaults()
-        .with_no_client_auth()
-        .with_single_cert(certificates, pkey)?;
-
-    let server_config = Arc::new(server_config);
-    let rustls_config = axum_server::tls_rustls::RustlsConfig::from_config(server_config);
-
-    let r = make_router::<Db>(settings).await?;
-    let handle = Handle::new();
-
-    let server = axum_server::bind_rustls(addr, rustls_config)
-        .handle(handle.clone())
-        .serve(r.into_make_service());
-
-    tokio::select! {
-        _ = server => {}
-        _ = shutdown => {
-            handle.graceful_shutdown(None);
-        }
-    }
-
-    Ok(())
-}
-
// The separate listener means it's much easier to ensure metrics are not accidentally exposed to
// the public.
pub async fn launch_metrics_server(host: String, port: u16) -> Result<()> {

@@ -132,13 +95,3 @@ pub async fn launch_metrics_server(host: String, port: u16) -> Result<()> {
    Ok(())
}

-async fn make_router<Db: Database>(
-    settings: Settings<<Db as Database>::Settings>,
-) -> Result<Router, eyre::Error> {
-    let db = Db::new(&settings.db_settings)
-        .await
-        .wrap_err_with(|| format!("failed to connect to db: {:?}", settings.db_settings))?;
-    let r = router::router(db, settings);
-
-    Ok(r)
-}


@@ -1,5 +1,5 @@
use async_trait::async_trait;
-use atuin_common::api::{ErrorResponse, ATUIN_CARGO_VERSION, ATUIN_HEADER_VERSION};
+use atuin_common::api::ErrorResponse;
use axum::{
    extract::FromRequestParts,
    http::Request,

@@ -91,16 +91,6 @@ async fn clacks_overhead<B>(request: Request<B>, next: Next<B>) -> Response {
    response
}

-/// Ensure that we only try and sync with clients on the same major version
-async fn semver<B>(request: Request<B>, next: Next<B>) -> Response {
-    let mut response = next.run(request).await;
-    response
-        .headers_mut()
-        .insert(ATUIN_HEADER_VERSION, ATUIN_CARGO_VERSION.parse().unwrap());
-    response
-}
-
#[derive(Clone)]
pub struct AppState<DB: Database> {
    pub database: DB,

@@ -110,19 +100,22 @@ pub struct AppState<DB: Database> {
pub fn router<DB: Database>(database: DB, settings: Settings<DB::Settings>) -> Router {
    let routes = Router::new()
        .route("/", get(handlers::index))
+        .route("/account", delete(handlers::user::delete))
+        .route("/register", post(handlers::user::register))
+        .route("/login", post(handlers::user::login))
        .route("/sync/count", get(handlers::history::count))
        .route("/sync/history", get(handlers::history::list))
        .route("/sync/calendar/:focus", get(handlers::history::calendar))
        .route("/sync/status", get(handlers::status::status))
        .route("/history", post(handlers::history::add))
        .route("/history", delete(handlers::history::delete))
-        .route("/record", post(handlers::record::post))
-        .route("/record", get(handlers::record::index))
+        .route("/record", post(handlers::record::post::<DB>))
+        .route("/record", get(handlers::record::index::<DB>))
        .route("/record/next", get(handlers::record::next))
        .route("/user/:username", get(handlers::user::get))
-        .route("/account", delete(handlers::user::delete))
-        .route("/register", post(handlers::user::register))
-        .route("/login", post(handlers::user::login));
+        .route("/api/v0/record", post(handlers::v0::record::post))
+        .route("/api/v0/record", get(handlers::v0::record::index))
+        .route("/api/v0/record/next", get(handlers::v0::record::next));

    let path = settings.path.as_str();
    if path.is_empty() {

@@ -136,7 +129,6 @@ pub fn router<DB: Database>(database: DB, settings: Settings<DB::Settings>) -> R
            ServiceBuilder::new()
                .layer(axum::middleware::from_fn(clacks_overhead))
                .layer(TraceLayer::new_for_http())
-                .layer(axum::middleware::from_fn(metrics::track_metrics))
-                .layer(axum::middleware::from_fn(semver)),
+                .layer(axum::middleware::from_fn(metrics::track_metrics)),
    )
}


@@ -1,7 +1,7 @@
use std::{io::prelude::*, path::PathBuf};

use config::{Config, Environment, File as ConfigFile, FileFormat};
-use eyre::{bail, eyre, Context, Result};
+use eyre::{eyre, Result};
use fs_err::{create_dir_all, File};
use serde::{de::DeserializeOwned, Deserialize, Serialize};

@@ -36,7 +36,6 @@ pub struct Settings<DbSettings> {
    pub register_webhook_url: Option<String>,
    pub register_webhook_username: String,
    pub metrics: Metrics,
-    pub tls: Tls,

    #[serde(flatten)]
    pub db_settings: DbSettings,

@@ -68,9 +67,6 @@ impl<DbSettings: DeserializeOwned> Settings<DbSettings> {
            .set_default("metrics.enable", false)?
            .set_default("metrics.host", "127.0.0.1")?
            .set_default("metrics.port", 9001)?
-            .set_default("tls.enable", false)?
-            .set_default("tls.cert_path", "")?
-            .set_default("tls.key_path", "")?
            .add_source(
                Environment::with_prefix("atuin")
                    .prefix_separator("_")

@@ -101,51 +97,3 @@ impl<DbSettings: DeserializeOwned> Settings<DbSettings> {
pub fn example_config() -> &'static str {
    EXAMPLE_CONFIG
}

-#[derive(Clone, Debug, Default, Deserialize, Serialize)]
-pub struct Tls {
-    pub enable: bool,
-    pub cert_path: PathBuf,
-    pub pkey_path: PathBuf,
-}
-
-impl Tls {
-    pub fn certificates(&self) -> Result<Vec<rustls::Certificate>> {
-        let cert_file = std::fs::File::open(&self.cert_path)
-            .with_context(|| format!("tls.cert_path {:?} is missing", self.cert_path))?;
-        let mut reader = std::io::BufReader::new(cert_file);
-        let certs: Vec<_> = rustls_pemfile::certs(&mut reader)
-            .with_context(|| format!("tls.cert_path {:?} is invalid", self.cert_path))?
-            .into_iter()
-            .map(rustls::Certificate)
-            .collect();
-
-        if certs.is_empty() {
-            bail!(
-                "tls.cert_path {:?} must have at least one certificate",
-                self.cert_path
-            );
-        }
-
-        Ok(certs)
-    }
-
-    pub fn private_key(&self) -> Result<rustls::PrivateKey> {
-        let pkey_file = std::fs::File::open(&self.pkey_path)
-            .with_context(|| format!("tls.pkey_path {:?} is missing", self.pkey_path))?;
-        let mut reader = std::io::BufReader::new(pkey_file);
-        let keys = rustls_pemfile::pkcs8_private_keys(&mut reader)
-            .with_context(|| format!("tls.pkey_path {:?} is not PKCS8-encoded", self.pkey_path))?;
-
-        if keys.is_empty() {
-            bail!(
-                "tls.pkey_path {:?} must have at least one private key",
-                self.pkey_path
-            );
-        }
-
-        let key = rustls::PrivateKey(keys[0].clone());
-        Ok(key)
-    }
-}


@@ -82,4 +82,4 @@ features = ["ansi", "fmt", "registry", "env-filter"]
optional = true

[dev-dependencies]
-tracing-tree = "0.3"
+tracing-tree = "0.2"


@@ -16,6 +16,7 @@ mod config;
mod history;
mod import;
mod kv;
+mod record;
mod search;
mod stats;

@@ -46,26 +47,17 @@ pub enum Cmd {
    #[command(subcommand)]
    Kv(kv::Cmd),

+    #[command(subcommand)]
+    Record(record::Cmd),
+
    /// Print example configuration
    #[command()]
    DefaultConfig,
}

impl Cmd {
-    pub fn run(self) -> Result<()> {
-        let runtime = tokio::runtime::Builder::new_current_thread()
-            .enable_all()
-            .build()
-            .unwrap();
-        let res = runtime.block_on(self.run_inner());
-        runtime.shutdown_timeout(std::time::Duration::from_millis(50));
-        res
-    }
-
-    async fn run_inner(self) -> Result<()> {
+    #[tokio::main(flavor = "current_thread")]
+    pub async fn run(self) -> Result<()> {
        Builder::new()
            .filter_level(log::LevelFilter::Off)
            .parse_env("ATUIN_LOG")

@@ -82,7 +74,7 @@ impl Cmd {
        let mut store = SqliteStore::new(record_store_path).await?;

        match self {
-            Self::History(history) => history.run(&settings, &db).await,
+            Self::History(history) => history.run(&settings, &db, store).await,
            Self::Import(import) => import.run(&db).await,
            Self::Stats(stats) => stats.run(&db, &settings).await,
            Self::Search(search) => search.run(db, &mut settings).await,

@@ -95,6 +87,8 @@ impl Cmd {
            Self::Kv(kv) => kv.run(&settings, &mut store).await,

+            Self::Record(record) => record.run(&settings, &store).await,
+
            Self::DefaultConfig => {
                config::run();
                Ok(())


@@ -12,7 +12,9 @@ use runtime_format::{FormatKey, FormatKeyError, ParseSegment, ParsedFmt};
use atuin_client::{
    database::{current_context, Database},
-    history::History,
+    encryption,
+    history::{store::HistoryStore, History},
+    record::sqlite_store::SqliteStore,
    settings::Settings,
};

@@ -264,11 +266,13 @@
        // we use this as the key for calling end
        println!("{}", h.id);

        db.save(&h).await?;

        Ok(())
    }

    async fn handle_end(
        db: &impl Database,
+        store: HistoryStore,
        settings: &Settings,
        id: &str,
        exit: i64,

@@ -294,6 +298,7 @@
            .context("command took over 292 years")?;

        db.update(&h).await?;
+        store.push(&h).await?;

        if settings.should_sync()? {
            #[cfg(feature = "sync")]

@@ -361,12 +366,26 @@
        Ok(())
    }

-    pub async fn run(self, settings: &Settings, db: &impl Database) -> Result<()> {
+    pub async fn run(
+        self,
+        settings: &Settings,
+        db: &impl Database,
+        store: SqliteStore,
+    ) -> Result<()> {
        let context = current_context();

+        let encryption_key: [u8; 32] = encryption::load_key(settings)
+            .context("could not load encryption key")?
+            .into();
+
+        let host_id = Settings::host_id().expect("failed to get host_id");
+        let history_store = HistoryStore::new(store, host_id, encryption_key);
+
        match self {
            Self::Start { command } => Self::handle_start(db, settings, &command).await,
-            Self::End { id, exit } => Self::handle_end(db, settings, &id, exit).await,
+            Self::End { id, exit } => {
+                Self::handle_end(db, history_store, settings, &id, exit).await
+            }
            Self::List {
                session,
                cwd,


@@ -0,0 +1,63 @@
use clap::Subcommand;
use eyre::Result;
use atuin_client::{record::store::Store, settings::Settings};
use time::OffsetDateTime;
#[derive(Subcommand, Debug)]
#[command(infer_subcommands = true)]
pub enum Cmd {
Status,
}
impl Cmd {
pub async fn run(
&self,
_settings: &Settings,
store: &(impl Store + Send + Sync),
) -> Result<()> {
let host_id = Settings::host_id().expect("failed to get host_id");
let status = store.status().await?;
// TODO: should probs build some data structure and then pretty-print it or smth
for (host, st) in &status.hosts {
let host_string = if host == &host_id {
format!("host: {} <- CURRENT HOST", host.0.as_hyphenated())
} else {
format!("host: {}", host.0.as_hyphenated())
};
println!("{host_string}");
for (tag, idx) in st {
println!("\tstore: {tag}");
let first = store.first(*host, tag).await?;
let last = store.last(*host, tag).await?;
println!("\t\tidx: {idx}");
if let Some(first) = first {
println!("\t\tfirst: {}", first.id.0.as_hyphenated());
let time =
OffsetDateTime::from_unix_timestamp_nanos(i128::from(first.timestamp))?;
println!("\t\t\tcreated: {time}");
}
if let Some(last) = last {
println!("\t\tlast: {}", last.id.0.as_hyphenated());
let time =
OffsetDateTime::from_unix_timestamp_nanos(i128::from(last.timestamp))?;
println!("\t\t\tcreated: {time}");
}
}
println!();
}
Ok(())
}
}
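
For illustration, the tree this status command prints can be modeled over plain maps. This is a hypothetical `Hosts` type and `render` helper, not the real `RecordStatus`, and it assumes idx is the zero-based index of the newest record, so a store whose head is at idx n holds n + 1 records:

```rust
use std::collections::BTreeMap;

// host id -> (store tag -> highest idx in that store)
type Hosts = BTreeMap<String, BTreeMap<String, u64>>;

fn render(hosts: &Hosts, current: &str) -> String {
    let mut out = String::new();
    for (host, stores) in hosts {
        if host == current {
            out.push_str(&format!("host: {host} <- CURRENT HOST\n"));
        } else {
            out.push_str(&format!("host: {host}\n"));
        }
        for (tag, idx) in stores {
            out.push_str(&format!("\tstore: {tag}\n"));
            // idx is the highest zero-based index, so the store holds idx + 1 records
            out.push_str(&format!("\t\tidx: {idx} ({} records)\n", idx + 1));
        }
    }
    out
}

fn main() {
    let mut hosts = Hosts::new();
    hosts
        .entry("host-a".to_string())
        .or_default()
        .insert("history".to_string(), 41);

    let out = render(&hosts, "host-a");
    assert!(out.contains("host: host-a <- CURRENT HOST"));
    assert!(out.contains("idx: 41 (42 records)"));
}
```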


@@ -104,18 +104,6 @@ pub struct Cmd {
impl Cmd {
    pub async fn run(self, db: impl Database, settings: &mut Settings) -> Result<()> {
-        if (self.delete_it_all || self.delete) && self.limit.is_some() {
-            // Because of how deletion is implemented, it will always delete all matches
-            // and disregard the limit option. It is also not clear what deletion with a
-            // limit would even mean. Deleting the LIMIT most recent entries that match
-            // the search query would make sense, but that wouldn't match what's displayed
-            // when running the equivalent search, but deleting those entries that are
-            // displayed with the search would leave any duplicates of those lines which may
-            // or may not have been intended to be deleted.
-            println!("\"--limit\" is not compatible with deletion.");
-            return Ok(());
-        }
-
        if self.delete && self.query.is_empty() {
            println!("Please specify a query to match the items you wish to delete. If you wish to delete all history, pass --delete-it-all");
            return Ok(());


@@ -71,19 +71,35 @@ impl Cmd {
    }
}

+async fn run_new_sync(
+    settings: &Settings,
+    store: &mut (impl Store + Send + Sync),
+) -> Result<(i64, i64)> {
+    let (diff, _) = sync::diff(settings, store).await?;
+    let operations = sync::operations(diff, store).await?;
+    let res = sync::sync_remote(operations, store, settings).await?;
+
+    Ok(res)
+}
+
async fn run(
    settings: &Settings,
    force: bool,
    db: &impl Database,
    store: &mut (impl Store + Send + Sync),
) -> Result<()> {
-    let (diff, remote_index) = sync::diff(settings, store).await?;
-    let operations = sync::operations(diff, store).await?;
-    let (uploaded, downloaded) =
-        sync::sync_remote(operations, &remote_index, store, settings).await?;
-
-    println!("{uploaded}/{downloaded} up/down to record store");
+    println!("Running record store sync...");
+    let res = run_new_sync(settings, store).await;
+
+    if let Ok((uploaded, downloaded)) = res {
+        println!("{uploaded}/{downloaded} up/down to record store");
+    } else {
+        println!("Error while contacting record store: {res:?}\n");
+        println!("Please ensure client + server are both up to date\n");
+    }
+
+    println!("Running old history sync...");
    atuin_client::sync::sync(settings, force, db).await?;

    println!(
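
The control-flow change here matters more than the plumbing: the old code propagated any record-store error with `?` and aborted the whole sync, while the new code treats record sync as best-effort and still runs the legacy history sync. A minimal model of that fallback, using a hypothetical `sync_all` that returns its output lines so the behaviour is easy to inspect:

```rust
// `run_new` stands in for the fallible record-store sync; the real code
// calls the server and can fail on version mismatches or network errors.
fn sync_all(run_new: impl Fn() -> Result<(i64, i64), String>) -> Vec<String> {
    let mut log = Vec::new();

    log.push("Running record store sync...".to_string());
    match run_new() {
        Ok((up, down)) => log.push(format!("{up}/{down} up/down to record store")),
        Err(e) => {
            // report, but do not abort
            log.push(format!("Error while contacting record store: {e}"));
            log.push("Please ensure client + server are both up to date".to_string());
        }
    }

    // the legacy history sync still runs either way
    log.push("Running old history sync...".to_string());
    log
}

fn main() {
    let ok = sync_all(|| Ok((3, 4)));
    assert_eq!(ok[1], "3/4 up/down to record store");

    let err = sync_all(|| Err("version mismatch".to_string()));
    assert!(err.iter().any(|l| l.contains("version mismatch")));
    assert_eq!(err.last().unwrap(), "Running old history sync...");
}
```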


@@ -1,5 +1,3 @@
-use std::net::SocketAddr;
-
use atuin_server_postgres::Postgres;
use tracing_subscriber::{fmt, prelude::*, EnvFilter};

@@ -41,7 +39,6 @@ impl Cmd {
                let settings = Settings::new().wrap_err("could not load server settings")?;
                let host = host.as_ref().unwrap_or(&settings.host).clone();
                let port = port.unwrap_or(settings.port);
-                let addr = SocketAddr::new(host.parse()?, port);

                if settings.metrics.enable {
                    tokio::spawn(launch_metrics_server(

@@ -50,7 +47,7 @@ impl Cmd {
                    ));
                }

-                launch::<Postgres>(settings, addr).await
+                launch::<Postgres>(settings, &host, port).await
            }
            Self::DefaultConfig => {
                println!("{}", example_config());


@@ -0,0 +1,37 @@
use std::time::Duration;
use atuin_client::{history::History, settings::Settings};
use crossterm::event::KeyEvent;
use ratatui::{
buffer::Buffer,
layout::Rect,
prelude::{Alignment, Backend, Constraint, Direction, Layout},
style::{Color, Modifier, Style, Styled},
text::{Span, Text},
widgets::{Block, Paragraph, StatefulWidget, Widget},
Frame,
};
use time::OffsetDateTime;
use crate::utils::duration::format_duration;
use super::search::State;
pub fn draw_inspector(f: &mut Frame, chunk: Rect, history: &History) {
let layout = Layout::new()
.direction(Direction::Vertical)
.constraints([Constraint::Ratio(1, 4)])
.split(chunk);
let command = Paragraph::new(Text::from(Span::styled(
history.command.as_str(),
Style::default(),
)))
.alignment(Alignment::Center);
f.render_widget(command, layout[0]);
}
// I'm going to break this out more, but just starting to move things around before changing
// structure and making it nicer.
pub fn inspector_input(state: &mut State, settings: &Settings, input: &KeyEvent) {}


@@ -90,6 +90,11 @@ __atuin_history() {
        then
            HISTORY=${HISTORY#__atuin_accept__:}

+            # Reprint the prompt, accounting for multiple lines
+            # shellcheck disable=SC2046
+            tput cuu $(echo -n "${PS1@P}" | tr -cd '\n' | wc -c)
+            echo "${PS1@P}$HISTORY"
+
            if [[ -n "${BLE_ATTACHED-}" ]]; then
                ble-edit/content/reset-and-check-dirty "$HISTORY"
                ble/widget/accept-line


@@ -2,7 +2,7 @@ use std::{env, net::TcpListener, time::Duration};
 use atuin_client::api_client;
 use atuin_common::{api::AddHistoryRequest, utils::uuid_v7};
-use atuin_server::{launch_with_tcp_listener, Settings as ServerSettings};
+use atuin_server::{launch_with_listener, Settings as ServerSettings};
 use atuin_server_postgres::{Postgres, PostgresSettings};
 use futures_util::TryFutureExt;
 use time::OffsetDateTime;
@@ -38,7 +38,6 @@ async fn start_server(path: &str) -> (String, oneshot::Sender<()>, JoinHandle<()
         register_webhook_username: String::new(),
         db_settings: PostgresSettings { db_uri },
         metrics: atuin_server::settings::Metrics::default(),
-        tls: atuin_server::settings::Tls::default(),
     };

     let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel();
@@ -47,7 +46,7 @@ async fn start_server(path: &str) -> (String, oneshot::Sender<()>, JoinHandle<()
     let server = tokio::spawn(async move {
         let _tracing_guard = dispatcher::set_default(&dispatch);

-        if let Err(e) = launch_with_tcp_listener::<Postgres>(
+        if let Err(e) = launch_with_listener::<Postgres>(
             server_settings,
             listener,
             shutdown_rx.unwrap_or_else(|_| ()),
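This test file imports `std::net::TcpListener` and hands a listener to the server. A common pattern for this kind of test harness (an assumption about the setup, since the bind itself is outside the hunk) is binding to port 0 so the OS assigns a free ephemeral port and parallel tests never collide:

```rust
use std::net::TcpListener;

fn main() {
    // Port 0 asks the OS for any free ephemeral port; the assigned port is
    // then read back from the listener before handing it to the server.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind failed");
    let port = listener.local_addr().unwrap().port();
    assert_ne!(port, 0);
    println!("listening on 127.0.0.1:{port}");
}
```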


@@ -66,16 +66,6 @@ pacman -S atuin
 And then follow [the shell setup](#shell-plugin)
-
-### Xbps
-
-Atuin is available in the Void Linux [repository](https://github.com/void-linux/void-packages/tree/master/srcpkgs/atuin):
-
-```
-sudo xbps-install atuin
-```
-
-And then follow [the shell setup](#shell-plugin)
 ### Termux
 Atuin is available in the Termux package repository:


@@ -31,7 +31,7 @@ Please do try and read this guide, but if you're in a hurry and want to get
 started quickly:

 ```
-bash <(curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh)
+bash <(curl https://raw.githubusercontent.com/ellie/atuin/main/install.sh)
 atuin register -u <USERNAME> -e <EMAIL>
 atuin import auto
@@ -53,20 +53,10 @@ bindings](https://atuin.sh/docs/key-binding#disable-up-arrow)
 Let's get started! First up, you will want to install Atuin. We have an install
 script which handles most of the commonly used platforms and package managers:

-## bash/zsh
-
 ```
-bash <(curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh)
+bash <(curl https://raw.githubusercontent.com/ellie/atuin/main/install.sh)
 ```
-
-## fish
-
-```
-bash (curl --proto '=https' --tlsv1.2 -sSf https://setup.atuin.sh | psub)
-```
-
-## Importing

 The script will install the binary and attempt to configure your shell. Atuin
 uses a shell plugin to ensure that we capture new shell history. But for older
 history, you will need to import it


@@ -39,14 +39,3 @@ ATUIN_DB_URI="postgres://user:password@hostname/database"
 | `db_uri` | A valid PostgreSQL URI, for saving history (default: false) |
 | `path` | A path to prepend to all routes of the server (default: false) |
-
-### TLS
-
-The server supports TLS through the `[tls]` section:
-
-```toml
-[tls]
-enabled = true
-cert_path = "/path/to/letsencrypt/live/fully.qualified.domain/fullchain.pem"
-pkey_path = "/path/to/letsencrypt/live/fully.qualified.domain/privkey.pem"
-```


@@ -183,7 +183,7 @@ esac
 # TODO: Check which shell is in use
 # Use of single quotes around $() is intentional here
 # shellcheck disable=SC2016
-if ! grep -q "atuin init zsh" "${ZDOTDIR:-$HOME}/.zshrc"; then
+if ! grep -q "atuin init zsh" ~/.zshrc; then
     printf '\neval "$(atuin init zsh)"\n' >> "${ZDOTDIR:-$HOME}/.zshrc"
 fi