* chore: switch to cargo dist for releases
From https://axo.dev
cargo-dist handles building releases far better than we can, and already
does so for several large projects.
We will need to change our install script to use the cargo-dist
installer.
Historically, we have used the system package manager wherever possible.
Once we switch to the new installer, this will no longer be the case. If
users wish to use their package manager, and Atuin is maintained there,
they can still choose to do so.
This way, we can ensure that users are running a known build, can easily
uninstall (just delete the atuin dir), easily update, etc. Builds will
use our lockfile, and can have their checksums verified. Later, I'd like
to introduce build signing.
As Axo are focused on release engineering, they will likely have
resolved many more issues than we have - libc versions, etc.
I'm not particularly happy with our response of "just use your package
manager", as many users seem to have difficulty there. It's unclear what
our installer has done, as this behaviour varies massively across
systems. It's also unclear how some package maintainers may have patched
things.
I'm hoping that some better release tooling will lead to more confidence
in the process, and therefore more frequent releases.
Uninstall clarity: #111, #372, #640, #1485, #1546, #2049, #1529
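For reference, a minimal sketch of what the cargo-dist configuration might
look like in the workspace Cargo.toml. The version, installer list, and
targets shown here are placeholders for illustration, not final choices:

```toml
# Hypothetical cargo-dist settings in the workspace Cargo.toml.
# Values are illustrative; the real config would pin whatever version
# and target list we settle on.
[workspace.metadata.dist]
cargo-dist-version = "0.11.1"
# Generate a shell installer that our install script can delegate to
installers = ["shell"]
targets = [
    "x86_64-unknown-linux-gnu",
    "aarch64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "aarch64-apple-darwin",
]
# Let cargo-dist generate the GitHub Actions release workflow
ci = ["github"]
```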
* config
* add protobuf
* test build
* use native arm mac
* lol
* add toolchain
* use 1.78, 2vcpu
* nix flake update
* 1.77
This avoids issues with clients attempting to connect to the daemon while
it's starting: systemd creates the socket early and will queue connections
until the daemon is ready to accept them.
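As an illustration of that socket-activation setup, here is a sketch of a
user-level socket unit. The unit name and socket path are assumptions made
for the example, not what actually ships:

```ini
# atuin.socket -- hypothetical systemd user socket unit.
# systemd creates the socket at login and queues any client connections
# until the matching service (the daemon) is started and accepts them.
[Unit]
Description=Atuin daemon socket

[Socket]
ListenStream=%t/atuin.sock

[Install]
WantedBy=sockets.target
```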
* init daemon crate
* wip
* minimal functioning daemon, needs cleanup for sure
* better errors
* add signal cleanup
* logging
* things
* add sync worker
* move daemon crate
* 30s -> 5mins
* make clippy happy
* fix stuff maybe?
* fmt
* trim packages
* rate limit fix
* more protoc huh
* this makes no sense, why linux why
* can it install literally just curl
* windows in ci is slow, and all the newer things will not work there. disable the daemon feature and it will build
* add daemon feature
* maybe this
* ok wut where is protoc
* try setting protoc
* hm
* try copying protoc
* remove optional
* add cross config
* idk nix
* does nix want this?
* some random pkg I found does this
* uh oh
* hack, be gone!
* update contributing
* fix(dotfiles): allow clearing aliases, disable import
At the moment there are far too many edge cases to handle when importing
aliases.
1. We need an interactive shell to print aliases. Without it, most
shells won't report much.
2. Many people have their shells print things on startup (graphics,
fortunes, etc). This could be detected as an attempt to set an alias.
Rather than spend the next year finding import edge cases, I'm
disabling it for now. There's probably a better way we can do this?
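For context, the fragile part looks roughly like the sketch below: spawn an
interactive shell, run `alias`, and parse each `name=value` line. Anything a
user's rc files print at startup lands in the same stream and can be misread
as an alias. This is a simplified illustration, not the actual implementation:

```rust
use std::process::Command;

// Sketch: ask an interactive zsh for its aliases and parse the output.
// Any banner or fortune printed by the user's rc files ends up in stdout
// too, and any such line containing '=' gets misparsed as an alias.
fn list_aliases() -> std::io::Result<Vec<(String, String)>> {
    let out = Command::new("zsh").args(["-ic", "alias"]).output()?;
    let stdout = String::from_utf8_lossy(&out.stdout);

    Ok(stdout
        .lines()
        .filter_map(|line| {
            let (name, value) = line.split_once('=')?;
            Some((name.to_string(), value.trim_matches('\'').to_string()))
        })
        .collect())
}
```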
* clippy
* initial
* ui things
* cargo
* update, add history refresh button
* history page a bit better, add initial dotfiles page
* re-org layout
* bye squigglies
* add dotfiles ui, show aliases
* add default shell detection
* put stats in a little drawer, alias import changes
* use new table for aliases, add alias deleting
* support adding aliases
* close drawer when added, no alias autocomplete
* clippy, format
* attempt to ensure gdk is installed ok
* sudo
* no linux things on mac ffs
* I forgot we build for windows too... end of day
* remove tauri backend from workspace
* feat: support syncing aliases
This is definitely not yet finished, but works for zsh right now.
TODO:
1. Support other shells
2. Cache the alias generation, so we don't have to do a bunch of work at
shell init time
* correct imports
* fix clippy errors
* fix tests
* add the other shells
* support xonsh
* add delete
* update rust, then make clippy happy once more
* omfg fmt too
* feat: add record re-encrypting
* automatically re-encrypt store when logging in with a different key
* fix
* actually save the new key lmao
* add rekey
* save new key
* decode bip key
* "add test for sqlite store re encrypt"
* feat: rework record sync for improved reliability
So, to tell a story
1. We introduced the record sync, intended to be the new algorithm to
sync history.
2. On top of this, I added the KV store. This was intended as a simple
test of the record sync, and to see if people wanted that sort of
functionality
3. History remained synced via the old means; while it had issues, it
worked more or less OK, and we are aware of its flaws
4. If KV syncing worked OK, history would be moved across
KV syncing ran fine for six months or so, so I started to move history across.
For several weeks, I ran a local fork of Atuin + the server that synced
via records instead.
The record store maintained ordering via a linked list, which was a
mistake. It performed well in testing, but was really difficult to debug
and reason about. So when a few small sync issues occurred, they took an
extremely long time to debug.
This PR is huge, which I regret. It involves replacing the "parent"
relationship that records once had (pointing to the previous record)
with a simple index (generally referred to as idx). This also means we
had to change the record index, which referenced "tails" - the last item
in each chain.
Now that we use an "array" rather than a linked list, that logic has also
been replaced, and it is much simpler :D
Same for the queries that act on this data.
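Roughly, the shape of the change is sketched below. Field and type names are
simplified for illustration; the real record types carry more fields (host,
tag, version, data) and use UUID newtypes for ids:

```rust
// Simplified sketch of the data model change described above.
type RecordId = String;

// Before: each record pointed at its parent, forming a linked list per
// (host, tag) store. Finding the end meant tracking "tails", and gaps
// were hard to reason about.
struct OldRecord {
    id: RecordId,
    parent: Option<RecordId>, // previous record in the chain
}

// After: each record carries a plain index into its (host, tag) store, so
// the store behaves like an append-only array. The latest record is simply
// the one with the highest idx, and "how far behind am I" is integer maths.
struct NewRecord {
    id: RecordId,
    idx: u64, // 0-based, contiguous position in the store
}
```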
----
This isn't final - we still need to add
1. Proper server/client error handling, which has been lacking for a
while
2. The actual history implementation on top
This exists in a branch, just without deletions. It won't take much to
add that; I just don't want to make this any larger than it already
is.
The _only_ caveat here is that we basically lose data synced via the old
record store. This is the KV data from before.
It hasn't been deleted or anything, just no longer hooked up. So it's
totally possible to write a migration script. I just need to do that.
* update .gitignore
* use correct endpoint
* fix for stores with length of 1
* use create/delete enum for history store
* lint, remove unneeded host_id
* remove prints
* add command to import old history
* add enable/disable switch for record sync
* add record sync to auto sync
* satisfy the almighty clippy
* remove file that I did not mean to commit
* feedback
* Add record migration
* Add database functions for inserting history
No real tests yet :( I would like to avoid running postgres lol
* Add index handler, use UUIDs not strings
* Fix a bunch of tests, remove Option<Uuid>
* Add tests, all passing
* Working upload sync
* Record downloading works
* Sync download works
* Don't waste requests
* Use a page size for uploads, make it variable later
* Aaaaaand they're encrypted now too
* Add cek
* Allow reading tail across hosts
* Revert "Allow reading tail across hosts"
Not like that
This reverts commit 7b0c72e7e0.
* Handle multiple shards properly
* format
* Format and make clippy happy
* use some fancy types (#1098)
* use some fancy types
* fmt
* Goodbye horrible tuple
* Update atuin-server-postgres/migrations/20230623070418_records.sql
Co-authored-by: Conrad Ludgate <conradludgate@gmail.com>
* fmt
* Sort tests too because time sucks
* fix features
---------
Co-authored-by: Conrad Ludgate <conradludgate@gmail.com>
* wip
* Start testing
* Store host IDs, not hostnames
Why? Hostnames can change a lot, and therefore host filtering can be
funky. Really, all we want is a unique ID per machine; we don't care what
it might be.
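A rough sketch of the idea, with illustrative names that may not match the
real type definitions:

```rust
use uuid::Uuid;

// A stable, opaque per-machine identifier: generated once, persisted
// locally, and never derived from the hostname, so renaming a machine
// doesn't change how its history is filtered.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct HostId(Uuid);

impl HostId {
    fn generate() -> Self {
        HostId(Uuid::new_v4())
    }
}
```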
* Mostly just write a fuckload of tests
* Add a v0 kv store I can push to
* Appending works
* Add next() and iterate, test the pointer chain
* Fix sig
* Make clippy happy and thaw the ICE
* Fix tests'
* Fix tests
* typed builder and cleaner db trait
---------
Co-authored-by: Conrad Ludgate <conrad.ludgate@truelayer.com>