* Add change-password command & support on server
* Add a test for password change
* review: run format
---------
Co-authored-by: Ellie Huxtable <ellie@elliehuxtable.com>
The columns referred to in this PR were for some reason created with
defaults. When created years ago, they were `bigserial`, not `bigint`.
The defaults were never actually used, as verified by:
1. Checking the value of the sequences on the database
2. Checking the code
So we're safe to clean them up.
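For illustration only, this kind of cleanup in Postgres comes down to
dropping the unused `nextval()` default and its backing sequence. The
table, column, and sequence names below are placeholders, not the exact
objects touched by this PR:

```sql
-- Confirm the sequence was never used: last_value stays at 1 and
-- is_called stays false if no row ever pulled a value from it.
SELECT last_value, is_called FROM history_id_seq;

-- Drop the default left over from the old bigserial definition,
-- then remove the now-orphaned sequence.
ALTER TABLE history ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE IF EXISTS history_id_seq;
```

Whether the sequences themselves were dropped is a detail of the actual
migration; the statements above just show the shape of the change.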
* feat: rework record sync for improved reliability
So, to tell a story
1. We introduced the record sync, intended to be the new algorithm to
sync history.
2. On top of this, I added the KV store. This was intended as a simple
test of the record sync, and to see if people wanted that sort of
functionality
3. History remained syncing via the old means; while it had issues, it
worked more-or-less OK, and we were aware of its flaws
4. If KV syncing worked ok, history would be moved across
KV syncing ran OK for 6 months or so, so I started to move history across.
For several weeks, I ran a local fork of Atuin + the server that synced
via records instead.
The record store maintained ordering via a linked list, which was a
mistake. It performed well in testing, but was really difficult to debug
and reason about. So when a few small sync issues occurred, they took an
extremely long time to debug.
This PR is huge, which I regret. It involves replacing the "parent"
relationship that records once had (pointing to the previous record)
with a simple index (generally referred to as idx). This also means we
had to change the record index, which referenced "tails" - the last
items in each chain.
Now that we use an "array" rather than a linked list, that logic was
also replaced. And it is much simpler :D
Same for the queries that act on this data.
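As a rough sketch of what the "array" model means for the server-side
queries (the table and parameter names here are illustrative, not the
exact schema from this PR): instead of walking parent pointers from a
tail, a client asks for everything in a (host, tag) store after the last
idx it has seen, and the old "tail" lookup collapses into a max.

```sql
-- Records in one store are addressed by (host_id, tag, idx), where idx
-- is a plain incrementing position rather than a pointer to a parent.
SELECT id, idx, version, data
FROM records
WHERE host_id = $1
  AND tag = $2
  AND idx > $3          -- last index the client already has
ORDER BY idx ASC
LIMIT $4;               -- page size for the download

-- The old "tail" lookup becomes a simple max over the same key.
SELECT MAX(idx) FROM records WHERE host_id = $1 AND tag = $2;
```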
----
This isn't final - we still need to add
1. Proper server/client error handling, which has been lacking for a
while
2. The actual history implementation on top
This exists in a branch, just without deletions. It won't take much to
add that; I just don't want to make this PR any larger than it already
is.
The _only_ caveat here is that we basically lose data synced via the old
record store. This is the KV data from before.
It hasn't been deleted or anything, just no longer hooked up. So it's
totally possible to write a migration script. I just need to do that.
* update .gitignore
* use correct endpoint
* fix for stores with length of 1
* use create/delete enum for history store
* lint, remove unneeded host_id
* remove prints
* add command to import old history
* add enable/disable switch for record sync
* add record sync to auto sync
* satisfy the almighty clippy
* remove file that I did not mean to commit
* feedback
* Add record migration
* Add database functions for inserting history
No real tests yet :( I would like to avoid running postgres lol
* Add index handler, use UUIDs not strings
* Fix a bunch of tests, remove Option<Uuid>
* Add tests, all passing
* Working upload sync
* Record downloading works
* Sync download works
* Don't waste requests
* Use a page size for uploads, make it variable later
* Aaaaaand they're encrypted now too
* Add cek
* Allow reading tail across hosts
* Revert "Allow reading tail across hosts"
Not like that
This reverts commit 7b0c72e7e0.
* Handle multiple shards properly
* format
* Format and make clippy happy
* use some fancy types (#1098)
* use some fancy types
* fmt
* Goodbye horrible tuple
* Update atuin-server-postgres/migrations/20230623070418_records.sql
Co-authored-by: Conrad Ludgate <conradludgate@gmail.com>
* fmt
* Sort tests too because time sucks
* fix features
---------
Co-authored-by: Conrad Ludgate <conradludgate@gmail.com>