So, to tell a story:
1. We introduced the record sync, intended to be the new algorithm for
syncing history.
2. On top of this, I added the KV store. This was intended as a simple
test of the record sync, and as a way to see whether people wanted that
sort of functionality.
3. History kept syncing via the old mechanism; while we are aware of
its flaws, it worked more or less OK.
4. If KV syncing worked well, history would be moved across.
KV syncing ran fine for six months or so, so I started to move history
across. For several weeks, I ran a local fork of Atuin + the server
that synced via records instead.
The record store maintained ordering via a linked list, which was a
mistake. It performed well in testing, but was really difficult to
debug and reason about. So when a few small sync issues occurred, they
took an extremely long time to track down.
This PR is huge, which I regret. It replaces the "parent" relationship
that records once had (pointing to the previous record) with a simple
index (generally referred to as `idx`). This also meant changing the
record index, which referenced "tails" - the last item in each chain.
Now that we use an "array" rather than a linked list, that logic has
been replaced too, and it is much simpler :D
Same for the queries that act on this data.
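To sketch the shape of the change (illustrative names, not the exact
Atuin definitions):

```rust
/// Before: ordering came from a linked list. Each record pointed at
/// its parent, and the record index tracked the "tail" of each chain.
#[allow(dead_code)]
struct OldRecord {
    id: String,
    parent: Option<String>, // id of the previous record; None for the head
    data: Vec<u8>,
}

/// After: each record carries a plain incrementing index (`idx`), so
/// ordering is explicit and the "tail" is simply the highest idx.
struct NewRecord {
    id: String,
    idx: u64,
    data: Vec<u8>,
}

fn main() {
    let records = vec![
        NewRecord { id: "b".into(), idx: 1, data: vec![] },
        NewRecord { id: "a".into(), idx: 0, data: vec![] },
    ];
    // Finding the tail no longer means walking parent pointers:
    let tail = records.iter().max_by_key(|r| r.idx).unwrap();
    println!("tail is {} at idx {}", tail.id, tail.idx);
}
```

The queries change the same way: ordering becomes a simple sort on
`idx` rather than a recursive walk along the chain.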
----
This isn't final - we still need to add:
1. Proper server/client error handling, which has been lacking for a
while.
2. The actual history implementation on top. This already exists in a
branch, just without deletions. It won't take much to add that; I just
don't want to make this PR any larger than it already is.
The _only_ caveat here is that we basically lose data synced via the
old record store - that is, the KV data from before. It hasn't been
deleted or anything, just no longer hooked up, so it's totally possible
to write a migration script. I just need to do that.
My self-hosted runner is much faster, but my house was hit by lightning
and the internet is down. I'm a few thousand miles away at the moment,
so I won't be able to sort it out for a while.
Tests broken by _nature_:
* some simple server tests
* fmt
* logging in server test
* log server errors
* fix postgres uri
* postgres ports
* localhost again?
* Rebase fixes
---------
Co-authored-by: Ellie Huxtable <ellie@elliehuxtable.com>
Docker builds are failing on ARM; let's see if they work on AMD.
It might be worth waiting until we have less C that needs building.
This reverts commit f3894ef9c0.
Not entirely sure how to test this workflow.
Why weren't ARM64 builds being made anyway? The workflow literally has cases to handle it!
Fixes https://github.com/ellie/atuin/issues/369
This PR updates the GH Actions docker workflow to cross-build the
docker image for ARM using QEMU.
This theoretically works, but a clean build takes about 2.5 hours,
which may or may not be tolerable for each commit on `main`.
An action run where the dependencies are already cached needs about 15
minutes with this PR.
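For reference, the typical QEMU cross-build setup in a GitHub Actions
workflow looks roughly like this - a sketch, not the exact workflow
file in this PR, and the image tag is a placeholder:

```yaml
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Register QEMU so the amd64 runner can emulate arm64
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/example/atuin:latest # placeholder tag
```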
* Add history filters, and hotkey toggle
Switch between different search modes to narrow down the history you
want - global search for all history, host for all history from your
current machine, session for the current shell session, and directory
for the current directory. The default can be configured via
`filter_mode`; see the config sketch after this list.
* Update docs
* Add context
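For reference, the default filter can be set in the client config - a
sketch, assuming the standard `~/.config/atuin/config.toml` location
and the mode names listed above:

```toml
# Default search filter for the history UI.
# One of: "global", "host", "session", "directory"
filter_mode = "session"
```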