zrepl/daemon
Christian Schwarz aeb87ffbcf daemon/job/active: push mode: awful hack for handling of concurrent snapshots + stale remote operation
We have the problem that there are legitimate use cases where a user
does not want their machine to fill up with snapshots, even if it means
that unreplicated snapshots must be destroyed.  This can be expressed by
*not* configuring the keep rule `not_replicated` for the
snapshot-creating side.  This commit only addresses push mode because
we don't support pruning in the source job. We advise users in the docs
to use push mode if they have the above use case, so this is fine - at
least for 0.1.

Ideally, the replication.Replication would communicate to the pruner
which snapshots are currently part of the replication plan, and then
we'd need some conflict resolution to determine whether it's more
important to destroy the snapshots or to replicate them (destroy should
win?).
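
To make the "ideal" variant a bit more concrete, it could look roughly
like the sketch below. All names here (ReplicationPlan,
PlannedSnapshots, shouldKeep, destroyWins) are made up for illustration;
nothing like this exists in zrepl yet:

    // Hypothetical sketch only: none of these names exist in zrepl.
    package sketch

    // ReplicationPlan would be implemented by the replication driver and
    // consulted by the pruner before destroying snapshots.
    type ReplicationPlan interface {
        // PlannedSnapshots returns, per filesystem, the snapshot names
        // that are still part of the current replication plan.
        PlannedSnapshots() map[string][]string
    }

    // shouldKeep is what a simple conflict resolution could look like:
    // a snapshot that the plan still needs is kept, unless a
    // "destroy wins" policy is in effect.
    func shouldKeep(plan ReplicationPlan, fs, snap string, destroyWins bool) bool {
        if destroyWins {
            return false // pruning takes precedence, snapshot may be destroyed
        }
        for _, s := range plan.PlannedSnapshots()[fs] {
            if s == snap {
                return true // still part of the plan, keep it
            }
        }
        return false
    }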

However, we don't have the infrastructure for this yet (we could parse
the replication report, but that's just ugly).  And we want to get 0.1
out, so showtime for a dirty hack:

We start replication, and ideally, replication and pruning are done
before new snapshots have been taken. If so: great. However, what
happens if snapshots have been taken and we are not done with
replication and/or pruning? (A rough code sketch of the resulting
decision logic follows further below.)

* If replication is making progress according to its state, let it run.
This covers the *important* situation of initial replication, where
replication may easily take longer than a single snapshotting interval.

* If replication is in an error state, cancel it through context
cancellation.
    * As with the pruner below, the main problem here is that
      status output will only contain "context cancelled" after the
      cancellation, instead of showing the reason why it was cancelled
      (see the small example after this list).
      Not nice, but oh well, the logs provide enough detail for this
      niche situation...

* If we are past replication, we are still pruning:

* Leave the local (send-side) pruning alone.
Again, we only implement this hack for push, so we know the sender is
local, and it will only fail hard, not retry.

* If the remote (receiver-side) pruner is in an error state, cancel it
through context cancellation.

* Otherwise, let it run.
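
As a small aside, the status limitation mentioned above is inherent to
plain context cancellation: the cancelled side only ever observes the
generic context.Canceled error, while the reason stays with the caller
(and in the logs). Minimal illustration:

    package main

    import (
        "context"
        "fmt"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        // The caller knows *why* it cancels (e.g. "replication stalled"),
        // but that reason is not attached to the context ...
        cancel()
        // ... so whoever inspects the context only sees the generic error.
        fmt.Println(ctx.Err()) // prints: context canceled
    }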

Note that every time we "let it run", we tolerate a temporary excess of
snapshots, but given sufficiently aggressive timeouts and the assumption
that the snapshot interval is much greater than the timeouts, this is
not a significant problem in practice.
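
Putting the rules above together, the decision logic could look roughly
like the following sketch. The State constants and the Report struct are
simplified stand-ins, not zrepl's actual replication/pruner types, and
cancel stands for the CancelFunc of the context that the current
replication/pruning round runs under:

    // Rough sketch only: State and Report are simplified stand-ins for
    // zrepl's actual replication and pruner state machinery.
    package sketch

    import "context"

    type State int

    const (
        Planning State = iota
        Working
        Error
        Done
    )

    type Report struct {
        Replication State
        LocalPrune  State
        RemotePrune State
    }

    // onNewSnapshotInterval is called when the snapper wants to take new
    // snapshots while the previous replication / pruning round may still
    // be running. cancel aborts that round via context cancellation.
    func onNewSnapshotInterval(r Report, cancel context.CancelFunc) {
        switch {
        case r.Replication == Error:
            // Stale or stuck replication: abort it so that the next round
            // can proceed. Status will only show "context canceled".
            cancel()
        case r.Replication != Done:
            // Replication is making progress (e.g. a long-running initial
            // replication): let it run and tolerate the temporary excess
            // of snapshots.
        case r.RemotePrune == Error:
            // Past replication; local (send-side) pruning is left alone
            // because it only fails hard and does not retry. A stuck
            // remote pruner, however, is cancelled.
            cancel()
        default:
            // Remote pruning is progressing: let it run.
        }
    }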
2018-10-12 22:47:06 +02:00
filters          - Implement periodic snapshotting. (2018-09-04 16:43:55 -07:00)
job              - daemon/job/active: push mode: awful hack for handling of concurrent snapshots + stale remote operation (2018-10-12 22:47:06 +02:00)
logging          - move serve and connecter into transports package (2018-10-11 21:21:46 +02:00)
nethelpers       - WIP rewrite the daemon (2018-08-27 22:22:44 +02:00)
pruner           - use enumer generate tool for state strings (2018-10-12 22:10:49 +02:00)
snapper          - snapshotting: support 'periodic' and 'manual' mode (2018-10-11 15:59:23 +02:00)
streamrpcconfig  - update to streamrpc 0.4 & adjust config (not breaking) (2018-09-23 20:28:30 +02:00)
transport        - implement transport protocol handshake (even before streamrpc handshake) (2018-10-11 21:21:46 +02:00)
control.go       - move wakeup subcommand into signal subcommand and add reset subcommand (2018-10-12 20:50:56 +02:00)
daemon.go        - move wakeup subcommand into signal subcommand and add reset subcommand (2018-10-12 20:50:56 +02:00)
main.go          - WIP rewrite the daemon (2018-08-27 22:22:44 +02:00)
pprof.go         - privatize pprofServer (2018-08-27 19:13:35 +02:00)
prometheus.go    - status: infra for reporting jobs instead of just replication.Report (2018-09-23 21:11:33 +02:00)