This commit
- adds a configuration in which no step holds, replication cursors, etc. are created
- removes the send.step_holds.disable_incremental setting
- creates a new config option `replication` for active-side jobs
- adds the replication.protection.{initial,incremental} settings, each
of which can have values
- `guarantee_resumability`
- `guarantee_incremental`
- `guarantee_nothing`
(refer to docs/configuration/replication.rst for semantics)
The `replication` config from an active side is sent to both endpoint.Sender and endpoint.Receiver
for each replication step. Sender and Receiver then act accordingly.
For `guarantee_incremental`, we add the new `tentative-replication-cursor` abstraction.
The necessity for that abstraction is outlined in https://github.com/zrepl/zrepl/issues/340.
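For illustration, an active-side job config could then look roughly like this (a sketch only; exact placement, job fields, and defaults are documented in docs/configuration/replication.rst):

```yaml
jobs:
  - name: prod_to_backups        # illustrative job
    type: push
    # connect, filesystems, snapshotting, pruning omitted
    replication:
      protection:
        initial: guarantee_resumability      # keep step holds so the initial step can always be resumed
        incremental: guarantee_incremental   # keep (tentative) replication cursors, but no step holds
```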
fixes https://github.com/zrepl/zrepl/issues/340
- drop HintMostRecentCommonAncestor rpc call
- it is wrong to put faith into the active side of the replication to always make that call
(we might not trust it, ref pull setup)
- clean up step holds + step bookmarks + replication cursor bookmarks on
send RPC instead
- this makes it symmetric with Receive RPC
- use a cache (endpoint.sendAbstractionsCache) to avoid the cost of
listing the on-disk endpoint abstractions state on every step
The "create" methods for endpoint abstractions (CreateReplicationCursor, HoldStep) are now fully
idempotent and return an Abstraction.
Notes about endpoint.sendAbstractionsCache:
- fills lazily from disk state on first `Get` operation
- fill from disk is generally only attempted once
- unless the `ListAbstractions` fails, in which case the fill from
disk is retried on next `Get` (the current `Get` will observe a
subset of the actual on-disk abstractions)
- the `Invalidate` method drops the cached state so that a later `Get` re-lists from disk
- it is a global (zrepl process-wide) cache
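A greatly simplified sketch of that caching pattern (illustrative only, not the actual endpoint.sendAbstractionsCache implementation; the real cache is keyed more finely and exposes more operations):

```go
package example

import "sync"

// Abstraction stands in for the endpoint abstraction type (step hold,
// step bookmark, replication cursor, ...).
type Abstraction interface{ Destroy() error }

// abstractionsCache is a process-wide cache of the sender's on-disk
// abstractions, filled lazily on first Get.
type abstractionsCache struct {
	mtx          sync.Mutex
	filled       bool
	abstractions []Abstraction
}

// Get fills the cache from disk on first use. If listing fails, the caller
// observes a (possibly partial) result and the fill is retried on the next Get.
func (c *abstractionsCache) Get(list func() ([]Abstraction, error)) []Abstraction {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	if c.filled {
		return c.abstractions
	}
	abs, err := list() // e.g. a ListAbstractions query against ZFS
	if err != nil {
		return abs // subset of on-disk state; filled stays false => retry next time
	}
	c.filled = true
	c.abstractions = abs
	return c.abstractions
}

// Invalidate drops the cached state so that the next Get re-lists from disk.
func (c *abstractionsCache) Invalidate() {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.filled = false
	c.abstractions = nil
}
```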
fixes #316
package trace:
- introduce the concept of tasks and spans, tracked as a linked list within ctx
- see package-level docs for an overview of the concepts
- **main feature 1**: unique stack of task and span IDs
- makes it easy to follow a series of log entries in concurrent code
- **main feature 2**: ability to produce a chrome://tracing-compatible trace file
- either via an env variable or a `zrepl pprof` subcommand
- this is not a CPU profile, we already have go pprof for that
- but it is very useful to visually inspect where the
replication / snapshotter / pruner spends its time
(fixes #307)
usage in package daemon/logging:
- goal: every log entry should have a trace field with the ID stack from package trace
- make `logging.GetLogger(ctx, Subsys)` the authoritative `logger.Logger` factory function
- the context carries a linked list of injected fields which
`logging.GetLogger` adds to the logger it returns
- `logging.GetLogger` also uses package `trace` to get the
task-and-span-stack and injects it into the returned logger's fields
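Putting the two together, intended usage looks roughly like the following sketch (import paths, the WithTask/WithSpan helpers, and the Subsys constant are written from memory and may not match the code exactly):

```go
package example

import (
	"context"

	"github.com/zrepl/zrepl/daemon/logging"
	"github.com/zrepl/zrepl/daemon/logging/trace"
)

func doReplication(ctx context.Context) {
	// open a task for this concurrent activity; its unique ID becomes the
	// bottom of the task-and-span stack carried in ctx
	ctx, endTask := trace.WithTask(ctx, "replication")
	defer endTask()

	// spans subdivide a task; nesting them produces the ID stack that shows
	// up in log entries and in the chrome://tracing output
	ctx, endSpan := trace.WithSpan(ctx, "planning")
	defer endSpan()

	// GetLogger is the authoritative logger factory: it adds the fields
	// injected into ctx plus a trace field with the task-and-span stack
	log := logging.GetLogger(ctx, logging.SubsysReplication)
	log.Debug("start planning")
}
```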
The motivation for this refactoring is based on two independent issues:
- @JMoVS found that the changes merged as part of #259 slowed his OS X
based installation down significantly.
Analysis of the zfs command logging introduced in #296 showed that
`zfs holds` took most of the execution time, and they pointed out
that not all of those `zfs holds` invocations were actually necessary.
I.e.: zrepl was inefficient about retrieving information from ZFS.
- @InsanePrawn found that failures on initial replication would lead
to step holds accumulating on the sending side, i.e. they would never
be cleaned up in the HintMostRecentCommonAncestor RPC handler.
That was because we only sent that RPC if there was a most recent
common ancestor detected during replication planning.
@InsanePrawn prototyped an implementation of a `zrepl zfs-abstractions release`
command to mitigate the situation.
As part of that development work and back-and-forth with @problame,
it became evident that the abstractions that #259 built on top of
zfs in package endpoint (step holds, replication cursor,
last-received-hold), were not well-suited for re-use in the
`zrepl zfs-abstractions release` subcommand prototype.
This commit refactors package endpoint to address both of these issues:
- endpoint abstractions now share an interface `Abstraction` that, among
other things, provides a uniform `Destroy()` method.
However, that method should not be called directly; instead, the
package-level `BatchDestroy` function should be used, in order
to allow for a migration to ZFS channel programs in the future
(see the interface sketch after this list).
- endpoint now has a query facility (`ListAbstractions`) which is
used to find on-disk
- step holds and bookmarks
- replication cursors (v1, v2)
- last-received-holds
By describing the query in a struct, we can centralize the retrieval
of information via the ZFS CLI and only have to be clever once.
We are "clever" in the following ways:
- When asking for hold-based abstractions, we only run `zfs holds` on
snapshots that have `userrefs` > 0
- To support this functionality, add field `UserRefs` to zfs.FilesystemVersion
and retrieve it anywhere we retrieve zfs.FilesystemVersion from ZFS.
- When asking only for bookmark-based abstractions, we only run
`zfs list -t bookmark`, without also listing snapshots.
- Currently unused (except for CLI) per-filesystem concurrent lookup
- Option to only include abstractions with CreateTXG in a specified range
- refactor `endpoint`'s various ZFS info retrieval methods to use
`ListAbstractions`
- rename the `zrepl holds list` command to `zrepl zfs-abstractions list`
- make `zrepl zfs-abstractions list` consume endpoint.ListAbstractions
- Add a `ListStale` method which, given a query template,
lists stale holds and bookmarks.
- it uses the replication cursor to determine staleness and supports different modes
- the new `zrepl zfs-abstractions release-{all,stale}` commands can be used
to remove abstractions of package endpoint
- Adjust HintMostRecentCommonAncestor RPC for stale-holds cleanup:
- send it also if no most recent common ancestor exists between sender and receiver
- have the sender clean up its abstractions when it receives the RPC
with no most recent common ancestor, using `ListStale`
- Due to changed semantics, bump the protocol version.
- Adjust HintMostRecentCommonAncestor RPC for performance problems
encountered by @JMoVS
- by default, per (job,fs)-combination, only consider cleaning
step holds in the createtxg range
`[last replication cursor,conservatively-estimated-receive-side-version)`
- this behavior ensures resumability at cost proportional to the
time that replication was down
- however, as explained in a comment, we might leak holds if
the zrepl daemon stops running
- that trade-off is acceptable because, in the presumably rare case
that this happens, the user has two tools at hand:
- Tool 1: run `zrepl zfs-abstractions release-stale`
- Tool 2: use env var `ZREPL_ENDPOINT_SENDER_HINT_MOST_RECENT_STEP_HOLD_CLEANUP_MODE`
to adjust the lower bound of the createtxg range (search for it in the code).
The env var can also be used to disable hold-cleanup on the
send-side entirely.
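To make the shape of these abstractions concrete, here is a rough sketch of the interface and query described above (names abbreviated/illustrative; the real definitions live in package endpoint):

```go
package example

// Abstraction is the interface shared by the ZFS-level abstractions that
// package endpoint manages: step holds, step bookmarks, replication
// cursors (v1, v2), and last-received-holds. Method set abbreviated.
type Abstraction interface {
	// ... accessors for type, filesystem, version, and job id ...
	Destroy() error // not to be called directly; use BatchDestroy
}

// BatchDestroy destroys abstractions in one place so that the implementation
// can later be migrated to ZFS channel programs without touching callers.
func BatchDestroy(abs []Abstraction) error {
	for _, a := range abs { // sequential loop for illustration only
		if err := a.Destroy(); err != nil {
			return err
		}
	}
	return nil
}

// ListAbstractionsQuery describes what to look for; field names illustrative.
type ListAbstractionsQuery struct {
	FS           string   // filesystem filter
	JobID        string   // only abstractions owned by this job
	What         []string // which abstraction types to include
	CreateTXGMin uint64   // optional createtxg range restriction
	CreateTXGMax uint64
}
```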
supersedes closes #293
supersedes closes #282
fixes #280
fixes #278
Additionally, we fixed a couple of bugs:
- zfs: fix half-nil error reporting of dataset-does-not-exist for ZFSListChan and ZFSBookmark
- endpoint: Sender's `HintMostRecentCommonAncestor` handler would not
check whether access to the specified filesystem was allowed.
- **Resumable Send & Recv Support**
No knobs required, automatically used where supported.
- **Hold-Protected Send & Recv**
Automatic ZFS holds to ensure that we can always resume a replication step.
- **Encrypted Send & Recv Support** for OpenZFS native encryption.
Configurable at the job level, i.e., for all filesystems a job is responsible for.
- **Receive-side hold on last received dataset**
The counterpart to the replication cursor bookmark on the send-side.
Ensures that incremental replication will always be possible between a sender and receiver.
Design Doc
----------
The `replication/design.md` document describes how we use ZFS holds and bookmarks to ensure that a single replication step is always resumable.
The replication algorithm described in the design doc introduces the notion of job IDs (please refer to the design doc for details).
We reuse job names as job IDs and introduce a `JobID` type to ensure that a job name can be embedded into hold tags, bookmark names, etc.
This might BREAK CONFIG on upgrade.
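The JobID idea, as a sketch (illustrative; the real type additionally runs the entity_namecheck-style validation mentioned under package zfs below):

```go
package example

import "fmt"

// JobID wraps a job name so that it can only be obtained through validation,
// which makes it safe to embed into hold tags and bookmark names.
type JobID struct{ name string }

func MakeJobID(name string) (JobID, error) {
	if name == "" {
		return JobID{}, fmt.Errorf("job id must not be empty")
	}
	// the real constructor also rejects names that would be illegal inside
	// ZFS bookmark / hold identifiers (hence "might BREAK CONFIG on upgrade")
	return JobID{name}, nil
}

func (j JobID) String() string { return j.name }
```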
Protocol Version Bump
---------------------
This commit makes backwards-incompatible changes to the replication/pdu protobufs.
Thus, bump the version number used in the protocol handshake.
Replication Cursor Format Change
--------------------------------
The new replication cursor bookmark format is: `#zrepl_CURSOR_G_${this.GUID}_J_${jobid}`
Including the GUID enables transaction-safe moving-forward of the cursor.
Including the job id enables that multiple sending jobs can send the same filesystem without interfering.
The `zrepl migrate replication-cursor:v1-v2` subcommand can be used to safely destroy old-format cursors once zrepl has created new-format cursors.
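For illustration, rendering the new cursor name for a given snapshot GUID and job ID might look like this (the exact GUID encoding is an implementation detail assumed here):

```go
package example

import "fmt"

// replicationCursorBookmarkNameV2 renders #zrepl_CURSOR_G_${guid}_J_${jobid}
// for the given filesystem, snapshot guid, and job id (hex width assumed).
func replicationCursorBookmarkNameV2(fs string, guid uint64, jobID string) string {
	return fmt.Sprintf("%s#zrepl_CURSOR_G_%016x_J_%s", fs, guid, jobID)
}
```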
Changes in This Commit
----------------------
- package zfs
- infrastructure for holds
- infrastructure for resume token decoding
- implement a variant of OpenZFS's `entity_namecheck` and use it for validation in new code
- ZFSSendArgs to specify a ZFS send operation
- validation code protects against malicious resume tokens by checking that the token encodes the same send parameters that the send-side would use if no resume token were available (i.e. same filesystem, `fromguid`, `toguid`); see the sketch after this list
- RecvOptions support for `recv -s` flag
- convert a bunch of ZFS operations to be idempotent
- achieved through more differentiated error message scraping / additional pre-/post-checks
- package replication/pdu
- add field for encryption to send request messages
- add fields for resume handling to send & recv request messages
- receive requests now contain `FilesystemVersion To` in addition to the filesystem into which the stream should be `recv`d
- can use `zfs recv $root_fs/$client_id/path/to/dataset@${To.Name}`, which enables additional validation after recv (i.e. whether `To.Guid` matched what we received in the stream)
- used to set `last-received-hold`
- package replication/logic
- introduce `PlannerPolicy` struct, currently only used to configure whether encrypted sends should be requested from the sender
- integrate encryption and resume token support into `Step` struct
- package endpoint
- move the concepts that endpoint builds on top of ZFS to a single file `endpoint/endpoint_zfs.go`
- step-holds + step-bookmarks
- last-received-hold
- new replication cursor + old replication cursor compat code
- adjust `endpoint/endpoint.go` handlers for
- encryption
- resumability
- new replication cursor
- last-received-hold
- client subcommand `zrepl holds list`: list all holds and hold-like bookmarks that zrepl thinks belong to it
- client subcommand `zrepl migrate replication-cursor:v1-v2`
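The resume-token validation mentioned in the `ZFSSendArgs` bullet above, sketched with illustrative names (the real check lives in the ZFSSendArgs validation code):

```go
package example

import (
	"fmt"
	"strings"
)

// resumeTokenFields stands in for the parameters decoded from a resume token.
type resumeTokenFields struct {
	FromGUID, ToGUID uint64
	ToName           string // dataset@snapshot path encoded in the token
}

// validateResumeToken checks that a client-supplied resume token encodes the
// same send the sender would perform without it; otherwise a malicious token
// could trick the sender into streaming a different filesystem or range.
func validateResumeToken(tok resumeTokenFields, fs string, fromGUID, toGUID uint64) error {
	if tok.FromGUID != fromGUID || tok.ToGUID != toGUID {
		return fmt.Errorf("resume token guids do not match requested step")
	}
	// the token's target must be a snapshot of the requested filesystem
	// (prefix check is a simplification of the real dataset-path comparison)
	if !strings.HasPrefix(tok.ToName, fs+"@") {
		return fmt.Errorf("resume token dataset does not match requested filesystem")
	}
	return nil
}
```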
There's plenty of room for improvement here.
For example, detect if we're past the last step without size estimation
and compute the remaining sum of bytes to be replicated from there on.
ATM, the replication logic sends all dry-run requests in parallel,
which might overwhelm the ZFS pool on the sending side.
Since we use rpc/dataconn for dry sends, this also opens one TCP
connection per dry-run request.
Use a semaphore to limit the degree of concurrency where we know it is a
problem ATM.
As indicated by the comments, the cleaner solution would involve some
kind of 'resource exhaustion' error code.
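A sketch of that limiting pattern, using golang.org/x/sync/semaphore for brevity (the actual code may use its own semaphore helper and a different limit):

```go
package example

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// Limit concurrent dry-run sends so that size estimation neither overwhelms
// the sending-side pool nor opens too many dataconn TCP connections at once.
var dryRunSem = semaphore.NewWeighted(4) // illustrative limit

func estimateStepSize(ctx context.Context, drySend func(context.Context) (int64, error)) (int64, error) {
	if err := dryRunSem.Acquire(ctx, 1); err != nil {
		return 0, err
	}
	defer dryRunSem.Release(1)
	return drySend(ctx)
}
```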
refs #161
refs #164
We assumed that `zfs recv -F FS` would basically replace FS inplace, leaving its children untouched.
That is in fact not the case: it only works if the stream was generated with `zfs send -R`, which we don't do.
Thus, implement the required functionality manually.
This solves a `zfs recv` error that would occur when a filesystem previously created as placeholder on the receiving side becomes a non-placeholder filesystem (likely due to config change on the sending side):
zfs send pool1/foo@1 | zfs recv -F pool1/bar
cannot receive new filesystem stream:
destination has snapshots (eg. pool1/bar)
must destroy them to overwrite it
* use pseudo-dependencies in build/build.go to convince dep
* update Travis, Dockerfile and Docs
* build.Dockerfile image now contains the Go build dependencies
* => faster builds
* bump pdu file after protoc update
fixes #106
* Remove explicit state machine code for all but replication.Replication
* Introduce explicit error types that satisfy interfaces which provide
sufficient information for replication.Replication to make intelligent
retry + queuing decisions
* Temporary()
* LocalToFS()
* Remove the queue and replace it with a simple array that we sort each
time (yay no generics :( )
An fsrep.Replication is either Ready, Retry or in a terminal state.
The queue prefers Ready over Retry:
Ready is sorted by nextStepDate to progress evenly.
Retry is sorted by error count, to de-prioritize filesystems that fail
often. This way we don't get stuck with individual filesystems
and lose other working filesystems to the watchdog.
fsrep.Replication no longer blocks in Retry state, we have
replication.WorkingWait for that.
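The queue ordering described above, as a small sketch (type and field names are illustrative):

```go
package example

import (
	"sort"
	"time"
)

type fsState int

const (
	stateReady fsState = iota
	stateRetry
)

type fsRepl struct {
	state        fsState
	nextStepDate time.Time // Ready: earlier next steps first, to progress evenly
	errorCount   int       // Retry: fewer failures first, de-prioritizing flaky filesystems
}

// sortQueue sorts the flat "queue": Ready before Retry, Ready by
// nextStepDate, Retry by error count.
func sortQueue(q []*fsRepl) {
	sort.SliceStable(q, func(i, j int) bool {
		a, b := q[i], q[j]
		if a.state != b.state {
			return a.state == stateReady
		}
		if a.state == stateReady {
			return a.nextStepDate.Before(b.nextStepDate)
		}
		return a.errorCount < b.errorCount
	})
}
```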
ActiveSide.do() can only run sequentially, i.e. we cannot run
replication and pruning in parallel. Why?
* go-streamrpc only allows one active request at a time
(this is bad design and should be fixed at some point)
* replication and pruning are implemented independently, but work on the
same resources (snapshots)
A: pruning might destroy a snapshot that is planned to be replicated
B: replication might replicate snapshots that should be pruned
We do not have any resource management / locking for A and B, but we
have a use case where users don't want their machine to fill up with
snapshots if replication does not work.
That means we _have_ to run the pruners.
A further complication is that we cannot just cancel the replication
context after a timeout and move on to the pruner: it could be initial
replication and we don't know how long it will take.
(And we don't have resumable send & recv yet).
With the previous commits, we can implement the watchdog using context
cancellation.
Note that the 'MadeProgress()' calls can only be placed right before
non-error state transitions. Otherwise, we could end up in a live-lock.
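A minimal sketch of such a watchdog (illustrative, not the actual implementation): the run's context is canceled if no MadeProgress() call happened within one watchdog period.

```go
package example

import (
	"context"
	"sync/atomic"
	"time"
)

type watchdog struct{ progress int64 }

// MadeProgress is called right before non-error state transitions.
func (w *watchdog) MadeProgress() { atomic.AddInt64(&w.progress, 1) }

// watch cancels ctx (via cancel) if no progress was observed during one
// full period, so that the pruners eventually get to run.
func (w *watchdog) watch(ctx context.Context, cancel context.CancelFunc, period time.Duration) {
	t := time.NewTicker(period)
	defer t.Stop()
	last := atomic.LoadInt64(&w.progress)
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			cur := atomic.LoadInt64(&w.progress)
			if cur == last {
				cancel()
				return
			}
			last = cur
		}
	}
}
```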
- handle wakeups in Planning state
- fsrep.Replication yields immediately in RetryWait
- once the queue only contains fsrep.Replication instances in RetryWait:
transition replication.Replication into WorkingWait state
- handle wakeups in WorkingWait state, too