Overview & Terminology
======================
All work zrepl does is performed by the zrepl daemon which is configured in a single YAML configuration file loaded on startup.
The following paths are considered:

* If set, the location specified via the global ``--config`` flag
* ``/etc/zrepl/zrepl.yml``
* ``/usr/local/etc/zrepl/zrepl.yml``

The ``zrepl configcheck`` subcommand can be used to validate the configuration.
The command will output nothing and exit with zero status code if the configuration is valid.
The error messages vary in quality and usefulness: please report confusing config errors to the tracking :issue:`155`.
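
For example, to validate a config file at a non-default location (a usage sketch; adjust the path to your setup):

.. code-block:: bash

   # prints nothing and exits with status 0 if the configuration is valid
   zrepl configcheck --config /usr/local/etc/zrepl/zrepl.yml
   echo $?
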
Full example configs such as in the :ref:`quick-start guides <quickstart-toc>` or the :sampleconf:`/` directory might also be helpful.
However, copy-pasting examples is no substitute for reading documentation!
Config File Structure
---------------------
.. code-block:: yaml

   global: ...
   jobs:
   - name: backup
     type: push
   - ...

zrepl is configured using a single YAML configuration file with two main sections: ``global`` and ``jobs``.
The ``global`` section is filled with sensible defaults and is covered later in this chapter.
The ``jobs`` section is a list of jobs which we are going to explain now.
.. _job-overview:
Jobs \& How They Work Together
------------------------------
A *job* is the unit of activity tracked by the zrepl daemon.
The ``type`` of a job determines its role in a replication setup and in snapshot management.
Jobs are identified by their ``name``, both in log files and the ``zrepl status`` command.
.. NOTE::

   The job name is persisted in several places on disk and thus :issue:`cannot be changed easily<327>`.

Replication always happens between a pair of jobs: one is the **active side**, and one the **passive side**.
The active side connects to the passive side using a :ref:`transport <transport>` and starts executing the replication logic.
The passive side responds to requests from the active side after checking its permissions.
The following table shows how different job types can be combined to achieve **both push and pull mode setups**.
Note that snapshot-creation denoted by "(snap)" is orthogonal to whether a job is active or passive.

+--------------------+---------------+--------------------------------------------------+------------------------------------------------------------------------------------------+
| Setup name         | active side   | passive side                                     | use case                                                                                  |
+====================+===============+==================================================+==========================================================================================+
| Push mode          | ``push``      | ``sink``                                         | * Laptop backup                                                                           |
|                    | (snap)        |                                                  | * NAS behind NAT to offsite                                                               |
+--------------------+---------------+--------------------------------------------------+------------------------------------------------------------------------------------------+
| Pull mode          | ``pull``      | ``source``                                       | * Central backup-server for many nodes                                                    |
|                    |               | (snap)                                           | * Remote server to NAS behind NAT                                                         |
+--------------------+---------------+--------------------------------------------------+------------------------------------------------------------------------------------------+
| Local replication  |               | ``push`` + ``sink`` in one config                | * Backup to :ref:`locally attached disk <quickstart-backup-to-external-disk>`             |
|                    |               | with :ref:`local transport <transport-local>`    | * Backup FreeBSD boot pool                                                                |
+--------------------+---------------+--------------------------------------------------+------------------------------------------------------------------------------------------+
| Snap & prune-only  | ``snap``      | N/A                                              | * Snapshots & pruning, but no replication required                                        |
|                    | (snap)        |                                                  | * Workaround for :ref:`source-side pruning <prune-workaround-source-side-pruning>`        |
+--------------------+---------------+--------------------------------------------------+------------------------------------------------------------------------------------------+

How the Active Side Works
-------------------------
The active side (:ref:`push <job-push>` and :ref:`pull <job-pull>` job) executes the replication and pruning logic:

* Wake up because snapshotting has finished (``push`` job) or because the pull interval ticker has fired (``pull`` job).
* Connect to the corresponding passive side using a :ref:`transport <transport>` and instantiate an RPC client.
* Replicate data from the sending to the receiving side (see below).
* Prune on sender & receiver.

.. TIP::

   The progress of the active side can be watched live using the ``zrepl status`` subcommand.
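
For example (a usage sketch; ``backup`` is the job name from your configuration):

.. code-block:: bash

   # live view of replication, snapshotting and pruning progress
   zrepl status

   # trigger an active job outside of its regular schedule
   zrepl signal wakeup backup
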
.. _overview-passive-side--client-identity:
How the Passive Side Works
--------------------------
The passive side (:ref:`sink <job-sink>` and :ref:`source <job-source>`) waits for connections from the corresponding active side,
using the transport listener type specified in the ``serve`` field of the job configuration.
When a client connects, the transport listener performs listener-specific access control (cert validation, IP ACLs, etc.)
and determines the *client identity*.
The passive side job then uses this client identity as follows:

* The ``sink`` job maps requests from different client identities to their respective sub-filesystem tree ``root_fs/${client_identity}``.
* The ``source`` job might, in the future, embed the client identity in :ref:`zrepl's ZFS abstraction names <zrepl-zfs-abstractions>` in order to support multi-host replication.

.. TIP::

   The implementation of the ``sink`` job requires that the connecting client identities be valid ZFS filesystem name components.
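
For illustration, assume a ``sink`` job with ``root_fs: storage/zrepl/sink`` and a client that connects with identity ``prod1`` (both names are hypothetical).
Received filesystems then end up below ``storage/zrepl/sink/prod1``:

.. code-block:: bash

   zfs list -r -o name storage/zrepl/sink
   # NAME
   # storage/zrepl/sink
   # storage/zrepl/sink/prod1
   # storage/zrepl/sink/prod1/var
   # storage/zrepl/sink/prod1/var/db
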
.. _overview-how-replication-works:
How Replication Works
---------------------
One of the major design goals of the replication module is to avoid any duplication of the nontrivial logic.
As such, the code works on abstract sender and receiver **endpoints**, where typically one is implemented by a local program object and the other is an RPC client instance.
Regardless of push- or pull-style setup, the logic executes on the active side, i.e. in the ``push`` or ``pull`` job.
The following high-level steps take place during replication and can be monitored using the ``zrepl status`` subcommand:

* Plan the replication:

  * Compare sender and receiver filesystem snapshots
  * Build the **replication plan**

    * Per filesystem, compute a diff between sender and receiver snapshots
    * Build a list of **replication steps**

      * If possible, use incremental and resumable sends
      * Otherwise, use a full send of the most recent snapshot on the sender

  * Retry on errors that are likely temporary (e.g., network failures).
  * Give up on filesystems where a permanent error was received over RPC.

* Execute the plan

  * Perform replication steps in the following order:
    Among all filesystems with pending replication steps, pick the filesystem whose next replication step's snapshot is the oldest.
  * Create placeholder filesystems on the receiving side to mirror the dataset paths on the sender to ``root_fs/${client_identity}``.
  * Acquire send-side *step-holds* on the step's ``from`` and ``to`` snapshots.
  * Perform the replication step.
  * Move the **replication cursor** bookmark on the sending side (see below).
  * Move the **last-received-hold** on the receiving side (see below).
  * Release the send-side step-holds.

The idea behind the execution order of replication steps is that if the sender snapshots all filesystems simultaneously at fixed intervals, the receiver will have all filesystems snapshotted at time ``T1`` before the first snapshot at ``T2 = T1 + $interval`` is replicated.
ZFS Background Knowledge
^^^^^^^^^^^^^^^^^^^^^^^^
This section gives some background knowledge about ZFS features that zrepl uses to provide guarantees for a replication filesystem.
Specifically, zrepl guarantees by default that **incremental replication is always possible and that started replication steps can always be resumed if they are interrupted.**
**ZFS Send Modes & Bookmarks**
ZFS supports full sends (``zfs send fs@to``) and incremental sends (``zfs send -i @from fs@to``).
Full sends are used to create a new filesystem on the receiver with the send-side state of ``fs@to``.
Incremental sends only transfer the delta between ``@from`` and ``@to``.
Incremental sends require that ``@from`` be present on the receiving side when receiving the incremental stream.
Incremental sends can also use a ZFS bookmark as *from* on the sending side (``zfs send -i #bm_from fs@to``), where ``#bm_from`` was created using ``zfs bookmark fs@from fs#bm_from``.
The receiving side must always have the actual snapshot ``@from``, regardless of whether the sending side uses ``@from`` or a bookmark of it.
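
The following sketch shows the bookmark-based workflow with hypothetical dataset names (``pool/fs`` on the sender, ``backup/fs`` on the receiver):

.. code-block:: bash

   # create a bookmark of @from, after which the snapshot may be destroyed on the sender
   zfs bookmark pool/fs@from pool/fs#bm_from
   zfs destroy pool/fs@from

   # the bookmark still serves as the incremental source;
   # the receiving side must still have the actual snapshot @from
   zfs send -i '#bm_from' pool/fs@to | zfs recv backup/fs
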
.. _zfs-background-knowledge-plain-vs-raw-sends:
**Plain and raw sends**
By default, ``zfs send`` sends the most generic, backwards-compatible data stream format (so-called 'plain send').
If the sending filesystem uses newer features, e.g. compression or encryption, ``zfs send`` has to undo these operations on the fly to produce the plain send stream.
If the receiver uses newer features (e.g. compression or encryption inherited from the parent FS), it applies the necessary transformations again on the fly during ``zfs recv``.
Flags such as ``-e``, ``-c`` and ``-L`` tell ZFS to produce a send stream that is closer to how the data is stored on disk.
Sending with those flags removes computational overhead from sender and receiver.
However, the receiver will not apply certain transformations, e.g., it will not compress with the receive-side ``compression`` algorithm.
The ``-w`` (``--raw``) flag produces a send stream that is as *raw* as possible.
For unencrypted datasets, its current effect is the same as ``-Lce``.
Encrypted datasets can only be sent plain (unencrypted) or raw (encrypted) using the ``-w`` flag.
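
A rough comparison of the flags, with hypothetical dataset names (the streams are written to files here purely for illustration):

.. code-block:: bash

   # plain send: most compatible stream; compression/encryption is undone on the fly
   zfs send pool/fs@to > /tmp/plain.zstream

   # keep large blocks (-L), compressed blocks (-c) and embedded blocks (-e) as stored on disk
   zfs send -L -c -e pool/fs@to > /tmp/lce.zstream

   # raw send: as close to the on-disk format as possible;
   # required to send an encrypted dataset without decrypting it
   zfs send -w pool/fs@to > /tmp/raw.zstream
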
**Resumable Send & Recv**
The ``-s`` flag for ``zfs recv`` tells zfs to save the partially received send stream in case it is interrupted.
To resume the replication, the receiving side filesystem's ``receive_resume_token`` must be passed to a new ``zfs send -t <value> | zfs recv`` command.
A full send can only be resumed if ``@to`` still exists.
An incremental send can only be resumed if ``@to`` still exists *and* either ``@from`` still exists *or* a bookmark ``#fbm`` of ``@from`` still exists.
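
A resumable transfer could look like this (hypothetical dataset names; ``<token>`` stands for the opaque value of the ``receive_resume_token`` property):

.. code-block:: bash

   # receive with -s so that an interrupted stream leaves a resumable state behind
   zfs send -i '#bm_from' pool/fs@to | zfs recv -s backup/fs

   # after an interruption, read the resume token on the receiving side ...
   zfs get -H -o value receive_resume_token backup/fs

   # ... and use it to resume the send
   zfs send -t <token> | zfs recv -s backup/fs
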
**ZFS Holds**
ZFS holds prevent a snapshot from being deleted through ``zfs destroy``, letting the destroy fail with a ``dataset is busy`` error.
Holds are created and referred to by a *tag*. They can be thought of as a named, persistent lock on the snapshot.
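
For example (hypothetical snapshot name and tag):

.. code-block:: bash

   # place a hold with tag "mytag"; zfs destroy now fails with "dataset is busy"
   zfs hold mytag pool/fs@snap

   # list holds on the snapshot
   zfs holds pool/fs@snap

   # release the hold again so the snapshot can be destroyed
   zfs release mytag pool/fs@snap
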
.. _zrepl-zfs-abstractions:
ZFS Abstractions Managed By zrepl
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
With the background knowledge from the previous paragraph, we now summarize the different on-disk ZFS objects that zrepl manages to provide its functionality.
.. _replication-placeholder-property:
**Placeholder filesystems** on the receiving side are regular ZFS filesystems with the ZFS property ``zrepl:placeholder=on``.
Placeholders allow the receiving side to mirror the sender's ZFS dataset hierarchy without replicating every filesystem at every intermediary dataset path component.
Consider the following example: ``S/H/J`` shall be replicated to ``R/sink/job/S/H/J``, but neither ``S/H`` nor ``S`` shall be replicated.
ZFS requires the existence of ``R/sink/job/S`` and ``R/sink/job/S/H`` in order to receive into ``R/sink/job/S/H/J``.
Thus, zrepl creates the parent filesystems as placeholders on the receiving side.
If at some point ``S/H`` and ``S`` shall be replicated, the receiving side invalidates the placeholder flag automatically.
The ``zrepl test placeholder`` command can be used to check whether a filesystem is a placeholder.
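
Continuing the example above (hypothetical dataset names), the placeholder state can be inspected as follows:

.. code-block:: bash

   # placeholders are regular filesystems with the zrepl:placeholder property set
   zfs get -r -o name,value zrepl:placeholder R/sink/job

   # zrepl's built-in check for the placeholder state
   zrepl test placeholder
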
.. _replication-cursor-and-last-received-hold:
The **replication cursor** bookmark and **last-received-hold** are managed by zrepl to ensure that future replications can always be done incrementally.
The replication cursor is a send-side bookmark of the most recent successfully replicated snapshot,
and the last-received-hold is a hold of that snapshot on the receiving side.
Both are moved atomically after the receiving side has confirmed that a replication step is complete.
The replication cursor has the format ``#zrepl_CURSOR_G_<GUID>_J_<JOBNAME>``.
The last-received-hold tag has the format ``zrepl_last_received_J_<JOBNAME>``.
Encoding the job name in the names ensures that multiple sending jobs can replicate the same filesystem to different receivers without interference.
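
With a hypothetical sending filesystem ``pool/fs``, job name ``backup`` and receive-side counterpart ``backup/sink/client1/fs``, the two abstractions can be inspected at the ZFS level like this (names and GUID are illustrative):

.. code-block:: bash

   # sending side: the replication cursor is a bookmark of the last successfully replicated snapshot
   zfs list -t bookmark -o name pool/fs
   # pool/fs#zrepl_CURSOR_G_<GUID>_J_backup

   # receiving side: the last received snapshot carries the job-specific hold
   zfs holds backup/sink/client1/fs@<last_received_snapshot>
   # ... zrepl_last_received_J_backup ...
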
.. _tentative-replication-cursor-bookmarks:
**Tentative replication cursor bookmarks** are short-lived bookmarks that protect the atomic moving-forward of the replication cursor and last-received-hold (see :issue:`this issue <340>`).
They are only necessary if step holds are not used as per the :ref:`replication.protection <replication-option-protection>` setting.
The tentative replication cursor has the format ``#zrepl_CURSORTENTATIVE_G_<GUID>_J_<JOBNAME>``.
.. _step-holds:
**Step holds** are zfs holds managed by zrepl to ensure that a replication step can always be resumed if it is interrupted, e.g., due to network outage.
zrepl creates step holds before it attempts a replication step and releases them after the receiver confirms that the replication step is complete.
For an initial replication ``full @initial_snap``, zrepl puts a zfs hold on ``@initial_snap``.
For an incremental send ``@from -> @to``, zrepl puts a zfs hold on both ``@from`` and ``@to``.
Note that ``@from`` is not strictly necessary for resumability (a bookmark on the sending side would be sufficient), but size estimation in currently used OpenZFS versions only works if ``@from`` is a snapshot.
The hold tag has the format ``zrepl_STEP_J_<JOBNAME>``.
A job only ever has one active send per filesystem.
Thus, there are never more than two step holds for a given pair of ``(job,filesystem)``.
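
For an incremental step from ``@from`` to ``@to`` on a hypothetical filesystem ``pool/fs`` replicated by a job named ``backup``, the step holds are visible while the step executes:

.. code-block:: bash

   zfs holds pool/fs@from pool/fs@to
   # NAME           TAG                  TIMESTAMP
   # pool/fs@from   zrepl_STEP_J_backup  ...
   # pool/fs@to     zrepl_STEP_J_backup  ...
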
**Step bookmarks** are zrepl's equivalent for holds on bookmarks (ZFS does not support putting holds on bookmarks).
They are intended for a situation where a replication step uses a bookmark ``#bm`` as incremental ``from`` where ``#bm`` is not managed by zrepl.
To ensure resumability, zrepl copies ``#bm`` to step bookmark ``#zrepl_STEP_G_<GUID>_J_<JOBNAME>``.
If the replication is interrupted and ``#bm`` is deleted by the user, the step bookmark remains as an incremental source for the resumable send.
Note that zrepl does not yet support creating step bookmarks because the `corresponding ZFS feature for copying bookmarks <https://github.com/openzfs/zfs/pull/9571>`_ is not yet widely available.
Subscribe to zrepl :issue:`326` for details.
The ``zrepl zfs-abstraction list`` command provides a listing of all bookmarks and holds managed by zrepl.
.. NOTE::

   More details can be found in the design document :repomasterlink:`replication/design.md`.

Limitations
^^^^^^^^^^^
.. ATTENTION::

   Currently, zrepl does not replicate filesystem properties.
   When receiving a filesystem, it is never mounted (``-u`` flag) and ``mountpoint=none`` is set.
   This is temporary and is being worked on in :issue:`24`.

.. _jobs-multiple-jobs:
Multiple Jobs & More than 2 Machines
------------------------------------
The quick-start guides focus on simple setups with a single sender and a single receiver.
This section documents considerations for more complex setups.
.. ATTENTION::

   Before you continue, make sure you have a working understanding of :ref:`how zrepl works <overview-how-replication-works>`
   and :ref:`what zrepl does to ensure <zrepl-zfs-abstractions>` that replication between sender and receiver is always
   possible without conflicts.
   This will help you understand why certain kinds of multi-machine setups do not (yet) work.

.. NOTE::

   If you can't find your desired configuration, have questions or would like to see improvements to multi-job setups, please `open an issue on GitHub <https://github.com/zrepl/zrepl/issues/new>`_.

Multiple Jobs on one Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As a general rule, multiple jobs configured on one machine **must operate on disjoint sets of filesystems**.
Otherwise, concurrently running jobs might interfere when operating on the same filesystem.
On your setup, ensure that

* all ``filesystems`` filter specifications are disjoint
* no ``root_fs`` is a prefix or equal to another ``root_fs``
* no ``filesystems`` filter matches any ``root_fs``

**Exceptions to the rule**:

* A ``snap`` and ``push`` job on the same machine can match the same ``filesystems``.
  To avoid interference, only one of the jobs should prune snapshots on the sender; the other one should keep all snapshots.
  Since the jobs do not coordinate, errors in the log are to be expected, but :ref:`zrepl's ZFS abstractions <zrepl-zfs-abstractions>` ensure that ``push`` and ``sink`` can always replicate incrementally.
  This scenario is detailed in one of the :ref:`quick-start guides <quickstart-backup-to-external-disk>`.
More Than 2 Machines
^^^^^^^^^^^^^^^^^^^^
This section might be relevant to users who wish to *fan-in* (N machines replicate to 1) or *fan-out* (replicate 1 machine to N machines).
**Working setups**:

* N ``push`` identities, 1 ``sink`` (as long as the different push jobs have a different :ref:`client identity <overview-passive-side--client-identity>`)

  * ``sink`` constrains each client to a disjoint sub-tree of the sink-side dataset hierarchy ``${root_fs}/${client_identity}``.
    Therefore, the different clients cannot interfere.

**Setups that do not work**:

* N ``pull`` identities, 1 ``source`` job. Tracking :issue:`380`.