.. _prune:

Pruning Policies
================

In zrepl, *pruning* means *destroying snapshots*.
Pruning must happen on both sides of a replication setup, or the systems would inevitably run out of disk space at some point.

Typically, the requirements for temporal resolution and maximum retention time differ per side.
For example, when using zrepl to back up a busy database server, you will want high temporal resolution (snapshots every 10 min) for the last 24h in case of administrative disasters, but cannot afford to store them for much longer because you might have high turnover volume in the database.
On the receiving side, you may have more disk space available, or need to comply with other backup retention policies.

zrepl uses a set of **keep rules** per sending and receiving side to determine which snapshots shall be kept per filesystem.
**A snapshot that is not kept by any rule is destroyed.**
The keep rules are **evaluated on the active side** (:ref:`push <job-push>` or :ref:`pull job <job-pull>`) of the replication setup, for both active and passive side, after replication completed or was determined to have failed permanently.

Example Configuration:

::

   jobs:
   - type: push
     name: ...
     connect: ...
     filesystems: {
       "<": true,
       "tmp": false
     }
     snapshotting:
       type: periodic
       prefix: zrepl_
       interval: 10m
     pruning:
       keep_sender:
       - type: not_replicated
       # make sure manually created snapshots by the administrator are kept
       - type: regex
         regex: "^manual_.*"
       - type: grid
         grid: 1x1h(keep=all) | 24x1h | 14x1d
         regex: "^zrepl_.*"
       keep_receiver:
       - type: grid
         grid: 1x1h(keep=all) | 24x1h | 35x1d | 6x30d
         regex: "^zrepl_.*"
       # manually created snapshots will be kept forever on receiver
       - type: regex
         regex: "^manual_.*"

.. DANGER::

   You might have **existing snapshots** of filesystems affected by pruning which you want to keep, i.e. that should not be destroyed by zrepl.
   Make sure to actually add the necessary ``regex`` keep rules on both sides, like with ``manual`` in the example above.

.. _prune-keep-not-replicated:

Policy ``not_replicated``
-------------------------

::

   jobs:
   - type: push
     pruning:
       keep_sender:
       - type: not_replicated
       ...

``not_replicated`` keeps all snapshots that have not been replicated to the receiving side.
It only makes sense to specify this rule on a sender (source or push job).
The state required to evaluate this rule is stored in the :ref:`replication cursor bookmark <replication-cursor-and-last-received-hold>` on the sending side.

.. _prune-keep-retention-grid:

Policy ``grid``
---------------

::

   jobs:
   - type: pull
     pruning:
       keep_receiver:
       - type: grid
         regex: "^zrepl_.*"
         grid: 1x1h(keep=all) | 24x1h | 35x1d | 6x30d
               │                │
               └─ one hour interval
                                │
                                └─ 24 adjacent one-hour intervals
       ...

The retention grid can be thought of as a time-based sieve:
The ``grid`` field specifies a list of adjacent time intervals:
the left edge of the leftmost (first) interval is the ``creation`` date of the youngest snapshot.
All intervals to its right describe time intervals further in the past.
Each interval carries a maximum number of snapshots to keep.
It is specified via ``(keep=N)``, where ``N`` is either ``all`` (all snapshots are kept) or a positive integer.
The default value is **keep=1**.
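
For illustration, assume snapshots are taken every 10 minutes; then the sender grid from the example configuration at the top of this page behaves roughly as follows (a sketch of the semantics, not literal zrepl output):

::

   grid: 1x1h(keep=all) | 24x1h | 14x1d

   age 0h  .. 1h     1x1h(keep=all)   all ~6 snapshots of the last hour are kept
   age 1h  .. 25h    24x1h            one snapshot per one-hour interval (default keep=1)
   age 25h .. ~15d   14x1d            one snapshot per one-day interval
   older             (no interval)    not kept by this rule (only by other rules, if any)
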

The following procedure happens during pruning:

#. The list of snapshots is filtered by the regular expression in ``regex``.
   Only snapshots whose names match the regex are considered for this rule; all others are not affected.
#. The filtered list of snapshots is sorted by ``creation``.
#. The left edge of the first interval is aligned to the ``creation`` date of the youngest snapshot.
#. A list of buckets is created, one for each interval.
#. The list of snapshots is split up into the buckets.
#. For each bucket:

   #. the contained snapshot list is sorted by ``creation``.
   #. snapshots from the list, oldest first, are destroyed until the specified ``keep`` count is reached.
   #. all remaining snapshots on the list are kept.

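To make the bucketing procedure concrete, here is a minimal sketch in Go, the language zrepl is written in.
All identifiers are hypothetical; the sketch illustrates the algorithm described above, not zrepl's actual implementation:

::

   package main

   import (
       "fmt"
       "sort"
       "time"
   )

   // Interval is one cell of the grid: its length and how many snapshots
   // to keep within it. Keep < 0 encodes keep=all.
   type Interval struct {
       Length time.Duration
       Keep   int
   }

   // Snapshot stands in for a ZFS snapshot and its creation date.
   type Snapshot struct {
       Name     string
       Creation time.Time
   }

   // gridKeep returns the set of snapshot names kept by the grid rule.
   // snaps must already be filtered by the rule's regex.
   func gridKeep(snaps []Snapshot, grid []Interval) map[string]bool {
       keep := map[string]bool{}
       if len(snaps) == 0 {
           return keep
       }

       // Sort by creation date, youngest first.
       sort.Slice(snaps, func(i, j int) bool {
           return snaps[i].Creation.After(snaps[j].Creation)
       })

       // Align the left edge of the first interval to the youngest snapshot.
       youngest := snaps[0].Creation

       // Split the snapshots into one bucket per interval. Snapshots older
       // than the rightmost interval fall into no bucket and are therefore
       // not kept by this rule.
       buckets := make([][]Snapshot, len(grid))
       for _, s := range snaps {
           age := youngest.Sub(s.Creation)
           off := time.Duration(0)
           for i, iv := range grid {
               if age >= off && age < off+iv.Length {
                   buckets[i] = append(buckets[i], s)
                   break
               }
               off += iv.Length
           }
       }

       // Per bucket, destroy oldest-first until `keep` many remain,
       // i.e. retain the youngest Keep snapshots of each bucket.
       for i, b := range buckets {
           n := grid[i].Keep
           if n < 0 || n > len(b) {
               n = len(b)
           }
           for _, s := range b[:n] {
               keep[s.Name] = true
           }
       }
       return keep
   }

   func main() {
       now := time.Now()
       snaps := []Snapshot{
           {"zrepl_1", now},
           {"zrepl_2", now.Add(-30 * time.Minute)},
           {"zrepl_3", now.Add(-90 * time.Minute)},
           {"zrepl_4", now.Add(-100 * time.Minute)},
       }
       // grid: 1x1h(keep=all) | 1x1h  ->  keeps zrepl_1, zrepl_2, zrepl_3
       grid := []Interval{{time.Hour, -1}, {time.Hour, 1}}
       fmt.Println(gridKeep(snaps, grid))
   }
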

.. _prune-keep-last-n:

Policy ``last_n``
-----------------

::

   jobs:
   - type: push
     pruning:
       keep_receiver:
       - type: last_n
         count: 10
       ...

``last_n`` keeps the last ``count`` snapshots (last = youngest = most recent creation date).
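
For example, with ``count: 10`` and a snapshot taken every 15 minutes, roughly the snapshots of the last 2.5 hours (10 × 15 min) survive this rule; older snapshots are only kept if another keep rule matches them.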

.. _prune-keep-regex:

Policy ``regex``
----------------

::

   jobs:
   - type: push
     pruning:
       keep_receiver:
       # keep all snapshots with prefix zrepl_ or manual_
       - type: regex
         regex: "^(zrepl|manual)_.*"

   - type: push
     snapshotting:
       prefix: zrepl_
     pruning:
       keep_sender:
       # keep all snapshots that were not created by zrepl
       - type: regex
         negate: true
         regex: "^zrepl_.*"

``regex`` keeps all snapshots whose names are matched by the regular expression in ``regex``.
Like all other regular expression fields in prune policies, zrepl uses Go's `regexp.Regexp <https://golang.org/pkg/regexp/#Compile>`_ Perl-compatible regular expressions (`Syntax <https://golang.org/pkg/regexp/syntax>`_).

The optional ``negate`` boolean field inverts the semantics: use it if you want to keep all snapshots that *do not* match the given regex.
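
As an illustration of the matching semantics, a ``regex`` rule boils down to the following sketch (hypothetical helper, not zrepl's code):

::

   package main

   import (
       "fmt"
       "regexp"
   )

   // regexRuleKeeps reports whether a regex keep rule retains the snapshot
   // with the given name; with negate=true, the match result is inverted.
   func regexRuleKeeps(pattern string, negate bool, snapName string) bool {
       re := regexp.MustCompile(pattern) // Go RE2 syntax
       return re.MatchString(snapName) != negate
   }

   func main() {
       fmt.Println(regexRuleKeeps("^(zrepl|manual)_.*", false, "manual_pre-upgrade")) // true: kept
       fmt.Println(regexRuleKeeps("^zrepl_.*", true, "zrepl_20231201_120000"))        // false: negate drops zrepl_ snapshots
       fmt.Println(regexRuleKeeps("^zrepl_.*", true, "manual_pre-upgrade"))           // true: non-zrepl snapshots are kept
   }
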

.. _prune-workaround-source-side-pruning:

Source-side snapshot pruning
----------------------------

A :ref:`source job <job-source>` takes snapshots on the system it runs on.
The corresponding :ref: `pull job <job-pull>` on the replication target connects to the source job and replicates the snapshots.
Afterwards, the pull job coordinates pruning on both sender (the source job side) and receiver (the pull job side).

There is no built-in way to define and execute pruning on the source side independently of the pull side.
The source job will continue taking snapshots, which will not be pruned until the pull side connects.
This means that **extended replication downtime will fill up the source's zpool with snapshots**.

If the above is a conceivable situation for you, consider using :ref:`push mode <job-push>`, where pruning happens on the same side where snapshots are taken.

Workaround using ``snap`` job
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As a workaround (see GitHub :issue:`102` for development progress), a pruning-only :ref:`snap job <job-snap>` can be defined on the source side:
the snap job is in charge of snapshot creation & destruction, whereas the source job's role is reduced to just serving snapshots.
However, since jobs run independently of each other, it is possible that the snap job prunes snapshots that are queued for replication / destruction by the remote pull job that connects to the source job.
Symptoms of such race conditions are spurious replication and destroy errors.

Example configuration:

::

   # source side
   jobs:
   - type: snap
     snapshotting:
       type: periodic
     pruning:
       keep:
       # source side pruning rules go here
       ...
   - type: source
     snapshotting:
       type: manual
     root_fs: ...

   # pull side
   jobs:
   - type: pull
     pruning:
       keep_sender:
       # let the source-side snap job do the pruning
       - type: regex
         regex: ".*"
       ...
       keep_receiver:
       # feel free to prune on the pull side as desired
       ...