.. include:: global.rst.inc

.. _tutorial:

Tutorial
========

This tutorial shows how zrepl can be used to implement a ZFS-based push backup.
We assume the following scenario:

* Production server ``prod`` with filesystems to back up:

  * ``zroot/var/db``
  * ``zroot/usr/home`` and all its child filesystems
  * **except** ``zroot/usr/home/paranoid``, which belongs to a user who does their own backups

* Backup server ``backups`` with

  * Filesystem ``storage/zrepl/sink/prod`` + children dedicated to backups of ``prod``

Our backup solution should fulfill the following requirements:

* Periodically snapshot the filesystems on ``prod`` *every 10 minutes*
* Incrementally replicate these snapshots to ``storage/zrepl/sink/prod/*`` on ``backups``
* Keep only very few snapshots on ``prod`` to save disk space
* Keep a fading history (24 hourly, 30 daily, 6 monthly) of snapshots on ``backups``

Analysis
--------

We can model this situation as two jobs:

* A **push job** on ``prod``

  * Creates the snapshots
  * Keeps a short history of local snapshots to enable incremental replication to ``backups``
  * Connects to the ``zrepl daemon`` process on ``backups``
  * Pushes the snapshots to ``backups``
  * Prunes snapshots on ``backups`` after replication is complete

* A **sink job** on ``backups``

  * Accepts connections & responds to requests from ``prod``
  * Limits client ``prod``'s access to the filesystem sub-tree ``storage/zrepl/sink/prod``

Install zrepl
-------------

Follow the :ref:`OS-specific installation instructions <installation>` and come back here.

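The installation page covers each platform in detail; as a rough sketch (package names are assumptions, consult the instructions for your OS): ::

   # FreeBSD
   pkg install zrepl

   # Debian/Ubuntu, assuming the zrepl apt repository is already configured
   apt install zrepl
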
Generate TLS Certificates
-------------------------

We use the :ref:`TLS client authentication transport <transport-tcp+tlsclientauth>` to protect our data on the wire.
To get things going quickly, we skip setting up a CA and generate two self-signed certificates as described :ref:`here <transport-tcp+tlsclientauth-2machineopenssl>`.
For convenience, we generate the key pairs on our local machine and distribute them using ssh:

.. code-block:: bash
   :emphasize-lines: 6,13

   openssl req -x509 -sha256 -nodes \
      -newkey rsa:4096 \
      -days 365 \
      -keyout backups.key \
      -out backups.crt
   # ... and use "backups" as Common Name (CN)

   openssl req -x509 -sha256 -nodes \
      -newkey rsa:4096 \
      -days 365 \
      -keyout prod.key \
      -out prod.crt
   # ... and use "prod" as Common Name (CN)

   ssh root@backups "mkdir /etc/zrepl"
   scp backups.key backups.crt prod.crt root@backups:/etc/zrepl

   ssh root@prod "mkdir /etc/zrepl"
   scp prod.key prod.crt backups.crt root@prod:/etc/zrepl

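Before distributing the certificates, you can optionally verify that each one carries the expected Common Name: ::

   openssl x509 -in backups.crt -noout -subject
   openssl x509 -in prod.crt -noout -subject
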
.. _tutorial-configure-prod:

Configure server ``prod``
-------------------------

We define a **push job** named ``prod_to_backups`` in ``/etc/zrepl/zrepl.yml`` on host ``prod``: ::

   jobs:
   - name: prod_to_backups
     type: push
     connect:
       type: tls
       address: "backups.example.com:8888"
       ca: /etc/zrepl/backups.crt
       cert: /etc/zrepl/prod.crt
       key: /etc/zrepl/prod.key
       server_cn: "backups"
     filesystems: {
       "zroot/var/db": true,
       "zroot/usr/home<": true,
       "zroot/usr/home/paranoid": false
     }
     snapshotting:
       type: periodic
       prefix: zrepl_
       interval: 10m
     pruning:
       keep_sender:
       - type: not_replicated
       - type: last_n
         count: 10
       keep_receiver:
       - type: grid
         grid: 1x1h(keep=all) | 24x1h | 30x1d | 6x30d
         regex: "^zrepl_"

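In the ``filesystems`` filter, the trailing ``<`` in ``zroot/usr/home<`` matches that dataset plus all its children, and the ``false`` entry excludes ``zroot/usr/home/paranoid`` again. If your zrepl version ships the ``zrepl test filesystems`` subcommand, you can check the filter result before starting the daemon (a sketch; see ``zrepl test --help`` for the exact invocation): ::

   zrepl test filesystems --job prod_to_backups --all
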
Configure server ``backups``
----------------------------

We define a corresponding **sink job** named ``sink`` in ``/etc/zrepl/zrepl.yml`` on host ``backups``: ::

   jobs:
   - name: sink
     type: sink
     serve:
       type: tls
       listen: ":8888"
       ca: "/etc/zrepl/prod.crt"
       cert: "/etc/zrepl/backups.crt"
       key: "/etc/zrepl/backups.key"
       client_cns:
         - "prod"
     root_fs: "storage/zrepl/sink"

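The sink receives each client's filesystems below ``${root_fs}/${client_identity}``, where the client identity is the CN of the client certificate; in this tutorial, ``prod``'s data thus lands under ``storage/zrepl/sink/prod``. The ``root_fs`` dataset itself must already exist, so create it if necessary (a minimal sketch, assuming the pool ``storage`` exists): ::

   ssh root@backups "zfs create -p storage/zrepl/sink"
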
Apply Configuration Changes
---------------------------

We use ``zrepl configcheck`` to catch any configuration errors: no output indicates that everything is fine.
If that is the case, restart the zrepl daemon on **both** ``prod`` and ``backups`` using ``service zrepl restart`` or ``systemctl restart zrepl``.

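For example (pick the restart command that matches your init system): ::

   zrepl configcheck

   # FreeBSD
   service zrepl restart

   # Linux with systemd
   systemctl restart zrepl
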
Watch it Work
-------------

Run ``zrepl status`` on ``prod`` to monitor the replication and pruning activity.
To re-trigger replication (snapshotting runs on its own schedule!), use ``zrepl signal wakeup prod_to_backups`` on ``prod``.

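To confirm that the snapshots actually arrive, you can also list them on the receiving side (dataset paths as configured in this tutorial): ::

   zfs list -t snapshot -r storage/zrepl/sink/prod
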
If you like tmux, here is a handy script that works on FreeBSD: ::

   pkg install gnu-watch tmux
   tmux new -s zrepl -d
   tmux split-window -t zrepl "tail -f /var/log/messages"
   tmux split-window -t zrepl "gnu-watch 'zfs list -t snapshot -o name,creation -s creation | grep zrepl_'"
   tmux split-window -t zrepl "zrepl status"
   tmux select-layout -t zrepl tiled
   tmux attach -t zrepl

The Linux equivalent might look like this: ::

   # make sure tmux is installed & let's assume you use systemd + journald
   tmux new -s zrepl -d
   tmux split-window -t zrepl "journalctl -f -u zrepl.service"
   tmux split-window -t zrepl "watch 'zfs list -t snapshot -o name,creation -s creation | grep zrepl_'"
   tmux split-window -t zrepl "zrepl status"
   tmux select-layout -t zrepl tiled
   tmux attach -t zrepl

Summary
-------

Congratulations, you have a working push backup. Where to go next?

* Read more about :ref:`configuration format, options & job types <configuration_toc>`.
* Configure :ref:`logging <logging>` & :ref:`monitoring <monitoring>`.