.. _usage:

*****
Usage
*****

============
CLI Overview
============
.. NOTE::

    The zrepl binary is self-documenting:
    run ``zrepl help`` for an overview of the available subcommands or ``zrepl SUBCOMMAND --help`` for information on available flags, etc.

.. _cli-signal-wakeup:
.. list-table::
    :widths: 30 70
    :header-rows: 1

    * - Subcommand
      - Description
    * - ``zrepl help``
      - show subcommand overview
    * - ``zrepl daemon``
      - run the daemon, required for all zrepl functionality
    * - ``zrepl status``
      - show job activity, or with ``--raw`` for JSON output
    * - ``zrepl stdinserver``
      - see :ref:`transport-ssh+stdinserver`
    * - ``zrepl signal wakeup JOB``
      - manually trigger replication + pruning of JOB
    * - ``zrepl signal reset JOB``
      - manually abort current replication + pruning of JOB
    * - ``zrepl configcheck``
      - check if config can be parsed without errors
    * - ``zrepl migrate``
      - | perform on-disk state / ZFS property migrations
        | (see :ref:`changelog <changelog>` for details)
    * - ``zrepl zfs-abstraction``
      - list and remove zrepl's abstractions on top of ZFS, e.g. holds and step bookmarks (see :ref:`overview <replication-cursor-and-last-received-hold>`)
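
As a quick example of how these subcommands are typically combined (the job name ``prod_to_backups`` is hypothetical, substitute a job from your own configuration)::

    # check that the configuration parses without errors
    zrepl configcheck

    # manually trigger replication + pruning of the job
    zrepl signal wakeup prod_to_backups

    # watch the job's progress
    zrepl status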

.. _usage-zrepl-daemon:

============
zrepl daemon
============

All actual work zrepl does is performed by a daemon process.
The daemon supports structured :ref:`logging <logging>` and provides :ref:`monitoring endpoints <monitoring>`.
When installing from a package, the package maintainer should have provided an init script / systemd.service file.
You should thus be able to start zrepl daemon using your init system.
Alternatively, or for running zrepl in the foreground, simply execute ``zrepl daemon``.
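
For instance, on a systemd-based distribution (the unit name ``zrepl`` is an assumption, check what your package actually installs)::

    # start via the init system
    systemctl start zrepl

    # or run in the foreground, e.g. for debugging
    zrepl daemon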

Note that you won't see much output with the :ref:`default logging configuration<logging-default-config>`:

.. ATTENTION::

    Make sure to actually monitor the error level output of zrepl: some configuration errors will not make the daemon exit.

    Example: if the daemon cannot create the :ref:`transport-ssh+stdinserver` sockets in the runtime directory,
    it will emit an error message but not exit because other tasks such as periodic snapshots & pruning are of equal importance.
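
When the daemon runs under systemd, one simple way to keep an eye on that output is ``journalctl`` (again assuming the unit is named ``zrepl``)::

    # follow the daemon's log output, including error-level messages
    journalctl -f -u zrepl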

.. _usage-zrepl-daemon-restarting:

Restarting
~~~~~~~~~~

The daemon handles SIGINT and SIGTERM for graceful shutdown.
Graceful shutdown means at worst that a job will not be rescheduled for the next interval.
The daemon exits as soon as all jobs have reported shut down.
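
A sketch of how to trigger a graceful shutdown or restart (the systemd unit name is an assumption; the ``kill`` variant applies to a daemon started manually in the foreground or by another init system)::

    # via systemd
    systemctl restart zrepl

    # or send SIGTERM to a manually started daemon
    kill -TERM "$(pidof zrepl)"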

Systemd Unit File
~~~~~~~~~~~~~~~~~

A systemd service definition template is available in :repomasterlink:`dist/systemd`.
Note that some of the options only work on recent versions of systemd.
Any help & improvements are very welcome, see :issue:`145`.

============
Ops Runbooks
============

.. toctree::

    usage/runbooks/migrating_sending_side_to_new_zpool.rst

.. _usage-platform-tests:

==============
Platform Tests
==============

Along with the main ``zrepl`` binary, we release the ``platformtest`` binaries.
The zrepl platform tests are an integration test suite that is complementary to the pure Go unit tests.
Any test that needs to interact with ZFS is a platform test.
The platform tests need to run as root.
For each test, we create a fresh dummy zpool backed by a file-based vdev.
The file path, and a root mountpoint for the dummy zpool, must be specified on the command line:

::

    mkdir -p /tmp/zreplplatformtest
    # -poolname:   the name must contain zreplplatformtest
    # -imagepath:  zrepl will create the file
    # -mountpoint: must exist
    ./platformtest \
        -poolname 'zreplplatformtest' \
        -imagepath /tmp/zreplplatformtest.img \
        -mountpoint /tmp/zreplplatformtest

.. WARNING::

    ``platformtest`` will unconditionally overwrite the file at ``imagepath``
    and unconditionally ``zpool destroy $poolname``.
    So, don't use a production poolname, and consider running the test in a VM.
    It'll be a lot faster as well because the underlying operations, ``zfs list`` in particular, will be faster.

While the platform tests are running, there will be a lot of log output.
After all tests have run, the suite prints a summary with a list of tests, grouped by result type (success, failure, skipped):
::

    PASSING TESTS:
      github.com/zrepl/zrepl/platformtest/tests.BatchDestroy
      github.com/zrepl/zrepl/platformtest/tests.CreateReplicationCursor
      github.com/zrepl/zrepl/platformtest/tests.GetNonexistent
      github.com/zrepl/zrepl/platformtest/tests.HoldsWork
      ...
      github.com/zrepl/zrepl/platformtest/tests.SendStreamNonEOFReadErrorHandling
      github.com/zrepl/zrepl/platformtest/tests.UndestroyableSnapshotParsing
    SKIPPED TESTS:
      github.com/zrepl/zrepl/platformtest/tests.SendArgsValidationEncryptedSendOfUnencryptedDatasetForbidden__EncryptionSupported_false
    FAILED TESTS: []

If there is a failure, or a skipped test that you believe should be passing, re-run the test suite, capture stderr & stdout to a text file, and create an issue on GitHub.
To run a specific test case, or a subset of tests matched by regex, use the ``-run REGEX`` command line flag.
To stop test execution at the first failing test, and prevent cleanup of the dummy zpool, use the ``-failure.stop-and-keep-pool`` flag.
To build the platform tests yourself, use ``make test-platform-bin``.
There's also the ``make test-platform`` target to run the platform tests with a default command line.
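
For example, a hypothetical invocation that runs only the tests whose names match ``Holds`` and keeps the dummy zpool around if one of them fails (paths as in the example above)::

    ./platformtest \
        -poolname 'zreplplatformtest' \
        -imagepath /tmp/zreplplatformtest.img \
        -mountpoint /tmp/zreplplatformtest \
        -run 'Holds' \
        -failure.stop-and-keep-pool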