One-stop ZFS backup & replication solution
Christian Schwarz aeb87ffbcf daemon/job/active: push mode: awful hack for handling of concurrent snapshots + stale remote operation
We have the problem that there are legitimate use cases where a user
does not want their machine to fill up with snapshots, even if it means
that unreplicated snapshots must be destroyed.  This can be expressed by
*not* configuring the keep rule `not_replicated` for the
snapshot-creating side.  This commit only addresses push mode because we
don't support pruning in the source job. We advise users in the docs to
use push mode if they have the above use case, so this is fine - at
least for 0.1.

Ideally, the replication.Replication would communicate to the pruner
which snapshots are currently part of the replication plan, and then
we'd need some conflict resolution to determine whether it's more
important to destroy the snapshots or to replicate them (destroy should
win?).

However, we don't have the infrastructure for this yet (we could parse
the replication report, but that's just ugly).  And we want to get 0.1
out, so showtime for a dirty hack:

We start replication, and ideally, replication and pruning are done
before new snapshots have been taken. If so: great. However, what
happens if snapshots have been taken and we are not done with
replication and / or pruning?

* If replication is making progress according to its state, let it run.
This covers the *important* situation of initial replication, where
replication may easily take longer than a single snapshotting interval.

* If replication is in an error state, cancel it through context
cancellation.
    * As with the pruner below, the main problem here is that
      status output will only contain "context cancelled" after the
      cancellation, instead of showing the reason why it was cancelled.
      Not nice, but oh well, the logs provide enough detail for this
      niche situation...

* If we are past replication, we are still pruning:

* Leave the local (send-side) pruning alone.
Again, we only implement this hack for push, so we know the sender is
local, and it will only fail hard, not retry.

* If the remote (receiver-side) pruner is in an error state, cancel it
through context cancellation.

* Otherwise, let it run.

Note that every time we "let it run", we tolerate a temporary excess of
snapshots, but given sufficiently aggressive timeouts and the assumption
that the snapshot interval is much greater than the timeouts, this is
not a significant problem in practice.
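
The decision boils down to inspecting the previous invocation's state and, where it is stale, cancelling it through its context. Below is a minimal Go sketch of that idea; the names (`decide`, the `state` constants) are made-up simplifications for illustration, not the actual daemon/job types:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Hypothetical states; the real daemon uses generated state enums.
type state int

const (
	statePlanning state = iota
	stateWorking
	stateError
	stateDone
)

// decide mirrors the hack described above: a new snapshotting interval has
// begun, and we must choose whether to let the previous invocation's
// replication / receive-side pruning keep running or to cancel it.
func decide(replState, recvPrunerState state, cancel context.CancelFunc) {
	switch {
	case replState == statePlanning || replState == stateWorking:
		// Replication is making progress (e.g. a long initial replication):
		// let it run and tolerate the temporary excess of snapshots.
	case replState == stateError:
		// Stale or errored replication: cancel via context cancellation so
		// the new invocation can start. Status output will only show
		// "context canceled"; the logs carry the actual reason.
		cancel()
	case recvPrunerState == stateError:
		// Past replication, but the receive-side pruner is stuck: cancel it.
		// The local (send-side) pruner is left alone; it fails hard instead
		// of retrying.
		cancel()
	default:
		// Receive-side pruning is still making progress: let it run.
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	decide(stateError, stateWorking, cancel)
	select {
	case <-ctx.Done():
		fmt.Println("previous invocation canceled:", ctx.Err())
	case <-time.After(10 * time.Millisecond):
		fmt.Println("previous invocation left running")
	}
}
```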

zrepl

zrepl is a ZFS filesystem backup & replication solution written in Go.

User Documentation

User Documentation can be found at zrepl.github.io.

Bug Reports

  1. If the issue is reproducible, enable debug logging, reproduce and capture the log.
  2. Open an issue on GitHub, with logs pasted as GitHub gists / inline.

Feature Requests

  1. Does your feature request require default values / some kind of configuration? If so, think of an expressive configuration example.
  2. Think of at least one use case that generalizes from your concrete application.
  3. Open an issue on GitHub with example conf & use case attached.

The above does not apply if you have already implemented everything. Check out the Coding Workflow section below for details.

Package Maintainer Information

  • Follow the steps in docs/installation.rst -> Compiling from Source and read the Makefile / shell scripts used in this process.
  • Make sure your distro is compatible with the paths in docs/installation.rst.
  • Ship a default config that adheres to your distro's filesystem hierarchy (hier) and logging system.
  • Ship a service manager file and please try to upstream it to this repository.
  • Use make release ZREPL_VERSION='mydistro-1.2.3_1'
    • Your distro's name and any versioning supplemental to zrepl's (e.g. package revision) should be in this string
  • Make sure you are informed about new zrepl versions, e.g. by subscribing to GitHub's release RSS feed.

Developer Documentation

First, use ./lazy.sh devsetup to install build dependencies and read docs/installation.rst -> Compiling from Source.

Overall Architecture

The application architecture is documented as part of the user docs in the Implementation section (docs/content/impl). Make sure to develop an understanding of how zrepl is typically used by studying the user docs first.

Project Structure

├── cmd
│   ├── endpoint            # implementations of endpoints for package replication
│   ├── sampleconf          # example configuration
├── docs                    # sphinx-based documentation
│   ├── **/*.rst            # documentation in reStructuredText
│   ├── sphinxconf
│   │   └── conf.py         # sphinx config (see commit 445a280 for why it's not in docs/)
│   ├── requirements.txt    # pip3 requirements to build documentation
│   ├── publish.sh          # shell script for automated rendering & deploy to zrepl.github.io repo
│   ├── public_git          # checkout of zrepl.github.io managed by above shell script
├── logger                  # logger package used by zrepl
├── replication             # replication functionality
├── rpc                     # rpc protocol implementation
├── util
└── zfs                     # ZFS wrappers, filesystem diffing

Coding Workflow

  • Open an issue when starting to hack on a new feature
  • Commits should reference the issue they are related to
  • Docs improvements that do not document new features do not require an issue.

Breaking Changes

Backward-incompatible changes must be documented in the git commit message and are listed in docs/changelog.rst.

  • Config-breaking changes must contain a line BREAK CONFIG in the commit message (example below)
  • Other breaking changes must contain a line BREAK in the commit message
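
For illustration, a hypothetical commit message for a config-breaking change might look like this (the subject and body text are made up; only the BREAK CONFIG marker line follows the rule above):

```
config: rename some_old_option to some_new_option

BREAK CONFIG
Existing configs that use some_old_option must be updated to the new
option name.
```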

Glossary & Naming Inconsistencies

In ZFS, dataset refers to any of the objects filesystem, ZVOL, and snapshot.
However, we need a word that covers filesystems & ZVOLs but excludes snapshots, bookmarks, etc.

Toward the user, the following terminology is used:

  • filesystem: a ZFS filesystem or a ZVOL
  • filesystem version: a ZFS snapshot or a bookmark

Sadly, the zrepl implementation is inconsistent in its use of these words: variables and types are often named dataset when they in fact refer to a filesystem.

There will not be a big refactoring (an attempt was made, but it would destroy too much history without much gain).

However, new contributions & patches should fix naming without further notice in the commit message.
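
As a purely hypothetical illustration (these identifiers are made up and not actual zrepl code), such a naming fix might look like this:

```go
package example

// Hypothetical illustration only; not actual zrepl code.

// Misleading: in ZFS terminology, a snapshot is also a "dataset".
func snapshotsOfDataset(dataset string) []string {
	return []string{dataset + "@snap1", dataset + "@snap2"}
}

// Preferred: the parameter can only ever be a filesystem or ZVOL.
func snapshotsOfFilesystem(fs string) []string {
	return snapshotsOfDataset(fs)
}
```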