Commit Graph

121 Commits

Goran Mekic
bc5e1ede04
metric to detect filesystems rules that don't match any local dataset (#653)
This PR adds a Prometheus counter called
`zrepl_zfs_list_unmatched_user_specified_dataset_count`.
Monitor for increases of the counter to detect filesystem filter rules that
have no effect because they don't match any local filesystem.

An example use case for this is the following story:
1. Someone sets up zrepl with `filesystems` filter for `zroot/pg14<`.
2. During the upgrade to Postgres 15, they rename the dataset to `zroot/pg15`,
   but forget to update the zrepl `filesystems` filter.
3. zrepl will not snapshot / replicate the `zroot/pg15<` datasets.

Since `filesystems` rules are always evaluated on the side that has the datasets,
we can smuggle this functionality into the `zfs` module's `ZFSList` function that
is used by all jobs with a `filesystems` filter.
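
As a rough illustration (not the actual zrepl code), a counter with that name can be registered and incremented with the Prometheus Go client roughly like this; the `zrepl_job` label, package name, and helper function are assumptions:

```go
// Minimal sketch: counting user-specified `filesystems` rules that did
// not match any local dataset.
package zfsmetrics

import "github.com/prometheus/client_golang/prometheus"

// The metric name is the one above; the label set and helper below are
// assumptions for illustration only.
var unmatchedDatasetCount = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "zrepl",
	Subsystem: "zfs",
	Name:      "list_unmatched_user_specified_dataset_count",
	Help:      "number of user-specified dataset rules that didn't match any local dataset",
}, []string{"zrepl_job"})

func init() {
	prometheus.MustRegister(unmatchedDatasetCount)
}

// countUnmatchedRules would be called from the `zfs list` path that
// evaluates the `filesystems` filter for a job.
func countUnmatchedRules(jobName string, ruleMatchedSomething map[string]bool) {
	for _, matched := range ruleMatchedSomething {
		if !matched {
			unmatchedDatasetCount.WithLabelValues(jobName).Inc()
		}
	}
}
```

Monitoring then amounts to alerting on increases of that counter over time.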

Dashboard changes:
- histogram with increase in $__interval, one row per job
- table with increase in $__range
- explainer text box, so people know what the previous two are about
We had to re-arrange some panels, hence the Git diff isn't great.

closes https://github.com/zrepl/zrepl/pull/653

Co-authored-by: Christian Schwarz <me@cschwarz.com>
Co-authored-by: Goran Mekić <meka@tilda.center>
2023-05-02 22:13:52 +02:00
Christian Schwarz
a4cea1b4f3 go1.19: zfs.SendStream.Close() after EOF would return context cancellation error
Before upgrading to Go 1.19, these platform tests would sporadically
fail due to the reason outlined in the comment

  github.com/zrepl/zrepl/platformtest/tests.SendStreamMultipleCloseAfterEOF
  github.com/zrepl/zrepl/platformtest/tests.SendStreamCloseAfterEOFRead
2022-10-27 00:19:06 +02:00
Christian Schwarz
a6aa610165 run go1.19 gofmt and make adjustments as needed
(Go 1.19 expanded doc comment syntax)
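
For reference, a small made-up example of the expanded doc comment syntax that Go 1.19's gofmt understands and reformats:

```go
// Package example illustrates the expanded Go 1.19 doc comment syntax:
// headings, lists, and links are now recognized and reformatted by gofmt.
//
// # Example heading
//
// Supported elements include:
//   - bulleted list items like this one
//   - [links] defined at the end of the comment
//
// [links]: https://go.dev/doc/comment
package example
```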
2022-10-24 22:22:41 +02:00
Christian Schwarz
6c87bdb9fb go1.19: switch to new nolint directive that is compatible with Go 1.19 gofmt 2022-10-24 22:22:11 +02:00
Christian Schwarz
2d8c3692ec rework resume token validation to allow resuming from raw sends of unencrypted datasets
Before this change, resuming from an unencrypted dataset with
send.raw=true specified wouldn't work with zrepl due to overly
restrictive resume token checking.

An initial PR to fix this was made in https://github.com/zrepl/zrepl/pull/503
but it didn't address the core of the problem.
The core of the problem was that zrepl assumed that if a resume token
contained `rawok=true, compressok=true`, the resulting send would be
encrypted. But if the sender dataset was unencrypted, such a resume would
actually result in an unencrypted send, which could be totally legitimate,
but zrepl failed to recognize that.

BACKGROUND
==========

The following snippets of OpenZFS code are insightful regarding how the
various ${X}ok values in the resume token are handled:

- 6c3c5fcfbe/module/zfs/dmu_send.c (L1947-L2012)
- 6c3c5fcfbe/module/zfs/dmu_recv.c (L877-L891)
- https://github.com/openzfs/zfs/blob/6c3c5fc/lib/libzfs/libzfs_sendrecv.c#L1663-L1672

Basically, some zfs send flags make the DMU send code set some DMU send
stream featureflags, although it's not a pure mapping, i.e, which DMU
send stream flags are used depends somewhat on the dataset (e.g., is it
encrypted or not, or, does it use zstd or not).

Then, the receiver looks at some (but not all) feature flags and maps
them to ${X}ok dataset zap attributes.

These are funnelled back to the sender 1:1 through the resume_token.

And the sender turns them into lzc flags.

As an example, let's look at zfs send --raw.
If the sender requests a raw send on an unencrypted dataset, the send
stream (and hence the resume token) will not have the raw stream
featureflag set, and hence the resume token will not have the rawok
field set. Instead, it will have compressok, embedok, and depending
on whether large blocks are present in the dataset, largeblockok set.

WHAT'S ZREPL'S ROLE IN THIS?
============================

zrepl provides a virtual `encrypted` send flag that is like `raw`,
but further ensures that we only send encrypted datasets.

For any other resume token stuff, it shouldn't do any checking,
because it's a futile effort to keep up with ZFS send/recv features
that are orthogonal to encryption.
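
A minimal sketch of that policy (names and types are made up, not zrepl's actual send-args validation): the only check tied to encryption is against the dataset itself, never against resume-token feature flags:

```go
package main

import "fmt"

// sendArgs is a hypothetical stand-in for zrepl's send arguments.
type sendArgs struct {
	Encrypted   bool   // zrepl's virtual `encrypted` send flag
	Raw         bool   // pass-through of `zfs send --raw`
	ResumeToken string // opaque; not inspected for encryption hints
}

// validateSend sketches the policy described above: the only thing zrepl
// checks is that an encrypted send is requested from an encrypted dataset.
// Resume-token feature flags (rawok, compressok, ...) are left to ZFS.
func validateSend(fsIsEncrypted bool, a sendArgs) error {
	if a.Encrypted && !fsIsEncrypted {
		return fmt.Errorf("encrypted send requested, but dataset is not encrypted")
	}
	return nil
}

func main() {
	// resuming a raw send of an unencrypted dataset is legitimate
	fmt.Println(validateSend(false, sendArgs{Raw: true, ResumeToken: "1-..."}))
}
```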

CHANGES MADE IN THIS COMMIT
===========================

- Rip out a bunch of needless checking that zrepl would do during
  planning. These checks were there to give better error messages,
  but actually, the error messages created by the endpoint.Sender.Send
  RPC upon send args validation failure are good enough.
- Add platformtests to validate all combinations of
  (Unencrypted/Encrypted FS) x (send.encrypted = true | false) x (send.raw = true | false)
  for both non-resuming and resuming sends.

Additional manual testing done:
1. With zrepl 0.5, setup with unencrypted dataset, send.raw=true specified, no send.encrypted specified.
2. Observe that regular non-resuming send works, but resuming doesn't work.
3. Upgrade zrepl to this change.
4. Observe that both regular and resuming send works.

closes https://github.com/zrepl/zrepl/pull/613
2022-09-25 17:32:02 +02:00
3nprob
e4112d888c add ZREPL_DESTROY_MAX_BATCH_SIZE env var to control max batch destroy size
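
A rough sketch of how such an env var might translate into destroy batches (the fallback behavior and helper names are assumptions, not zrepl's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// maxBatchSize reads ZREPL_DESTROY_MAX_BATCH_SIZE; the "0 means unlimited"
// convention is an assumption of this sketch.
func maxBatchSize() int {
	if v, err := strconv.Atoi(os.Getenv("ZREPL_DESTROY_MAX_BATCH_SIZE")); err == nil && v > 0 {
		return v
	}
	return 0 // unlimited
}

// batches splits the snapshot list into chunks of at most max entries,
// each of which would become one `zfs destroy fs@a,b,c` invocation.
func batches(snaps []string, max int) [][]string {
	if max <= 0 {
		return [][]string{snaps}
	}
	var out [][]string
	for len(snaps) > 0 {
		n := max
		if len(snaps) < n {
			n = len(snaps)
		}
		out = append(out, snaps[:n])
		snaps = snaps[n:]
	}
	return out
}

func main() {
	fmt.Println(batches([]string{"s1", "s2", "s3"}, maxBatchSize()))
}
```
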
fixes #508
closes https://github.com/zrepl/zrepl/pull/604
2022-06-30 09:22:26 +02:00
Christian Schwarz
fb6a9be954 fix encrypt-on-receive with placeholders
fixes https://github.com/zrepl/zrepl/issues/504

Problem:
  plain send + recv with root_fs encrypted + placeholders causes plain recvs
  whereas the user would expect encrypt-on-recv
Reason:
  We create placeholder filesystems with -o encryption=off.
  Thus, children received below those placeholders won't inherit
  encryption of root_fs.
Fix:
  We'll have three values for `recv.placeholders.encryption: unspecified (default) | off | inherit`.
  When we create a placeholder, we will fail the operation if  `recv.placeholders.encryption = unspecified`.
  The exception is if the placeholder filesystem is to encode the client identity ($root_fs/$client_identity) in a pull job.
  Those are created in `inherit` mode if the config field is `unspecified` so that users who don't need
  placeholders are not bothered by these details.
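
A sketch of that decision logic (type and function names are made up for illustration):

```go
package main

import "fmt"

type placeholderEncryption int

const (
	encryptionUnspecified placeholderEncryption = iota
	encryptionOff
	encryptionInherit
)

// placeholderCreateArgs sketches the decision described above; the names
// are hypothetical, only the three config values come from the commit.
func placeholderCreateArgs(conf placeholderEncryption, isClientIdentityPlaceholder bool) ([]string, error) {
	switch conf {
	case encryptionUnspecified:
		if isClientIdentityPlaceholder {
			// $root_fs/$client_identity in a pull job: behave like `inherit`
			return nil, nil
		}
		return nil, fmt.Errorf("recv.placeholders.encryption is unspecified; set it to `off` or `inherit`")
	case encryptionOff:
		return []string{"-o", "encryption=off"}, nil
	case encryptionInherit:
		// no -o encryption: the placeholder inherits from its parent
		return nil, nil
	}
	return nil, fmt.Errorf("unknown value %v", conf)
}

func main() {
	args, err := placeholderCreateArgs(encryptionInherit, false)
	fmt.Println(args, err)
}
```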

Future Work:
  Automatically warn existing users of encrypt-on-recv about the problem
  if they are affected.
  The problem that I hit during implementation of this is that the
  `encryption` prop's `source` doesn't quite behave like other props:
  `source` is `default` for `encryption=off` and `-` when `encryption=on`.
  Hence, we can't use `source` to distinguish the following 2x2 cases:
  (1) placeholder created with explicit -o encryption=off
  (2) placeholder created without specifying -o encryption
  with
  (A) an encrypted parent at creation time
  (B) an unencrypted parent at creation time
2021-12-18 15:12:47 +01:00
Christian Schwarz
a6dbda1ea8 go1.17: run goimports to support the new //go:build lines 2021-10-09 16:51:08 +02:00
Christian Schwarz
4f9b63aa09 rework size estimation & dry sends
- use control connection (gRPC)
- use uint64 everywhere => fixes https://github.com/zrepl/zrepl/issues/463
- [BREAK] bump protocol version

closes https://github.com/zrepl/zrepl/pull/518
fixes https://github.com/zrepl/zrepl/issues/463
2021-10-09 15:43:27 +02:00
Christian Schwarz
a8e92971d0 zfs: rewrite SendStream, fix bug in Close() on FreeBSD, add platformtests
This commit was motivated by https://github.com/zrepl/zrepl/issues/495
where, on FreeBSD with OpenZFS 2.0, a SendStream.Close() call might wait indefinitely for `zfs send` to exit.
The reason is that, due to the refactoring done for redacted send & recv
(30af21b025),
the `dump_bytes` function, which writes to the pipe, executes in a separate thread (synctask taskq) iff not `HAVE_LARGE_STACKS`.
The `zfs send` process/thread waits for that taskq thread using an uninterruptible primitive.
So when we SIGKILL `zfs send`, that signal doesn't reach the right thread to interrupt the pipe write.

Theoretically this affects both Linux and FreeBSD, but most Linux users `HAVE_LARGE_STACKS` and since https://github.com/openzfs/zfs/pull/12350/files OpenZFS on FreeBSD `HAVE_LARGE_STACKS` as well.
However, at least until FreeBSD 13.1, possibly for the entire 13 lifecycle, we're going to have to live with that oddity.

Measures taken in this commit:
- Report the behavior as an upstream bug https://github.com/openzfs/zfs/issues/12500
- Change SendStream code so that it closes zrepl's read-end of the pipe (see comment in code)
- Clean up and make explicit SendStream's state handling
- Write extensive platformtests for SendStream
    - They pass on my Linux install and on FreeBSD 12
    - FreeBSD 13 still needs testing.
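
A minimal sketch of the read-end-close pattern under the assumptions above (made-up dataset name, error handling elided): once the read end is closed, a blocked pipe write fails with EPIPE, so the killed process can actually exit and Wait() returns:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// zrepl-style setup: `zfs send` writes into the write end of an
	// os.Pipe, zrepl reads the stream from the read end.
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	cmd := exec.Command("zfs", "send", "pool/ds@snap")
	cmd.Stdout = w
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	w.Close() // the parent no longer needs the write end

	// ... read part of the stream from r, then decide to abort ...

	// SIGKILL alone may leave the kernel/taskq thread blocked in the
	// pipe write (the bug described above); closing our read end makes
	// that write fail with EPIPE so the process can actually exit.
	_ = cmd.Process.Kill()
	_ = r.Close()
	_ = cmd.Wait()
}
```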

fixes https://github.com/zrepl/zrepl/issues/495
2021-09-19 20:11:31 +02:00
Lukas Schauer
ee2336a24b zfs: pipe size: default to value of /proc/sys/fs/pipe-max-size
Addition by @problame: move getPipeCapacityHint() into platform-specific
code. This has the added benefit of not recognizing the envvar as an
envconst on platforms that do not support resizing pipes. => won't show
up in (zrepl status --raw).Global.Envconst
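
A sketch of the Linux-specific part, assuming golang.org/x/sys/unix (not the actual zrepl code): read the limit from /proc/sys/fs/pipe-max-size and apply it with fcntl(F_SETPIPE_SZ), which is best-effort:

```go
//go:build linux

package main

import (
	"bytes"
	"os"
	"strconv"

	"golang.org/x/sys/unix"
)

// getPipeCapacityHint defaults to the value of /proc/sys/fs/pipe-max-size,
// the largest capacity an unprivileged process may request.
func getPipeCapacityHint() int {
	b, err := os.ReadFile("/proc/sys/fs/pipe-max-size")
	if err != nil {
		return 0
	}
	v, err := strconv.Atoi(string(bytes.TrimSpace(b)))
	if err != nil {
		return 0
	}
	return v
}

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	defer r.Close()
	defer w.Close()
	if hint := getPipeCapacityHint(); hint > 0 {
		// F_SETPIPE_SZ is best-effort; the kernel may round the value up
		_, _ = unix.FcntlInt(w.Fd(), unix.F_SETPIPE_SZ, hint)
	}
}
```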

fixes #424
closes #449
2021-03-25 22:24:50 +01:00
Christian Schwarz
5c6d69a69c zfs: PropertySource: set type to uint32 so that enumer-generated code is platform-independent
make zrepl-bin test-platform-bin vet lint GOOS=freebsd   GOARCH=386
make[2]: Entering directory '/src'
GO111MODULE=on go build -mod=readonly  -ldflags "-X github.com/zrepl/zrepl/version.zreplVersion=v0.3.1-20-g07f2bff" -o "artifacts/zrepl-freebsd-386"
zfs/propertysource_enumer.go:41:9: constant 18446744073709551615 overflows PropertySource
zfs/propertysource_enumer.go:48:66: constant 18446744073709551615 overflows PropertySource
zfs/propertysource_enumer.go:57:23: constant 18446744073709551615 overflows PropertySource

fixes #429
2021-03-14 22:32:45 +01:00
InsanePrawn
393fc10a69 [#285] support setting zfs send / recv flags in the config (send: -wLcepbS, recv: -ox)
Co-authored-by: Christian Schwarz <me@cschwarz.com>
Signed-off-by: InsanePrawn <insane.prawny@gmail.com>

closes #285
closes #276
closes #24
2021-02-20 17:20:45 +01:00
Christian Schwarz
1c937e58f7 zfs.NilBool: document its purpose and move it to its own package 'nodefault' 2021-02-20 17:04:57 +01:00
Christian Schwarz
70bbdfe760 zfs: ResumeToken: parse embedok, largeblockok, savedok if available
Developed for #285 but ultimately not used for it.
2021-02-20 17:04:57 +01:00
Christian Schwarz
c420f3c909 [#381] zfs: ListFilesystemVersions: make list filesystems version invocation deterministic
fixes #381
ref #379
2020-11-01 13:59:21 +01:00
Christian Schwarz
af2d6579c5 [#347] zfscmd: fix dangling trace Task on .Start() failure
fixes #347
2020-09-02 22:45:44 +02:00
Christian Schwarz
0f3da73ef1 [#347] zfscmd + zfs: define .Start() semantics, apply to call sites in pkg zfs
fixes #347
2020-09-02 22:45:44 +02:00
InsanePrawn
180c3d9ae1 Reformat all files with make format.
Signed-off-by: InsanePrawn <insane.prawny@gmail.com>
2020-08-31 23:57:45 +02:00
Christian Schwarz
0ee7a49d31 [#289] zfs: workaround for OpenZFS 0.7 dry send info with zero estimated size
fixes #289
2020-07-26 20:32:35 +02:00
Christian Schwarz
30cdc1430e replication + endpoint: replication guarantees: guarantee_{resumability,incremental,nothing}
This commit

- adds a configuration in which no step holds, replication cursors, etc. are created
- removes the send.step_holds.disable_incremental setting
- creates a new config option `replication` for active-side jobs
- adds the replication.protection.{initial,incremental} settings, each
  of which can have values
    - `guarantee_resumability`
    - `guarantee_incremental`
    - `guarantee_nothing`
  (refer to docs/configuration/replication.rst for semantics)

The `replication` config from an active side is sent to both endpoint.Sender and endpoint.Receiver
for each replication step. Sender and Receiver then act accordingly.

For `guarantee_incremental`, we add the new `tentative-replication-cursor` abstraction.
The necessity for that abstraction is outlined in https://github.com/zrepl/zrepl/issues/340.
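
A rough sketch of how the three guarantee levels might map onto the per-step abstractions (names and the exact mapping are assumptions based on the description above):

```go
package main

import "fmt"

type guaranteeKind int

const (
	guaranteeResumability guaranteeKind = iota
	guaranteeIncremental
	guaranteeNothing
)

// stepAbstractions sketches (with made-up names) which ZFS-side
// abstractions each guarantee level would ask the endpoints to maintain.
func stepAbstractions(k guaranteeKind) (stepHolds, tentativeCursor, cursor bool) {
	switch k {
	case guaranteeResumability:
		// holds keep the step's `from` and `to` versions around
		return true, false, true
	case guaranteeIncremental:
		// cheaper: only ensure an incremental source survives,
		// via the tentative replication cursor bookmark
		return false, true, true
	case guaranteeNothing:
		return false, false, false
	}
	panic(fmt.Sprintf("unknown guarantee kind %d", k))
}

func main() {
	fmt.Println(stepAbstractions(guaranteeIncremental))
}
```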

fixes https://github.com/zrepl/zrepl/issues/340
2020-07-26 20:32:35 +02:00
Christian Schwarz
8ff83f2f1a [#342] endpoint: always create unencrypted placeholder filesystems
This "breaks" the use case of receiving an unencrypted send into an encrypted receiver by setting the receiver's `root_fs`'s `encryption=on`.
"breaks" in air-quotes because we have not yet released a version of
zrepl with encrypted send support.

We will bring back the feature outlined above in a future release.
See https://github.com/zrepl/zrepl/issues/342#issuecomment-657231818 and following.
2020-07-26 20:32:35 +02:00
Christian Schwarz
175ad1dd0b zfs: ZFSListFilesystemVersions: remove handling of io.ErrUnexpectedEOF
ZFSListChan returns (*DatasetDoesNotExist) for the case mentioned in the comment
2020-06-14 15:21:36 +02:00
Christian Schwarz
728e97700f zfs: fix error message formatting for send args validation 2020-06-14 15:21:36 +02:00
Christian Schwarz
6e927f20f9 [#321] platformtest: minimal integration tests for package replication
# Conflicts:
#	platformtest/tests/generated_cases.go
2020-06-14 15:21:36 +02:00
Christian Schwarz
292b85b5ef [#316] endpoint / replication protocol: more robust step-holds and replication cursor management
- drop HintMostRecentCommonAncestor rpc call
    - it is wrong to put faith into the active side of the replication to always make that call
      (we might not trust it, ref pull setup)
- clean up step holds + step bookmarks + replication cursor bookmarks on
  send RPC instead
    - this makes it symmetric with Receive RPC
- use a cache (endpoint.sendAbstractionsCache) to avoid the cost of
  listing the on-disk endpoint abstractions state on every step

The "create" methods for endpoint abstractions (CreateReplicationCursor, HoldStep) are now fully
idempotent and return an Abstraction.

Notes about endpoint.sendAbstractionsCache:
- fills lazily from disk state on first `Get` operation
- fill from disk is generally only attempted once, unless
    - the `ListAbstractions` call fails, in which case the fill from
      disk is retried on next `Get` (the current `Get` will observe a
      subset of the actual on-disk abstractions), or
    - the `Invalidate` method is called
- it is a global (zrepl process-wide) cache

fixes #316
2020-06-14 15:21:36 +02:00
Christian Schwarz
10a14a8c50 [#307] add package trace, integrate it with logging, and adopt it throughout zrepl
package trace:

- introduce the concept of tasks and spans, tracked as linked list within ctx
    - see package-level docs for an overview of the concepts
    - **main feature 1**: unique stack of task and span IDs
        - makes it easy to follow a series of log entries in concurrent code
    - **main feature 2**: ability to produce a chrome://tracing-compatible trace file
        - either via an env variable or a `zrepl pprof` subcommand
        - this is not a CPU profile, we already have go pprof for that
        - but it is very useful to visually inspect where the
          replication / snapshotter / pruner spends its time
          ( fixes #307 )

usage in package daemon/logging:

- goal: every log entry should have a trace field with the ID stack from package trace

- make `logging.GetLogger(ctx, Subsys)` the authoritative `logger.Logger` factory function
    - the context carries a linked list of injected fields which
      `logging.GetLogger` adds to the logger it returns
    - `logging.GetLogger` also uses package `trace` to get the
      task-and-span-stack and injects it into the returned logger's fields
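
A generic sketch of the linked-list-in-context idea (not the actual package trace API; all names are made up):

```go
package main

import (
	"context"
	"fmt"
	"strings"
)

// node is one element of the linked list stored in the ctx: each task or
// span points at its parent, so the full ID stack can be reconstructed.
type node struct {
	parent *node
	id     string
}

type ctxKey struct{}

// withSpan pushes a new span onto the stack carried by ctx.
func withSpan(ctx context.Context, id string) context.Context {
	parent, _ := ctx.Value(ctxKey{}).(*node)
	return context.WithValue(ctx, ctxKey{}, &node{parent: parent, id: id})
}

// stack renders the ID stack, e.g. for a `trace` log field.
func stack(ctx context.Context) string {
	var ids []string
	for n, _ := ctx.Value(ctxKey{}).(*node); n != nil; n = n.parent {
		ids = append([]string{n.id}, ids...)
	}
	return strings.Join(ids, "$")
}

func main() {
	ctx := withSpan(context.Background(), "job-push")
	ctx = withSpan(ctx, "replication")
	ctx = withSpan(ctx, "step-1")
	fmt.Println(stack(ctx)) // job-push$replication$step-1
}
```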
2020-05-19 11:30:02 +02:00
Christian Schwarz
0e5c77d2be [#277] rpc + zfs: drop zfs.StreamCopier, use io.ReadCloser instead 2020-05-18 19:46:24 +02:00
Christian Schwarz
70f9c6482f zfs: context propagation to ZFSListFilesystemVersions
fixup of 9568e46f05
2020-04-21 14:10:53 +02:00
Christian Schwarz
aed6149c8c zfscmd: fix crash in zfscmd_prometheus.go due to incorrectly extracted ProcessState
fixup of 96e188d7c4
refs #196
refs #301

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x9a472a]

goroutine 15826 [running]:
os.(*ProcessState).systemTime(...)
        /home/cs/go1.13/src/os/exec_unix.go:98
os.(*ProcessState).SystemTime(...)
        /home/cs/go1.13/src/os/exec.go:141
github.com/zrepl/zrepl/zfs/zfscmd.waitPostPrometheus(0xc000c04800, 0xe21ce0, 0xc000068270, 0xbf9f80d88107e861, 0x19bae710e6, 0x13a8b60)
        /home/cs/zrepl/zrepl/zfs/zfscmd/zfscmd_prometheus.go:69 +0x22a
github.com/zrepl/zrepl/zfs/zfscmd.(*Cmd).waitPost(0xc000c04800, 0xe21ce0, 0xc000068270)
        /home/cs/zrepl/zrepl/zfs/zfscmd/zfscmd.go:155 +0x18a
github.com/zrepl/zrepl/zfs/zfscmd.(*Cmd).CombinedOutput(0xc000c04800, 0xc0004b8270, 0xd02eea, 0x3, 0xc0001f6c40, 0x3)
        /home/cs/zrepl/zrepl/zfs/zfscmd/zfscmd.go:40 +0xb3
github.com/zrepl/zrepl/zfs.ZFSRelease(0xe36aa0, 0xc0004b8270, 0xc0009a3a40, 0x13, 0xc0004a5d00, 0x1, 0x1, 0xed62eb221, 0x13a8b60)
        /home/cs/zrepl/zrepl/zfs/holds.go:102 +0x2a7
github.com/zrepl/zrepl/endpoint.ReleaseStep(0xe36aa0, 0xc0004b8270, 0xc0004befc0, 0xe, 0xd08482, 0x8, 0xc0001cb02f, 0x2, 0x1eeea3bff89dc90b, 0x134d6, ...)
        /home/cs/zrepl/zrepl/endpoint/endpoint_zfs_abstraction_step.go:130 +0x367
github.com/zrepl/zrepl/endpoint.(*Sender).SendCompleted.func2(0xc000459190, 0xc000390e30, 0xc00041fd80, 0xc0004befc0, 0xe, 0xd08482, 0x8, 0xc0001cb02f, 0x2, 0x1eeea3bff89dc90b, ...)
        /home/cs/zrepl/zrepl/endpoint/endpoint.go:419 +0x1c3
created by github.com/zrepl/zrepl/endpoint.(*Sender).SendCompleted
        /home/cs/zrepl/zrepl/endpoint/endpoint.go:413 +0x776
2020-04-21 14:10:25 +02:00
Christian Schwarz
0834a184b8 zfscmd: do not do duplicate waitPre callbacks
it just makes sense that if we only dispatch one waitPost, we should
also only dispatch one waitPre
2020-04-21 14:10:18 +02:00
Christian Schwarz
e0b5bd75f8 endpoint: refactor, fix stale holds on initial replication failure, zfs-abstractions subcmd, more efficient ZFS queries
The motivation for this refactoring is based on two independent issues:

- @JMoVS found that the changes merged as part of #259 slowed his OS X
  based installation down significantly.
  Analysis of the zfs command logging introduced in #296 showed that
  `zfs holds` took most of the execution time, and they pointed out
  that not all of those `zfs holds` invocations were actually necessary.
  I.e.: zrepl was inefficient about retrieving information from ZFS.

- @InsanePrawn found that failures on initial replication would lead
  to step holds accumulating on the sending side, i.e. they would never
  be cleaned up in the HintMostRecentCommonAncestor RPC handler.
  That was because we only sent that RPC if there was a most recent
  common ancestor detected during replication planning.
  @InsanePrawn prototyped an implementation of a `zrepl zfs-abstractions release`
  command to mitigate the situation.
  As part of that development work and back-and-forth with @problame,
  it became evident that the abstractions that #259 built on top of
  zfs in package endpoint (step holds, replication cursor,
  last-received-hold), were not well-represented for re-use in the
  `zrepl zfs-abstractions release` subcommand prototype.

This commit refactors package endpoint to address both of these issues:

- endpoint abstractions now share an interface `Abstraction` that, among
  other things, provides a uniform `Destroy()` method.
  However, that method should not be called directly; instead,
  the package-level `BatchDestroy` function should be used in order
  to allow for a migration to zfs channel programs in the future.

- endpoint now has a query facility (`ListAbstractions`) which is
  used to find on-disk
    - step holds and bookmarks
    - replication cursors (v1, v2)
    - last-received-holds
  By describing the query in a struct, we can centralize the retrieval
  of information via the ZFS CLI and only have to be clever once.
  We are "clever" in the following ways:
  - When asking for hold-based abstractions, we only run `zfs holds` on
    snapshots that have `userrefs` > 0
    - To support this functionality, add field `UserRefs` to zfs.FilesystemVersion
      and retrieve it anywhere we retrieve zfs.FilesystemVersion from ZFS.
  - When asking only for bookmark-based abstractions, we only run
    `zfs list -t bookmark`, not with snapshots.
  - Currently unused (except for CLI) per-filesystem concurrent lookup
  - Option to only include abstractions with CreateTXG in a specified range

- refactor `endpoint`'s various ZFS info  retrieval methods to use
  `ListAbstractions`

- rename the `zrepl holds list` command to `zrepl zfs-abstractions list`
- make `zrepl zfs-abstractions list` consume endpoint.ListAbstractions

- Add a `ListStale` method which, given a query template,
  lists stale holds and bookmarks.
  - it accounts for the different replication cursor modes (v1, v2)
- the new `zrepl zfs-abstractions release-{all,stale}` commands can be used
  to remove abstractions of package endpoint

- Adjust HintMostRecentCommonAncestor RPC for stale-holds cleanup:
    - send it also if no most recent common ancestor exists between sender and receiver
    - have the sender clean up its abstractions when it receives the RPC
      with no most recent common ancestor, using `ListStale`
    - Due to changed semantics, bump the protocol version.

- Adjust HintMostRecentCommonAncestor RPC for performance problems
  encountered by @JMoVS
    - by default, per (job,fs)-combination, only consider cleaning
      step holds in the createtxg range
      `[last replication cursor,conservatively-estimated-receive-side-version)`
    - this behavior ensures resumability at cost proportional to the
      time that replication was down
    - however, as explained in a comment, we might leak holds if
      the zrepl daemon stops running
    - that trade-off is acceptable because in the presumably rare case
      that this might happen, the user has two tools at hand:
    - Tool 1: run `zrepl zfs-abstractions release-stale`
    - Tool 2: use env var `ZREPL_ENDPOINT_SENDER_HINT_MOST_RECENT_STEP_HOLD_CLEANUP_MODE`
      to adjust the lower bound of the createtxg range (search for it in the code).
      The env var can also be used to disable hold-cleanup on the
      send-side entirely.

supersedes closes #293
supersedes closes #282
fixes #280
fixes #278

Additionally, we fixed a couple of bugs:

- zfs: fix half-nil error reporting of dataset-does-not-exist for ZFSListChan and ZFSBookmark

- endpoint: Sender's `HintMostRecentCommonAncestor` handler would not
  check whether access to the specified filesystem was allowed.
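
A rough sketch of the query-in-a-struct idea with the createtxg range and `userrefs` optimizations described above (all field and function names are made up):

```go
package main

import "fmt"

// abstractionQuery sketches the "describe the query in a struct" idea;
// all field names here are made up for illustration.
type abstractionQuery struct {
	FS           string   // filesystem to inspect ("" = all)
	Kinds        []string // e.g. step-hold, replication-cursor-v2, last-received-hold
	CreateTXGMin uint64   // inclusive lower bound, 0 = unbounded
	CreateTXGMax uint64   // exclusive upper bound, 0 = unbounded
	Concurrency  int      // per-filesystem concurrent lookup
}

type version struct {
	Name      string
	CreateTXG uint64
	UserRefs  uint64 // `userrefs` property; holds exist only if > 0
}

// needsZFSHolds reports whether the query requires running `zfs holds`
// for this version at all, which is the main cost saving described above.
func needsZFSHolds(q abstractionQuery, v version) bool {
	if v.UserRefs == 0 {
		return false
	}
	if q.CreateTXGMin != 0 && v.CreateTXG < q.CreateTXGMin {
		return false
	}
	if q.CreateTXGMax != 0 && v.CreateTXG >= q.CreateTXGMax {
		return false
	}
	return true
}

func main() {
	q := abstractionQuery{Kinds: []string{"step-hold"}, CreateTXGMin: 100}
	fmt.Println(needsZFSHolds(q, version{Name: "@step", CreateTXG: 120, UserRefs: 1}))
}
```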
2020-04-18 12:26:03 +02:00
Christian Schwarz
96e188d7c4 zfscmd: fix nil deref in waitPostLogging when command was killed
fixes #301
2020-04-08 00:26:56 +02:00
Christian Schwarz
1336c91865 zfs: introduce pkg zfs/zfscmd for command logging, status, prometheus metrics
refs #196
2020-04-05 20:47:25 +02:00
InsanePrawn
9568e46f05 zfs: use exec.CommandContext everywhere
Co-authored-by: InsanePrawn <insane.prawny@gmail.com>
2020-03-27 13:08:43 +01:00
InsanePrawn
44bd354eae Spellcheck all files
Signed-off-by: InsanePrawn <insane.prawny@gmail.com>
2020-02-24 16:06:09 +01:00
Christian Schwarz
1462c5caa5 zfs: fix batch destroy panic if all snaps are undestroyable
See https://github.com/zrepl/zrepl/pull/259#issuecomment-585334023

panic: runtime error: index out of range [0] with length 0
goroutine 14 [running]:
github.com/zrepl/zrepl/zfs.tryBatch(0xd6aa20, 0xc0000b8018, 0xc00025e0c0, 0x0, 0x6, 0xd61d80, 0x1280df8, 0xd58920, 0xc000132000)
        zrepl/zfs/versions_destroy.go:129 +0x302
github.com/zrepl/zrepl/zfs.doDestroyBatchedRec(0xd6aa20, 0xc0000b8018, 0xc000578a80, 0x6, 0x6, 0xd61d80, 0x1280df8)
        zrepl/zfs/versions_destroy.go:184 +0x4a5
github.com/zrepl/zrepl/zfs.doDestroyBatched(0xd6aa20, 0xc0000b8018, 0xc000222780, 0x6, 0x8, 0xd61d80, 0x1280df8)
        zrepl/zfs/versions_destroy.go:95 +0xc7
github.com/zrepl/zrepl/zfs.doDestroy(0xd6aa20, 0xc0000b8018, 0xc0005788d0, 0x6, 0x6, 0xd61d80, 0x1280df8)
        zrepl/zfs/versions_destroy.go:82 +0x362
github.com/zrepl/zrepl/zfs.ZFSDestroyFilesystemVersions(...)
        zrepl/zfs/versions_destroy.go:41
github.com/zrepl/zrepl/endpoint.doDestroySnapshots(0xd6aaa0, 0xc0004412c0, 0xc00057ca00, 0xc0005785a0, 0x6, 0x6, 0xb68940, 0xc5df01, 0xc000150a80)
        zrepl/endpoint/endpoint.go:785 +0x388
github.com/zrepl/zrepl/endpoint.(*Receiver).DestroySnapshots(0xc000127500, 0xd6aaa0, 0xc0004412c0, 0xc0002ca280, 0xc000150880, 0xd73ca0, 0xc00057c960)
        zrepl/endpoint/endpoint.go:751 +0xdb
github.com/zrepl/zrepl/daemon/pruner.doOneAttemptExec(0xc000429980, 0xc000429958, 0xc0001cb180)
        zrepl/daemon/pruner/pruner.go:531 +0x51f
github.com/zrepl/zrepl/daemon/pruner.doOneAttempt(0xc000429980, 0xc000429958)
        zrepl/daemon/pruner/pruner.go:486 +0x1064
github.com/zrepl/zrepl/daemon/pruner.(*Pruner).prune(0xc00011e280, 0xd6aaa0, 0xc0004412c0, 0x7f4906fff7e8, 0xc000127500, 0x7f4906fff738, 0xc0001324e0, 0xc000064420, 0x1, 0x1, ...)
        zrepl/daemon/pruner/pruner.go:214 +0x53
github.com/zrepl/zrepl/daemon/pruner.(*Pruner).Prune(...)
        zrepl/daemon/pruner/pruner.go:200
github.com/zrepl/zrepl/daemon/job.(*ActiveSide).do(0xc000268000, 0xd6a9e0, 0xc0002223c0)
        zrepl/daemon/job/active.go:482 +0x906
github.com/zrepl/zrepl/daemon/job.(*ActiveSide).Run(0xc000268000, 0xd6aaa0, 0xc000127080)
        zrepl/daemon/job/active.go:404 +0x289
github.com/zrepl/zrepl/daemon.(*jobs).start.func1(0xc000032200, 0xd73ca0, 0xc00000f2e0, 0xd6efa0, 0xc000268000, 0xd6aaa0, 0xc000126c90)
        zrepl/daemon/daemon.go:220 +0x121
created by github.com/zrepl/zrepl/daemon.(*jobs).start
        zrepl/daemon/daemon.go:216 +0x52e
2020-02-14 22:00:13 +01:00
Christian Schwarz
58c08c855f new features: {resumable,encrypted,hold-protected} send-recv, last-received-hold
- **Resumable Send & Recv Support**
  No knobs required, automatically used where supported.
- **Hold-Protected Send & Recv**
  Automatic ZFS holds to ensure that we can always resume a replication step.
- **Encrypted Send & Recv Support** for OpenZFS native encryption.
  Configurable at the job level, i.e., for all filesystems a job is responsible for.
- **Receive-side hold on last received dataset**
  The counterpart to the replication cursor bookmark on the send-side.
  Ensures that incremental replication will always be possible between a sender and receiver.

Design Doc
----------

`replication/design.md` doc describes how we use ZFS holds and bookmarks to ensure that a single replication step is always resumable.

The replication algorithm described in the design doc introduces the notion of job IDs (please read the details on this design doc).
We reuse the job names for job IDs and use `JobID` type to ensure that a job name can be embedded into hold tags, bookmark names, etc.
This might BREAK CONFIG on upgrade.

Protocol Version Bump
---------------------

This commit makes backwards-incompatible changes to the replication/pdu protobufs.
Thus, bump the version number used in the protocol handshake.

Replication Cursor Format Change
--------------------------------

The new replication cursor bookmark format is: `#zrepl_CURSOR_G_${this.GUID}_J_${jobid}`
Including the GUID enables transaction-safe moving-forward of the cursor.
Including the job id enables that multiple sending jobs can send the same filesystem without interfering.
The `zrepl migrate replication-cursor:v1-v2` subcommand can be used to safely destroy old-format cursors once zrepl has created new-format cursors.
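
A small sketch of how the new cursor name can be assembled from the format above (function name and the exact GUID formatting are assumptions):

```go
package main

import "fmt"

// replicationCursorBookmarkName builds the v2 cursor bookmark name in the
// format quoted above: #zrepl_CURSOR_G_${guid}_J_${jobid}.
// The hex formatting of the GUID is an assumption of this sketch.
func replicationCursorBookmarkName(fs string, guid uint64, jobID string) string {
	return fmt.Sprintf("%s#zrepl_CURSOR_G_%016x_J_%s", fs, guid, jobID)
}

func main() {
	fmt.Println(replicationCursorBookmarkName("zroot/data", 0xdeadbeef, "prod-push"))
}
```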

Changes in This Commit
----------------------

- package zfs
  - infrastructure for holds
  - infrastructure for resume token decoding
  - implement a variant of OpenZFS's `entity_namecheck` and use it for validation in new code
  - ZFSSendArgs to specify a ZFS send operation
    - validation code protects against malicious resume tokens by checking that the token encodes the same send parameters that the send-side would use if no resume token were available (i.e. same filesystem, `fromguid`, `toguid`)
  - RecvOptions support for `recv -s` flag
  - convert a bunch of ZFS operations to be idempotent
    - achieved through more differentiated error message scraping / additional pre-/post-checks

- package replication/pdu
  - add field for encryption to send request messages
  - add fields for resume handling to send & recv request messages
  - receive requests now contain `FilesystemVersion To` in addition to the filesystem into which the stream should be `recv`d
    - can use `zfs recv $root_fs/$client_id/path/to/dataset@${To.Name}`, which enables additional validation after recv (i.e. whether `To.Guid` matched what we received in the stream)
    - used to set `last-received-hold`
- package replication/logic
  - introduce `PlannerPolicy` struct, currently only used to configure whether encrypted sends should be requested from the sender
  - integrate encryption and resume token support into `Step` struct

- package endpoint
  - move the concepts that endpoint builds on top of ZFS to a single file `endpoint/endpoint_zfs.go`
    - step-holds + step-bookmarks
    - last-received-hold
    - new replication cursor + old replication cursor compat code
  - adjust `endpoint/endpoint.go` handlers for
    - encryption
    - resumability
    - new replication cursor
    - last-received-hold

- client subcommand `zrepl holds list`: list all holds and hold-like bookmarks that zrepl thinks belong to it
- client subcommand `zrepl migrate replication-cursor:v1-v2`
2020-02-14 22:00:13 +01:00
Christian Schwarz
e35320f8ee zfs: make StreamCopier wrapper for io.ReadCloser public 2020-02-14 21:42:03 +01:00
Christian Schwarz
6ebd9f1037 zfs: recv: fix deadlock if streamCopier returns io.EOF
Close the write end of the pipe
* before we start waiting on the error channels
* and after everything from streamCopier has been written to the pipe
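
A minimal sketch of that ordering, with made-up names and simplified error handling (not zrepl's actual recv code):

```go
package main

import (
	"io"
	"os"
	"os/exec"
)

// recvStream sketches the ordering fix: the write end of the pipe must be
// closed after the copy finishes and before we wait for `zfs recv`,
// otherwise recv never sees EOF and both sides block.
func recvStream(stream io.Reader, fs string) error {
	r, w, err := os.Pipe()
	if err != nil {
		return err
	}
	cmd := exec.Command("zfs", "recv", "-s", fs)
	cmd.Stdin = r
	if err := cmd.Start(); err != nil {
		r.Close()
		w.Close()
		return err
	}
	r.Close() // the child holds its own copy of the read end

	waitErr := make(chan error, 1)
	go func() { waitErr <- cmd.Wait() }()

	_, copyErr := io.Copy(w, stream) // copyErr is nil when stream hits io.EOF
	w.Close()                        // signal EOF to `zfs recv` *before* waiting

	if err := <-waitErr; err != nil {
		return err
	}
	return copyErr
}

func main() {
	_ = recvStream(os.Stdin, "pool/dst")
}
```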
2020-02-14 21:42:03 +01:00
Christian Schwarz
99ab16d7be zfs: send: improve error reporting by capturing stderr 2020-02-14 21:42:03 +01:00
Juergen Hoetzel
d35e2400b2 transport/{TCP,TLS}: optional IP_FREEBIND / IP_BINDANY bind socketops
Allows binding to an address even if it is not actually (yet or ever)
configured. Fixes #238

Rationale:
https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/#whatdoesthismeanformeadeveloper
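
On Linux this can be done with a net.ListenConfig control function before bind(2); a minimal sketch using golang.org/x/sys/unix (Linux-only; FreeBSD's IP_BINDANY is analogous, and this is not the actual zrepl transport code):

```go
//go:build linux

package main

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	// IP_FREEBIND lets bind() succeed even if the address is not (yet)
	// configured on any interface.
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP, unix.IP_FREEBIND, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}
	l, err := lc.Listen(context.Background(), "tcp", "192.0.2.10:8888")
	if err != nil {
		panic(err)
	}
	defer l.Close()
}
```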
2020-01-04 17:21:48 +01:00
chenhao
c396f9508a zfs: replace hard coded zfs command in ZFSDestroy
fixes #231
2019-10-16 10:22:53 +02:00
Christian Schwarz
b9933f6cb2 platformtest: add zfsGet bookmark handling & replicationCursor tests
This encodes the observation made in issue #230:
in the ZFS version shipped in Ubuntu 16.04,
`zfs get someprop a#bookmark` does not work.
2019-10-14 17:54:14 +02:00
Christian Schwarz
0ba4b5eda6 zfs: helper for ZFSGet guid and createtxg 2019-10-14 17:54:14 +02:00
Christian Schwarz
a6497b2c6e add platformtest: infrastructure for ZFS compatibility testing 2019-09-14 13:43:46 +02:00
Christian Schwarz
07956c2299 zfs,endpoint: use zfs destroy batch syntax if available
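
The batch syntax destroys several snapshots of one filesystem in a single invocation, e.g. `zfs destroy pool/fs@snap1,snap2,snap3`; a small sketch of assembling that argument (helper name made up, feature detection elided):

```go
package main

import (
	"fmt"
	"strings"
)

// destroyArgs builds the argument list for the batch syntax. Whether the
// installed ZFS supports it has to be probed separately; the fallback is
// one `zfs destroy` per snapshot.
func destroyArgs(fs string, snaps []string) []string {
	return []string{"destroy", fs + "@" + strings.Join(snaps, ",")}
}

func main() {
	fmt.Println(destroyArgs("pool/fs", []string{"snap1", "snap2", "snap3"}))
}
```
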
refs #72
2019-09-14 13:43:46 +02:00
Christian Schwarz
e5f944c2f8 zfs: zfsGet: return *ZFSError on exec failure
refs #178
2019-09-07 20:12:46 +02:00
Christian Schwarz
d81a1818d6 endpoint: Receiver: only create placeholders below root_fs
fixes #195
2019-09-07 20:01:15 +02:00
Christian Schwarz
5b97953bfb run golangci-lint and apply suggested fixes 2019-03-27 13:12:26 +01:00