Commit Graph

55 Commits

Christian Schwarz
1c937e58f7 zfs.NilBool: document its purpose and move it to its own package 'nodefault' 2021-02-20 17:04:57 +01:00
Christian Schwarz
596a39c0f5 bump golangci-lint to 1.35.2 and fix resulting lint errors
GO111MODULE=on golangci-lint run ./...
endpoint/endpoint.go:487:9: S1039: unnecessary use of fmt.Sprintf (gosimple)
                panic(fmt.Sprintf("ClientIdentityKey context value must be set"))
                      ^
platformtest/platformtest_ops.go:259:41: S1039: unnecessary use of fmt.Sprintf (gosimple)
                                return nil, &LineError{scan.Text(), fmt.Sprintf("unexpected tokens at EOL")}
                                                                    ^
platformtest/platformtest_ops.go:266:41: S1039: unnecessary use of fmt.Sprintf (gosimple)
                                return nil, &LineError{scan.Text(), fmt.Sprintf("unexpected tokens at EOL")}
                                                                    ^
util/optionaldeadline/optionaldeadline_test.go:97:50: SA1029: should not use built-in type string as key for value; define your own type to avoid collisions (staticcheck)
        pctx := context.WithValue(context.Background(), "key", "value")
                                                        ^
rpc/rpc_debug.go:8:5: var `debugEnabled` is unused (unused)
rpc/dataconn/dataconn_debug.go:8:5: var `debugEnabled` is unused (unused)
rpc/dataconn/frameconn/frameconn.go:42:9: S1039: unnecessary use of fmt.Sprintf (gosimple)
                panic(fmt.Sprintf("frame header is 8 bytes long"))
                      ^
platformtest/platformtest_ops.go:322:40: S1039: unnecessary use of fmt.Sprintf (gosimple)
                        return nil, &LineError{scan.Text(), fmt.Sprintf("unexpected tokens at EOL")}
2021-01-25 00:16:01 +01:00
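
The two lint classes above have mechanical fixes. A minimal Go sketch, assuming illustrative names (the repo's actual context-key type and constants may differ): S1039 drops `fmt.Sprintf` around constant strings, and SA1029 replaces a built-in `string` context key with an unexported key type.

```go
package lintfixsketch

import "context"

// S1039: a constant message needs no fmt.Sprintf.
//   before: panic(fmt.Sprintf("frame header is 8 bytes long"))
//   after:
func checkFrameHeader(ok bool) {
	if !ok {
		panic("frame header is 8 bytes long")
	}
}

// SA1029: define your own context-key type instead of a built-in string,
// so keys from different packages cannot collide.
type ctxKey int // hypothetical; the repo's packages use their own key types

const ctxKeyClientIdentity ctxKey = iota

func withClientIdentity(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, ctxKeyClientIdentity, id)
}

func clientIdentity(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(ctxKeyClientIdentity).(string)
	return id, ok
}
```
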
Christian Schwarz
c3d87289bb [#388] util/semaphore: fix TestSemaphore test
fixes #388
2021-01-24 22:28:47 +01:00
InsanePrawn
180c3d9ae1 Reformat all files with make format.
Signed-off-by: InsanePrawn <insane.prawny@gmail.com>
2020-08-31 23:57:45 +02:00
Christian Schwarz
4b1b7a8561 envconst: queryable report of resolved variables + integration into zrepl status --raw
fixes #299
refs #186
2020-06-14 15:26:05 +02:00
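
As a rough illustration of what a "queryable report of resolved variables" can look like, a hedged Go sketch follows; the names (`Int64`, `Report`) and the panic-on-parse-error behavior are assumptions, not the actual envconst API.

```go
// Sketch of an envconst-style helper with a queryable report (assumed names,
// not the real package API): resolve once, fall back to a default, and record
// what was resolved so a status command can dump it.
package envconstsketch

import (
	"os"
	"strconv"
	"sync"
)

var (
	mu       sync.Mutex
	resolved = map[string]string{} // report of resolved variables
)

// Int64 reads name from the environment, falling back to def.
func Int64(name string, def int64) int64 {
	mu.Lock()
	defer mu.Unlock()
	raw, ok := os.LookupEnv(name)
	if !ok {
		resolved[name] = strconv.FormatInt(def, 10) + " (default)"
		return def
	}
	v, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		panic(err) // assumption: fail hard on unparsable overrides
	}
	resolved[name] = raw
	return v
}

// Report returns a copy of everything resolved so far, e.g. for `status --raw`.
func Report() map[string]string {
	mu.Lock()
	defer mu.Unlock()
	out := make(map[string]string, len(resolved))
	for k, v := range resolved {
		out[k] = v
	}
	return out
}
```
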
Christian Schwarz
10a14a8c50 [#307] add package trace, integrate it with logging, and adopt it throughout zrepl
package trace:

- introduce the concept of tasks and spans, tracked as linked list within ctx
    - see package-level docs for an overview of the concepts
    - **main feature 1**: unique stack of task and span IDs
        - makes it easy to follow a series of log entries in concurrent code
    - **main feature 2**: ability to produce a chrome://tracing-compatible trace file
        - either via an env variable or a `zrepl pprof` subcommand
        - this is not a CPU profile, we already have go pprof for that
        - but it is very useful to visually inspect where the
          replication / snapshotter / pruner spends its time
          ( fixes #307 )

usage in package daemon/logging:

- goal: every log entry should have a trace field with the ID stack from package trace

- make `logging.GetLogger(ctx, Subsys)` the authoritative `logger.Logger` factory function
    - the context carries a linked list of injected fields which
      `logging.GetLogger` adds to the logger it returns
    - `logging.GetLogger` also uses package `trace` to get the
      task-and-span-stack and injects it into the returned logger's fields
2020-05-19 11:30:02 +02:00
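
A minimal sketch of the task/span idea, under the assumption that spans form a linked list in the context and that a logger factory renders the ID stack as a `trace` field; all identifiers here are illustrative, not the actual package trace API.

```go
package tracesketch

import (
	"context"
	"fmt"
	"strings"
	"sync/atomic"
)

type node struct {
	parent *node
	id     uint64
	name   string
}

type ctxKey struct{}

var idCounter uint64

// WithSpan pushes a span onto the per-context stack; the returned func marks
// the end of the span (a no-op in this sketch).
func WithSpan(ctx context.Context, name string) (context.Context, func()) {
	parent, _ := ctx.Value(ctxKey{}).(*node)
	n := &node{parent: parent, id: atomic.AddUint64(&idCounter, 1), name: name}
	return context.WithValue(ctx, ctxKey{}, n), func() {}
}

// Stack renders the span-ID stack, e.g. "1:repl/2:plan/3:step",
// which a GetLogger-style factory could inject as a "trace" log field.
func Stack(ctx context.Context) string {
	var parts []string
	n, _ := ctx.Value(ctxKey{}).(*node)
	for ; n != nil; n = n.parent {
		parts = append([]string{fmt.Sprintf("%d:%s", n.id, n.name)}, parts...)
	}
	return strings.Join(parts, "/")
}
```
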
Christian Schwarz
f772b3d39f [#277] endpoint: Receiver.Receive: error message explaining problem with placeholders and encryption 2020-05-18 19:46:24 +02:00
Christian Schwarz
0e5c77d2be [#277] rpc + zfs: drop zfs.StreamCopier, use io.ReadCloser instead 2020-05-18 19:46:24 +02:00
Christian Schwarz
0280727985 [#277] replication/driver: enforce ordering during initial replication in order to support encrypted send
fixes #277
2020-05-18 19:46:24 +02:00
Christian Schwarz
e0b5bd75f8 endpoint: refactor, fix stale holds on initial replication failure, zfs-abstractions subcmd, more efficient ZFS queries
The motivation for this refactoring is based on two independent issues:

- @JMoVS found that the changes merged as part of #259 slowed his OS X
  based installation down significantly.
  Analysis of the zfs command logging introduced in #296 showed that
  `zfs holds` took most of the execution time, and they pointed out
  that not all of those `zfs holds` invocations were actually necessary.
  I.e.: zrepl was inefficient about retrieving information from ZFS.

- @InsanePrawn found that failures on initial replication would lead
  to step holds accumulating on the sending side, i.e. they would never
  be cleaned up in the HintMostRecentCommonAncestor RPC handler.
  That was because we only sent that RPC if there was a most recent
  common ancestor detected during replication planning.
  @InsanePrawn prototyped an implementation of a `zrepl zfs-abstractions release`
  command to mitigate the situation.
  As part of that development work and back-and-forth with @problame,
  it became evident that the abstractions that #259 built on top of
  zfs in package endpoint (step holds, replication cursor,
  last-received-hold) were not well-represented for re-use in the
  `zrepl zfs-abstractions release` subcommand prototype.

This commit refactors package endpoint to address both of these issues:

- endpoint abstractions now share an interface `Abstraction` that, among
  other things, provides a uniform `Destroy()` method.
  However, that method should not be called directly; instead,
  the package-level `BatchDestroy` function should be used in order
  to allow for a migration to zfs channel programs in the future.

- endpoint now has a query facility (`ListAbstractions`) which is
  used to find on-disk
    - step holds and bookmarks
    - replication cursors (v1, v2)
    - last-received-holds
  By describing the query in a struct, we can centralize the retrieval
  of information via the ZFS CLI and only have to be clever once.
  We are "clever" in the following ways:
  - When asking for hold-based abstractions, we only run `zfs holds` on
    snapshots that have `userrefs` > 0
    - To support this functionality, add field `UserRefs` to zfs.FilesystemVersion
      and retrieve it anywhere we retrieve zfs.FilesystemVersion from ZFS.
  - When asking only for bookmark-based abstractions, we only run
    `zfs list -t bookmark`, not with snapshots.
  - Currently unused (except for CLI) per-filesystem concurrent lookup
  - Option to only include abstractions with CreateTXG in a specified range

- refactor `endpoint`'s various ZFS info retrieval methods to use
  `ListAbstractions`

- rename the `zrepl holds list` command to `zrepl zfs-abstractions list`
- make `zrepl zfs-abstractions list` consume endpoint.ListAbstractions

- Add a `ListStale` method which, given a query template,
  lists stale holds and bookmarks.
  - it accounts for the replication cursor's different modes
- the new `zrepl zfs-abstractions release-{all,stale}` commands can be used
  to remove abstractions of package endpoint

- Adjust HintMostRecentCommonAncestor RPC for stale-holds cleanup:
    - send it also if no most recent common ancestor exists between sender and receiver
    - have the sender clean up its abstractions when it receives the RPC
      with no most recent common ancestor, using `ListStale`
    - Due to changed semantics, bump the protocol version.

- Adjust HintMostRecentCommonAncestor RPC for performance problems
  encountered by @JMoVS
    - by default, per (job,fs)-combination, only consider cleaning
      step holds in the createtxg range
      `[last replication cursor,conservatively-estimated-receive-side-version)`
    - this behavior ensures resumability at a cost proportional to the
      time that replication was down
    - however, as explained in a comment, we might leak holds if
      the zrepl daemon stops running
    - that trade-off is acceptable because in the presumably rare case
      that this might happen, the user has two tools at hand:
    - Tool 1: run `zrepl zfs-abstractions release-stale`
    - Tool 2: use env var `ZREPL_ENDPOINT_SENDER_HINT_MOST_RECENT_STEP_HOLD_CLEANUP_MODE`
      to adjust the lower bound of the createtxg range (search for it in the code).
      The env var can also be used to disable hold-cleanup on the
      send-side entirely.

supersedes closes #293
supersedes closes #282
fixes #280
fixes #278

Additionally, we fixed a couple of bugs:

- zfs: fix half-nil error reporting of dataset-does-not-exist for ZFSListChan and ZFSBookmark

- endpoint: Sender's `HintMostRecentCommonAncestor` handler would not
  check whether access to the specified filesystem was allowed.
2020-04-18 12:26:03 +02:00
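
To make the shape of the refactor concrete, a hedged sketch of the shared interface and the query struct described above; type, field, and constant names are illustrative, and the real `endpoint` package differs in detail.

```go
package endpointsketch

import (
	"context"
	"errors"
)

type AbstractionType string

const (
	AbstractionStepHold          AbstractionType = "step-hold"
	AbstractionReplicationCursor AbstractionType = "replication-cursor-v2"
	AbstractionLastReceivedHold  AbstractionType = "last-received-hold"
)

type Abstraction interface {
	Type() AbstractionType
	Filesystem() string
	// Destroy removes the hold or bookmark; callers should prefer
	// BatchDestroy so a future implementation can use ZFS channel programs.
	Destroy(ctx context.Context) error
}

type CreateTXGRange struct {
	Since, Until uint64 // 0 means unbounded
}

// ListAbstractionsQuery centralizes what would otherwise be many ad-hoc
// `zfs holds` / `zfs list -t bookmark` invocations.
type ListAbstractionsQuery struct {
	FS          string
	What        map[AbstractionType]bool
	CreateTXG   CreateTXGRange
	Concurrency int
}

func ListAbstractions(ctx context.Context, q ListAbstractionsQuery) ([]Abstraction, error) {
	// Centralized ZFS CLI access would live here, with the optimizations from
	// the commit message: run `zfs holds` only on snapshots with userrefs > 0,
	// and run `zfs list -t bookmark` only if a bookmark-based type is requested.
	return nil, errors.New("sketch only")
}

func BatchDestroy(ctx context.Context, abstractions []Abstraction) error {
	for _, a := range abstractions {
		if err := a.Destroy(ctx); err != nil {
			return err
		}
	}
	return nil
}
```
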
InsanePrawn
44bd354eae Spellcheck all files
Signed-off-by: InsanePrawn <insane.prawny@gmail.com>
2020-02-24 16:06:09 +01:00
Christian Schwarz
58c08c855f new features: {resumable,encrypted,hold-protected} send-recv, last-received-hold
- **Resumable Send & Recv Support**
  No knobs required, automatically used where supported.
- **Hold-Protected Send & Recv**
  Automatic ZFS holds to ensure that we can always resume a replication step.
- **Encrypted Send & Recv Support** for OpenZFS native encryption.
  Configurable at the job level, i.e., for all filesystems a job is responsible for.
- **Receive-side hold on last received dataset**
  The counterpart to the replication cursor bookmark on the send-side.
  Ensures that incremental replication will always be possible between a sender and receiver.

Design Doc
----------

The `replication/design.md` doc describes how we use ZFS holds and bookmarks to ensure that a single replication step is always resumable.

The replication algorithm described in the design doc introduces the notion of job IDs (please read the details in the design doc).
We reuse job names as job IDs and use the `JobID` type to ensure that a job name can be embedded into hold tags, bookmark names, etc.
This might BREAK CONFIG on upgrade.

Protocol Version Bump
---------------------

This commit makes backwards-incompatible changes to the replication/pdu protobufs.
Thus, bump the version number used in the protocol handshake.

Replication Cursor Format Change
--------------------------------

The new replication cursor bookmark format is: `#zrepl_CURSOR_G_${this.GUID}_J_${jobid}`
Including the GUID enables transaction-safe moving-forward of the cursor.
Including the job id enables that multiple sending jobs can send the same filesystem without interfering.
The `zrepl migrate replication-cursor:v1-v2` subcommand can be used to safely destroy old-format cursors once zrepl has created new-format cursors.

Changes in This Commit
----------------------

- package zfs
  - infrastructure for holds
  - infrastructure for resume token decoding
  - implement a variant of OpenZFS's `entity_namecheck` and use it for validation in new code
  - ZFSSendArgs to specify a ZFS send operation
    - validation code protects against malicious resume tokens by checking that the token encodes the same send parameters that the send-side would use if no resume token were available (i.e. same filesystem, `fromguid`, `toguid`)
  - RecvOptions support for `recv -s` flag
  - convert a bunch of ZFS operations to be idempotent
    - achieved through more differentiated error message scraping / additional pre-/post-checks

- package replication/pdu
  - add field for encryption to send request messages
  - add fields for resume handling to send & recv request messages
  - receive requests now contain `FilesystemVersion To` in addition to the filesystem into which the stream should be `recv`d
    - can use `zfs recv $root_fs/$client_id/path/to/dataset@${To.Name}`, which enables additional validation after recv (i.e. whether `To.Guid` matched what we received in the stream)
    - used to set `last-received-hold`
- package replication/logic
  - introduce `PlannerPolicy` struct, currently only used to configure whether encrypted sends should be requested from the sender
  - integrate encryption and resume token support into `Step` struct

- package endpoint
  - move the concepts that endpoint builds on top of ZFS to a single file `endpoint/endpoint_zfs.go`
    - step-holds + step-bookmarks
    - last-received-hold
    - new replication cursor + old replication cursor compat code
  - adjust `endpoint/endpoint.go` handlers for
    - encryption
    - resumability
    - new replication cursor
    - last-received-hold

- client subcommand `zrepl holds list`: list all holds and hold-like bookmarks that zrepl thinks belong to it
- client subcommand `zrepl migrate replication-cursor:v1-v2`
2020-02-14 22:00:13 +01:00
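
For the cursor format above, a small hedged sketch of composing the bookmark name; the helper names and the exact GUID encoding (`%#x` here) are assumptions and may differ from zrepl's actual implementation.

```go
package cursorsketch

import "fmt"

// ReplicationCursorBookmarkName returns the short v2 cursor bookmark name,
// e.g. "zrepl_CURSOR_G_0x1234abcd_J_prod-to-backup".
// Embedding the GUID makes moving the cursor forward transaction-safe;
// embedding the job ID lets multiple jobs send the same filesystem.
func ReplicationCursorBookmarkName(guid uint64, jobID string) string {
	return fmt.Sprintf("zrepl_CURSOR_G_%#x_J_%s", guid, jobID)
}

// FullBookmark prefixes the filesystem, e.g. "pool/data#zrepl_CURSOR_...".
func FullBookmark(fs string, guid uint64, jobID string) string {
	return fmt.Sprintf("%s#%s", fs, ReplicationCursorBookmarkName(guid, jobID))
}
```
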
Christian Schwarz
99ab16d7be zfs: send: improve error reporting by capturing stderr 2020-02-14 21:42:03 +01:00
Juergen Hoetzel
d35e2400b2 transport/{TCP,TLS}: optional IP_FREEBIND / IP_BINDANY bind socketops
Allows binding to an address even if it is not actually (yet or ever)
configured. Fixes #238

Rationale:
https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/#whatdoesthismeanformeadeveloper
2020-01-04 17:21:48 +01:00
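
A Linux-oriented sketch of how such a bind can be wired up with `net.ListenConfig.Control`; this is illustrative rather than the exact transport code, assumes `golang.org/x/sys/unix`, and on FreeBSD `IP_BINDANY` would be used instead of `IP_FREEBIND`.

```go
package freebindsketch

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func listenFreeBind(ctx context.Context, addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				// allow binding to an address that is not (yet) configured
				sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP, unix.IP_FREEBIND, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(ctx, "tcp", addr)
}
```
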
Ross Williams
729c83ee72 pre- and post-snapshot hooks
* stack-based execution model, documented in documentation
* circbuf for capturing hook output
* built-in hooks for postgres and mysql
* refactor docs, too much info on the jobs page, too difficult
  to discover snapshotting & hooks

Co-authored-by: Ross Williams <ross@ross-williams.net>
Co-authored-by: Christian Schwarz <me@cschwarz.com>

fixes #74
2019-09-27 21:25:59 +02:00
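
The "circbuf for capturing hook output" idea can be approximated with a bounded writer; the sketch below is a stand-in for the actual hook runner, and `tailBuffer`/`runHook` are made-up names.

```go
package hooksketch

import (
	"context"
	"os/exec"
)

// tailBuffer keeps only the last max bytes written to it.
type tailBuffer struct {
	max  int
	data []byte
}

func (b *tailBuffer) Write(p []byte) (int, error) {
	b.data = append(b.data, p...)
	if len(b.data) > b.max {
		b.data = b.data[len(b.data)-b.max:] // keep only the tail
	}
	return len(p), nil
}

// runHook runs a pre-/post-snapshot hook and returns the last maxCapture
// bytes of its combined output, so logs stay bounded even for chatty hooks.
func runHook(ctx context.Context, maxCapture int, name string, args ...string) ([]byte, error) {
	buf := &tailBuffer{max: maxCapture}
	cmd := exec.CommandContext(ctx, name, args...)
	cmd.Stdout = buf
	cmd.Stderr = buf
	err := cmd.Run()
	return buf.data, err
}
```
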
Christian Schwarz
07956c2299 zfs,endpoint: use zfs destroy batch syntax if available
refs #72
2019-09-14 13:43:46 +02:00
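
The batch syntax referenced here lets one `zfs destroy` invocation cover several snapshots of the same filesystem; a small sketch of assembling that argument (the helper name is hypothetical).

```go
package zfssketch

import (
	"fmt"
	"strings"
)

// batchDestroyArg builds "fs@snap1,snap2,..." for a single `zfs destroy`
// invocation, e.g. batchDestroyArg("pool/data", []string{"a", "b"}) == "pool/data@a,b".
func batchDestroyArg(fs string, snaps []string) (string, error) {
	if len(snaps) == 0 {
		return "", fmt.Errorf("no snapshots given")
	}
	return fmt.Sprintf("%s@%s", fs, strings.Join(snaps, ",")), nil
}
```
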
Christian Schwarz
921b34235e daemon: env var for autostarting pprof endpoint 2019-09-07 19:50:57 +02:00
Christian Schwarz
000d8bba66 hotfix: limit concurrency of zfs send & recv commands
ATM, the replication logic sends all dry-run requests in parallel,
which might overwhelm the ZFS pool on the sending side.
Since we use rpc/dataconn for dry sends, this also opens one TCP
connection per dry-run request.

Use a semaphore to limit the degree of concurrency where we know it is a
problem ATM.
As indicated by the comments, the cleaner solution would involve some
kind of 'resource exhaustion' error code.

refs #161
refs #164
2019-03-28 22:17:12 +01:00
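
A channel-based sketch of the semaphore idea, with an illustrative limit and wrapper names; the commit itself uses the repo's own `util/semaphore` package.

```go
package concurrencysketch

import "context"

type semaphore chan struct{}

func newSemaphore(max int) semaphore { return make(chan struct{}, max) }

func (s semaphore) Acquire(ctx context.Context) error {
	select {
	case s <- struct{}{}:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (s semaphore) Release() { <-s }

var zfsSendSem = newSemaphore(10) // cap concurrent zfs sends (incl. dry-runs)

func doSend(ctx context.Context) error {
	if err := zfsSendSem.Acquire(ctx); err != nil {
		return err
	}
	defer zfsSendSem.Release()
	// ... spawn `zfs send -n ...` / `zfs send ...` here ...
	return nil
}
```
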
Christian Schwarz
5b97953bfb run golangci-lint and apply suggested fixes 2019-03-27 13:12:26 +01:00
Christian Schwarz
afed762774 format source tree using goimports 2019-03-22 19:41:12 +01:00
Christian Schwarz
5aefc47f71 daemon: remove last traces of watchdog mechanism 2019-03-19 18:15:34 +01:00
Christian Schwarz
c87759affe replication/driver: automatic retries on connectivity-related errors 2019-03-13 15:00:40 +01:00
Christian Schwarz
07b43bffa4 replication: refactor driving logic (no more explicit state machine) 2019-03-13 15:00:40 +01:00
Christian Schwarz
0230c6321f rpc/dataconn: microbenchmark 2019-03-13 13:57:21 +01:00
Christian Schwarz
796c5ad42d rpc rewrite: control RPCs using gRPC + separate RPC for data transfer
transport/ssh: update go-netssh to new version
    => supports CloseWrite and Deadlines
    => build: require Go 1.11 (netssh requires it)
2019-03-13 13:53:48 +01:00
Christian Schwarz
d281fb00e3 socketpair: directly export *net.UnixConn (and add test for that behavior) 2019-03-13 11:36:34 +01:00
Christian Schwarz
25c974f0b5 envconst: support for int64 2019-03-13 00:07:33 +01:00
Christian Schwarz
7a75a4d384 util/iocommand: timeout kill on close + other hardening 2018-12-11 21:06:54 +01:00
Christian Schwarz
190c7270d9 daemon/active + watchdog: simplify control flow using explicit ActiveSideState 2018-10-21 12:53:34 +02:00
Christian Schwarz
69bfcb7bed daemon/active: implement watchdog to handle stuck replication / pruners
ActiveSide.do() can only run sequentially, i.e. we cannot run
replication and pruning in parallel. Why?

* go-streamrpc only allows one active request at a time
(this is bad design and should be fixed at some point)
* replication and pruning are implemented independently, but work on the
same resources (snapshots)

A: pruning might destroy a snapshot that is planned to be replicated
B: replication might replicate snapshots that should be pruned

We do not have any resource management / locking for A and B, but we
have a use case where users don't want their machine to fill up with
snapshots if replication does not work.
That means we _have_ to run the pruners.

A further complication is that we cannot just cancel the replication
context after a timeout and move on to the pruner: it could be initial
replication and we don't know how long it will take.
(And we don't have resumable send & recv yet).

With the previous commits, we can implement the watchdog using context
cancellation.
Note that the 'MadeProgress()' calls can only be placed right before
non-error state transitions. Otherwise, we could end up in a live-lock.
2018-10-19 17:23:00 +02:00
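
A hedged sketch of such a watchdog built on context cancellation: the worker bumps a progress counter right before non-error state transitions, and the watcher cancels the context if the counter does not move within the timeout. Names and structure are illustrative, not the daemon's actual code.

```go
package watchdogsketch

import (
	"context"
	"sync/atomic"
	"time"
)

type Progress struct{ ticks int64 }

// MadeProgress is called by the worker right before a non-error state transition.
func (p *Progress) MadeProgress() { atomic.AddInt64(&p.ticks, 1) }

// watch cancels ctx if p does not advance within timeout.
func watch(ctx context.Context, cancel context.CancelFunc, p *Progress, timeout time.Duration) {
	last := atomic.LoadInt64(&p.ticks)
	t := time.NewTicker(timeout)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			cur := atomic.LoadInt64(&p.ticks)
			if cur == last {
				cancel() // stuck: no progress during the last interval
				return
			}
			last = cur
		}
	}
}
```

Usage would be along the lines of `go watch(ctx, cancel, &progress, 10*time.Minute)` around the replication/pruning invocation, with the worker calling `progress.MadeProgress()` as described above.
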
Christian Schwarz
814fec60f0 endpoint + zfs: context cancellation of util.IOCommand instances (send & recv for now) 2018-10-19 16:12:21 +02:00
Christian Schwarz
a97684923a refactor: socketpair into utils package (useful elsewhere) 2018-10-11 21:17:43 +02:00
Christian Schwarz
976c1f3929 util.IOCommand: add stderr logging for unexpected crashes in calls to ProcessState.Sys()
Crashes observed on a FreeBSD 11.2 system

2018-09-27T05:08:39+02:00 [INFO][csnas]: start replication invocation="62"
2018-09-27T05:08:39+02:00 [INFO][csnas][repl]: start planning invocation="62"
2018-09-27T05:08:58+02:00 [INFO][csnas][repl]: start working invocation="62"
2018-09-27T05:09:57+02:00 [INFO][csnas]: start pruning sender invocation="62"
2018-09-27T05:10:11+02:00 [INFO][csnas]: start pruning receiver invocation="62"
2018-09-27T05:10:32+02:00 [INFO][csnas]: wait for wakeups
2018-09-27T06:08:39+02:00 [INFO][csnas]: start replication invocation="63"
2018-09-27T06:08:39+02:00 [INFO][csnas][repl]: start planning invocation="63"
2018-09-27T06:08:44+02:00 [INFO][csnas][repl]: start working invocation="63"
2018-09-27T06:08:49+02:00 [ERRO][csnas][repl]: receive request failed (might also be error on sender) invocation="63" filesystem="<REDACTED>" err="concurrent use of RPC connection" step="<REDACTED>(@zrepl_20180927_030838_000 => @zrepl_20180927_040835_000)" errType="*errors.errorString"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x7d484b]

goroutine 3938545 [running]:
os.(*ProcessState).os.sys(...)
        /usr/lib/golang/src/os/exec_posix.go:78
os.(*ProcessState).Sys(...)
        /usr/lib/golang/src/os/exec.go:157
github.com/zrepl/zrepl/util.(*IOCommand).doWait(0xc4201b2d80, 0xc420070060, 0xc420070060)
        /go/github.com/zrepl/zrepl/util/iocommand.go:91 +0x4b
github.com/zrepl/zrepl/util.(*IOCommand).Read(0xc4201b2d80, 0xc420790000, 0x8000, 0x8000, 0x800c76d90, 0x0, 0xc420067c10)
        /go/github.com/zrepl/zrepl/util/iocommand.go:82 +0xe4
github.com/zrepl/zrepl/util.(*ByteCounterReader).Read(0xc4202dc580, 0xc420790000, 0x8000, 0x8000, 0x8c6900, 0x7cb201, 0xc420790000)
        /go/github.com/zrepl/zrepl/util/io.go:118 +0x51
github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc.(*chunkBuffer).readChunk(0xc42057e3c0, 0x800d1bbf0, 0xc4202dc580, 0xc420790000, 0x8000, 0x8000)
        /go/github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc/stream.go:58 +0x5e
github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc.writeStream(0xa04620, 0xc4204a9c20, 0x9fe340, 0xc4200d6380, 0x800d1bbf0, 0xc4202dc580, 0x8000, 0xc42000e000, 0x900420)
        /go/github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc/stream.go:101 +0x1ce
github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc.(*Conn).send(0xc4200d6380, 0xa04620, 0xc4204a9c20, 0xc42057e2c0, 0xc42013d570, 0x800d1bbf0, 0xc4202dc580, 0x0, 0x0)
        /go/github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc/main.go:374 +0x557
github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc.(*Client).RequestReply.func1(0x999741, 0x7, 0xc4200d6380, 0xa04620, 0xc4204a9c20, 0xc42013d570, 0xa00aa0, 0xc4202dc580, 0xc420516480)
        /go/github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc/client.go:169 +0x148
created by github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc.(*Client).RequestReply
        /go/github.com/zrepl/zrepl/vendor/github.com/problame/go-streamrpc/client.go:167 +0x227
2018-09-27 12:06:59 +02:00
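
The stack trace shows `ProcessState` being dereferenced after a failed wait; the commit adds stderr logging around that path, and the sketch below shows the general nil-guard-plus-logging pattern (illustrative, not the repo's exact fix).

```go
package iocommandsketch

import (
	"log"
	"os/exec"
	"syscall"
)

// waitExitStatus waits for cmd and returns its exit status, guarding against
// a nil ProcessState instead of dereferencing it blindly.
func waitExitStatus(cmd *exec.Cmd) int {
	err := cmd.Wait()
	ps := cmd.ProcessState
	if ps == nil {
		// e.g. Wait failed before the child was reaped; log and bail out
		log.Printf("wait failed without process state: %v", err)
		return -1
	}
	if ws, ok := ps.Sys().(syscall.WaitStatus); ok {
		return ws.ExitStatus()
	}
	return -1
}
```
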
Anton Schirg
6ca11a7391 byte counter for status 2018-08-30 12:54:30 +02:00
Anton Schirg
add1b69809 move retentiongrid to own package 2018-08-26 22:06:47 +02:00
Christian Schwarz
e30ae972f4 gofmt 2018-08-25 21:30:25 +02:00
Christian Schwarz
a0b320bfeb streamrpc now requires net.Conn => use it instead of rwc everywhere 2018-08-08 13:09:51 +02:00
Christian Schwarz
1826535e6f WIP 2018-07-15 17:36:53 +02:00
Christian Schwarz
8cca0a8547 Initial working version
Summary:
* Logging is still bad
* test output in a lot of places
* FIXMEs everywhere

Test Plan: None, just review

Differential Revision: https://phabricator.cschwarz.com/D2
2018-06-24 10:44:00 +02:00
Christian Schwarz
b69089a527 Puller: refactor + use Task API
* drop rx byte count functionality
* will be re-added to Task as necessary

refs #10
2017-12-27 14:39:47 +01:00
Christian Schwarz
bfcba7b281 cmd: logging using logrus 2017-09-22 17:01:54 +02:00
Christian Schwarz
93a58a36bf util: add PrefixLogger 2017-09-11 15:37:45 +02:00
Christian Schwarz
ca1a482e9e sshbytestream & IOCommand: fix handling of dead child process
SSH catches SIGTERM, tears down its connection, then exits with
platform-specific exit code.
2017-08-09 21:01:06 +02:00
Christian Schwarz
8eb4a2ba44 Rudimentary progress reporting on send / recv side. 2017-08-06 16:21:54 +02:00
Christian Schwarz
e0d39ddf11 Implement RetentionGrid structure. 2017-07-01 23:19:31 +02:00
Christian Schwarz
5f84d30972 util/ReadWriteCloserLogger: handle unset readlog | writelog 2017-05-20 19:39:32 +02:00
Christian Schwarz
04206ebd8b util.IOCommand: Close() gracefully via SIGTERM 2017-05-14 14:11:19 +02:00
Christian Schwarz
ee570bb060 refactor: consolidate ForkReader-like implementations to IOCommand 2017-05-14 12:27:15 +02:00
Christian Schwarz
6f84bf665d cmd: support logging reads & writes from sshbytestream to a file. 2017-05-13 15:34:28 +02:00
Christian Schwarz
74719ad846 rpc: chunk JSON parts of communication + refactoring
JSONDecoder was buffering more of the connection's data than just the JSON.
=> Unchunker didn't bother and just started unchunking.

While chaining JSONDecoder.Buffered() and the connection using
ChainedReader works, it's still not a clean architecture.

=> Every JSON message is now wrapped in a chunked stream
   (chunked and unchunked)
   => no special-cases
=> Keep ChainedReader, might be useful later on...
2017-05-13 15:33:46 +02:00
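
A sketch of the underlying pitfall (illustrative): `json.Decoder` reads ahead, so bytes that belong to the next protocol phase may already sit in its buffer. `Decoder.Buffered()` plus `io.MultiReader` recovers them, which is the chaining approach the message describes before settling on chunk-wrapping every JSON part.

```go
package rpcjsonsketch

import (
	"encoding/json"
	"io"
)

type header struct {
	Type string `json:"type"`
}

// readHeader decodes one JSON header from conn and returns a reader that
// yields the remaining stream, including whatever the decoder over-read.
func readHeader(conn io.Reader) (*header, io.Reader, error) {
	dec := json.NewDecoder(conn)
	var h header
	if err := dec.Decode(&h); err != nil {
		return nil, nil, err
	}
	rest := io.MultiReader(dec.Buffered(), conn)
	return &h, rest, nil
}
```
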