Commit Graph

964 Commits

Author SHA1 Message Date
e87ce3f7cf cmd: no context + logging for config parsing 2017-09-22 14:13:30 +02:00
af2ff08940 docs: document UNIX sockets & job debugging 2017-09-18 01:01:51 +02:00
458c28e1d0 cmd: UNIX sockets: try to autoremove stale sockets 2017-09-18 00:16:28 +02:00
eaed271a00 cmd: config: remove annoying parser logs 2017-09-18 00:16:28 +02:00
3eaba92025 cmd: introduce control socket & subcommand
Move pprof debugging there.
2017-09-18 00:16:28 +02:00
aea62a9d85 cmd: extract listening on a UNIX socket in a private directory into a helper func 2017-09-17 23:41:51 +02:00
1a62d635a6 cmd: test: would always run testCmdGlobalInit 2017-09-17 23:40:40 +02:00
9cd83399d3 cmd: remove global state in main.go
* refactoring
* Now supporting default config locations
2017-09-17 18:32:00 +02:00
4ac7e78e2b cmd: config: was using wrong reference to config 2017-09-17 17:45:02 +02:00
71650819d3 cmd: remove stderrFile option 2017-09-17 17:25:24 +02:00
3fd9726719 docs: keep up with changed reality.
ugly hack with relative URLs because relref is apparently broken when
linking to section pages (_index.md) except for a few cases...
2017-09-17 16:18:39 +02:00
6a05e101cf WIP daemon:
Implement
* pruning on source side
* local job
* test subcommand for doing a dry-run of a prune policy

* use a non-blocking callback from autosnap to trigger the depending
jobs -> avoids races, looks saner in the debug log
2017-09-16 21:13:19 +02:00
b168274048 fixup dmf tests 2017-09-16 20:32:01 +02:00
cd4e09ebb3 cmd: handler: privatise & rename variables 2017-09-16 20:27:08 +02:00
e3ec093d53 cmd: handler: check FilesystemVersionFilter as part of ACL 2017-09-16 20:24:46 +02:00
dc3378e890 cmd: daemon: use closure-local variable when starting job 2017-09-16 20:21:05 +02:00
36b66f6fd7 cmd: mapfilter: support rejecting mappings
breaking config
2017-09-16 19:43:02 +02:00
e70b6f3071 WIP: recurring jobs
Done:

* implement autosnapper that asserts interval between snapshots
* implement pruner

* job pull: pulling + pruning
* job source: autosnapping + serving

TODO

* job source: pruning
* job local: everything
* fatal errors, such as a serve job that cannot bind its socket, must be
more visible
* couldn't things that need a snapshotprefix just use an interface
Prefixer() instead? then we could have prefixsnapshotfilter and not
duplicate it every time...
* either go full context.Context or not at all...? just wait because
community climate around it isn't that great and we only need it for
cancellation? roll our own?
2017-09-15 19:35:19 +02:00
c6ca1efaae cmd: fix typo 2017-09-15 19:34:38 +02:00
0acb2e9ec0 cmd: fix missing error message 2017-09-15 19:32:09 +02:00
5faafbb1b4 cmd: noprune prune policy 2017-09-15 19:32:09 +02:00
e2149de840 cmd: automatic inverting of DatasetMapFilter 2017-09-13 22:55:23 +02:00
1deaa459c8 config: unify job debugging options 2017-09-11 15:45:10 +02:00
93a58a36bf util: add PrefixLogger 2017-09-11 15:37:45 +02:00
d76d3db0b3 handler: remove unused SinkMappingFunc 2017-09-11 13:51:19 +02:00
0a53b2415f signal handling for source job 2017-09-11 13:50:35 +02:00
ce25c01c7e implement stdinserver command + corresponding server
How it works:

`zrepl stdinserver CLIENT_IDENTITY`
 * connects to the socket in $global.serve.stdinserver.sockdir/CLIENT_IDENTITY
 * sends its stdin / stdout file descriptors to the `zrepl daemon` process (see cmsg(3))
 * does nothing more

This enables a setup where `zrepl daemon` is not directly exposed to the
internet but instead all traffic is tunnelled through SSH.
The server with the source job has an authorized_keys file entry for the
public key used by the corresponding pull job:

 command="/mnt/zrepl stdinserver CLIENT_IDENTITY" ssh-ed25519 AAAAC3NzaC1E... zrepl@pullingserver
2017-09-11 13:48:07 +02:00
f3689563b5 config: restructure in 'jobs' and 'global' section 2017-09-11 13:43:18 +02:00
fa4d2098a8 rpc: re-architect connection teardown
Tear down occurs on each protocol level, stack-wise.

Open RWC
Open ML (with NewMessageLayer)
Open RPC (with NewServer/ NewClient)
Close RPC (with Close() from Client())
Close ML
* in Server: after error / receive of Close request
* in Client: after getting ACK for Close request from Server
Close RWC

To achieve this, a DataType for RPC control messages was added, which
has a separate set of endpoints. Not exactly pretty, but works for now.

The necessity of the RST frame remains to be determined. However, it is
nice to have a way to signal the other side something went terribly
wrong in the middle of an operation. Example: if a frameBridgingWriter fails
to read the next chunk of a file it is supposed to send, it can just
send an RST frame to signal that the operation failed... Wouldn't trailers
make sense then?
2017-09-11 10:54:56 +02:00
73c9033583 WIP: Switch to new config format.
Don't use jobrun for the daemon; just call JobDo() once and let the job
organize stuff itself.

Sacrifice all the oneshot commands, they will be reintroduced as
client-calls to the daemon.
2017-09-10 17:53:54 +02:00
8bf3516003 Extend sampleconf, explain what stdinserver serve type does. 2017-09-10 16:01:45 +02:00
0df47b0b0a move config.go to config_old.go 2017-09-09 21:57:20 +02:00
b2f3645bfd alternative prototype for new config format 2017-09-07 11:18:06 +02:00
98fc59dbd5 prototype new config format 2017-09-06 12:46:33 +02:00
64b4901eb0 cmd test: dump config using pretty printer 2017-09-02 12:52:56 +02:00
7e442ea0ea cmd: remove legacy NoMatchError 2017-09-02 12:40:22 +02:00
70258fbada cmd: add 'test' subcommand
configbreak
2017-09-02 12:30:03 +02:00
287e0620ba mapfilter: actually set filterOnly property 2017-09-02 12:22:34 +02:00
8f03e97d47 prototype daemon 2017-09-02 11:08:24 +02:00
4a00bef40b prune: use zfs destroy with sanity check 2017-09-02 11:08:24 +02:00
fee2071514 autosnap: fix pathname 2017-09-02 11:08:24 +02:00
e048386cd5 cmd: add repeat config option to Prune 2017-09-02 11:08:24 +02:00
8a96267ef4 jobrun: use notificationChannel instead of logger for communicating events 2017-09-02 11:08:24 +02:00
f8979d6e83 jobrun/cmd: implement jobrun.Job for config objects 2017-09-02 11:08:24 +02:00
582ae83da3 cmd: remove RunCmd 2017-09-01 19:29:19 +02:00
3070d156a3 jobrun: rename to jobmetadata 2017-09-01 19:29:19 +02:00
6ab05ee1fa reimplement io.ReadWriteCloser based RPC mechanism
The existing ByteStreamRPC requires writing RPC stub + server code
for each RPC endpoint. Does not scale well.

Goal: adding a new RPC call should

- not require writing an RPC stub / handler
- not require modifications to the RPC lib

The wire format is inspired by HTTP2, the API by net/rpc.

Frames are used for framing messages, i.e. a message is made of multiple
frames which are glued together using a frame-bridging reader / writer.
This roughly corresponds to HTTP2 streams, although we're happy with
just one stream at any time and the resulting non-need for flow control,
etc.

Frames are typed using a header. The two most important types are
'Header' and 'Data'.

The RPC protocol is built on top of this:

- Client sends a header         => multiple frames of type 'header'
- Client sends request body     => multiple frames of type 'data'
- Server reads a header         => multiple frames of type 'header'
- Server reads request body     => multiple frames of type 'data'
- Server sends response header  => ...
- Server sends response body    => ...

An RPC header is serialized JSON and always the same structure.
The body is of the type specified in the header.

The RPC server and client use some semi-fancy reflection techniques to
automatically infer the data type of the request/response body based on
the method signature of the server handler; or the client parameters,
respectively.
This boils down to a special case for io.Reader, which is just dumped
into a series of data frames as efficiently as possible.
All other types are (de)serialized using encoding/json.

The RPC layer and Frame Layer log some arbitrary messages that proved
useful during debugging. By default, they log to a non-logger, which
should not have a big impact on performance.

pprof analysis shows the implementation spends its CPU time
        60% waiting for syscalls
        30% in memmove
        10% ...

On an Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz, Linux 4.12, the
implementation achieved ~3.6GiB/s.

Future optimization may include splice(2) / vmsplice(2) on Linux, although
this doesn't fit so well with the heavy use of io.Reader / io.Writer
throughout the codebase.

The existing hackaround for local calls was re-implemented to fit the
new interface of RPCServer and RPCClient.
The 'R'PC method invocation is a bit slower because reflection is
involved in between, but otherwise performance should be no different.

The RPC code currently does not support multipart requests and thus does
not support the equivalent of a POST.

Thus, the switch to the new rpc code had the following fallout:

- Move request objects + constants from rpc package to main app code
- Sacrifice the hacky 'push = pull me' way of doing push
-> need to further extend RPC to support multipart requests or
     something to implement this properly with additional interfaces
-> should be done after replication is abstracted better than separate
     algorithms for doPull() and doPush()
2017-09-01 19:24:53 +02:00
e5b713ce5b docs: pattern syntax: more precise terminology 2017-08-11 18:45:39 +02:00
64baa3915f docs: bump theme 2017-08-11 18:44:53 +02:00
d9064d46f6 docs: improve welcome page 2017-08-09 23:42:50 +02:00