An instance of Task tracks a single thread of activity that is part of a Job.
While the docs already use this terminology of jobs being composed of tasks,
the code did not have an object to represent these semantics.
Now it does:
* A task t is initialized with a root activity, which is its name
* t can t.Enter() and t.Finish() an activity, building
a stack of activities
* t's code can get a logger t.Log() whose logTaskField is set to the
concatenated stack of activities
* t's code can update IO progress it made since leaving idle state
* t's code's log output via t.Log() is captured since leaving idle
state
* FIXME: find a way to bound that buffer
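For illustration, a minimal sketch of such a Task type (field names, locking,
and the logrus-based logger are assumptions, not the actual implementation;
IO progress tracking and log capture are omitted):

package task // hypothetical package, for illustration only

import (
    "strings"
    "sync"

    "github.com/sirupsen/logrus"
)

// logTaskField is the log field carrying the concatenated activity stack.
const logTaskField = "task"

type Task struct {
    mtx        sync.Mutex
    activities []string // activities[0] is the root activity, i.e. the task's name
    logger     *logrus.Entry
}

func NewTask(name string, logger *logrus.Entry) *Task {
    return &Task{activities: []string{name}, logger: logger}
}

// Enter pushes an activity onto the stack.
func (t *Task) Enter(activity string) {
    t.mtx.Lock()
    defer t.mtx.Unlock()
    t.activities = append(t.activities, activity)
}

// Finish pops the most recently entered activity; the root activity stays.
func (t *Task) Finish() {
    t.mtx.Lock()
    defer t.mtx.Unlock()
    if len(t.activities) > 1 {
        t.activities = t.activities[:len(t.activities)-1]
    }
}

// Log returns a logger whose logTaskField is the concatenated activity stack.
func (t *Task) Log() *logrus.Entry {
    t.mtx.Lock()
    defer t.mtx.Unlock()
    return t.logger.WithField(logTaskField, strings.Join(t.activities, "."))
}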
refs #10
refs #48
Version is autodetected on build using git
If it cannot be detected with git, an override must be provided.
For traceability of distros, the distro packagers should override as
well, which is why I added a README entry for package maintainers.
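The pattern this implies, sketched below; the package path, variable name, and
ldflags invocation are placeholders for illustration, not the actual ones:

package version // hypothetical

// zreplVersion is overridden at build time via the linker, e.g.:
//   go build -ldflags "-X <module>/version.zreplVersion=$(git describe --always --dirty)"
// Distro packagers can pass their own value here instead of relying on git.
var zreplVersion = ""

// Version returns the build-time version or a clearly marked fallback.
func Version() string {
    if zreplVersion == "" {
        return "unknown (version not set at build time)"
    }
    return zreplVersion
}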
refs #35
Abandons stderr special-casing:
* interleaving of stdout and stderr looks weird in a shell and when
redirecting both streams to the same file
* better than a separate dedicated outlet because it does not require
additional configuration
fixes #28
BREAK SEMANTICS CONFIG
In contrast to any 'something<' mapping, a '<' mapping cannot map to a
unique target. Thus, '<' mappings are just an append to the target, which
is exactly what we get when trimming the empty prefix ''.
Otherwise, given mapping
{ "<": "storage/backups/app-srv" }
Before (clearly a conflict):
zroot => storage/backups/app-srv
storage => storage/backups/app-srv
After:
zroot => storage/backups/app-srv/zroot
storage => storage/backups/app-srv/storage
However, mapping directly with a subtree wildcard is still possible, just
not with the root wildcard:
{
  "<": "storage/backups/app-srv",
  "zroot/var/db<": "storage/db_replication/app-srv"
}
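The prefix-trimming rule can be sketched like this (names are illustrative,
not the actual implementation):

package main

import (
    "fmt"
    "strings"
)

// mapToTarget applies a wildcard mapping "prefix<" => target by trimming
// the prefix from the source filesystem and appending the remainder to the
// target. The root wildcard "<" has the empty prefix '', so the whole
// source path gets appended.
func mapToTarget(prefix, target, sourceFS string) (string, bool) {
    if !strings.HasPrefix(sourceFS, prefix) {
        return "", false
    }
    suffix := strings.TrimPrefix(sourceFS, prefix)
    suffix = strings.TrimPrefix(suffix, "/")
    if suffix == "" {
        return target, true
    }
    return target + "/" + suffix, true
}

func main() {
    // root wildcard "<" => empty prefix
    fmt.Println(mapToTarget("", "storage/backups/app-srv", "zroot"))
    // => storage/backups/app-srv/zroot true
    fmt.Println(mapToTarget("", "storage/backups/app-srv", "storage"))
    // => storage/backups/app-srv/storage true
}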
fixes #22
While 'filesystems' is also not the right term (since it excludes ZVOLs),
we want to stay consistent with comments & terminology used in the docs.
BREAK CONFIG
fixes#17
We lost the nice context-stack [jobname][taskname][...] at the beginning
of each log line when switching to logrus.
Define some field names that identify these contexts.
Write a human-friendly formatter that presents these field names like
the solution we had before logrus.
Write some other formatters for logfmt and json output along the way.
Limit ourselves to stdout logging for now.
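A rough sketch of what such a human-friendly formatter could look like (the
field names and output layout are assumptions):

package logging // hypothetical

import (
    "bytes"
    "fmt"

    "github.com/sirupsen/logrus"
)

// Context field names; the actual names may differ.
const (
    fieldJob  = "job"
    fieldTask = "task"
)

// HumanFormatter prints "[jobname][taskname] level: message",
// similar to the pre-logrus output.
type HumanFormatter struct{}

func (f *HumanFormatter) Format(e *logrus.Entry) ([]byte, error) {
    var buf bytes.Buffer
    if job, ok := e.Data[fieldJob]; ok {
        fmt.Fprintf(&buf, "[%v]", job)
    }
    if task, ok := e.Data[fieldTask]; ok {
        fmt.Fprintf(&buf, "[%v]", task)
    }
    fmt.Fprintf(&buf, " %s: %s\n", e.Level, e.Message)
    return buf.Bytes(), nil
}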
Implement:
* pruning on source side
* local job
* test subcommand for doing a dry-run of a prune policy
* use a non-blocking callback from autosnap to trigger the depending
jobs -> avoids races, looks saner in the debug log
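One way to implement such a non-blocking trigger is a buffered channel with a
select/default send; a minimal sketch (names are illustrative):

package main

import (
    "fmt"
    "time"
)

// trigger has capacity 1: a pending trigger is remembered, additional
// triggers are coalesced, and the send never blocks the autosnap goroutine.
var trigger = make(chan struct{}, 1)

// notify is called by the autosnapper after it took snapshots.
func notify() {
    select {
    case trigger <- struct{}{}:
    default: // a trigger is already pending; don't block
    }
}

func main() {
    go func() {
        for range trigger {
            fmt.Println("running depending job (replication / pruning)")
        }
    }()
    notify()
    notify() // coalesced if the first trigger has not been consumed yet
    time.Sleep(100 * time.Millisecond)
}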
Done:
* implement autosnapper that asserts interval between snapshots
* implement pruner
* job pull: pulling + pruning
* job source: autosnapping + serving
TODO:
* job source: pruning
* job local: everything
* fatal errors, such as serve not being able to bind its socket, must be
more visible
* couldn't things that need a snapshotprefix just use an interface
Prefixer() instead? then we could have prefixsnapshotfilter and not
duplicate it every time...
* either go full context.Context or not at all...? just wait because
community climate around it isn't that great and we only need it for
cancellation? roll our own?
How it works:
`zrepl stdinserver CLIENT_IDENTITY`
* connects to the socket in $global.serve.stdinserver.sockdir/CLIENT_IDENTITY
* sends its stdin / stdout file descriptors to the `zrepl daemon` process (see cmsg(3) and the sketch below)
* does nothing more
This enables a setup where `zrepl daemon` is not directly exposed to the
internet but instead all traffic is tunnelled through SSH.
The server with the source job has an authorized_keys file entry for the
public key used by the corresponding pull job:
command="/mnt/zrepl stdinserver CLIENT_IDENTITY" ssh-ed25519 AAAAC3NzaC1E... zrepl@pullingserver
Don't use jobrun for the daemon; just call JobDo() once and let the job
organize itself.
Sacrifice all the oneshot commands; they will be reintroduced as
client calls to the daemon.
The existing ByteStreamRPC requires writing RPC stub + server code
for each RPC endpoint. Does not scale well.
Goal: adding a new RPC call should
- not require writing an RPC stub / handler
- not require modifications to the RPC lib
The wire format is inspired by HTTP2, the API by net/rpc.
Frames are used for framing messages, i.e. a message is made of multiple
frames which are glued together using a frame-bridging reader / writer.
This roughly corresponds to HTTP2 streams, although we're happy with
just one stream at any time and the resulting non-need for flow control,
etc.
Frames are typed using a header. The two most important types are
'Header' and 'Data'.
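A rough sketch of the framing types (names and field sizes are assumptions):

package rpcwire // hypothetical

// FrameType distinguishes the kinds of frames; 'Header' and 'Data' are the
// two most important ones.
type FrameType uint8

const (
    FrameTypeHeader FrameType = iota + 1
    FrameTypeData
    FrameTypeTrailer // e.g. marks the end of a message (assumed, for illustration)
)

// frameHeader precedes every frame payload. A message is the concatenation
// of the payloads of consecutive frames of the same type; the frame-bridging
// reader / writer hides this chunking behind io.Reader / io.Writer.
type frameHeader struct {
    Type       FrameType
    PayloadLen uint32
}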
The RPC protocol is built on top of this:
- Client sends a header => multiple frames of type 'header'
- Client sends request body => multiple frames of type 'data'
- Server reads a header => multiple frames of type 'header'
- Server reads request body => multiple frames of type 'data'
- Server sends response header => ...
- Server sends response body => ...
An RPC header is serialized JSON and always the same structure.
The body is of the type specified in the header.
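For illustration, the header could be a small struct like the following (the
field names are made up; the real structure differs):

package rpcwire // hypothetical, continued from the framing sketch above

// requestHeader is serialized as JSON into the 'header' frames; the body
// that follows in the 'data' frames has the type named / implied here.
type requestHeader struct {
    Endpoint         string `json:"endpoint"`         // server handler to invoke
    BodyIsByteStream bool   `json:"bodyIsByteStream"` // raw stream (io.Reader) vs. JSON body
}

// responseHeader mirrors requestHeader for the reply and carries an error
// string if the handler failed.
type responseHeader struct {
    Error            string `json:"error,omitempty"`
    BodyIsByteStream bool   `json:"bodyIsByteStream"`
}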
The RPC server and client use some semi-fancy reflection techniques to
automatically infer the data type of the request/response body based on
the method signature of the server handler; or the client parameters,
respectively.
This boils down to a special case for io.Reader: readers are just dumped
into a series of data frames as efficiently as possible.
All other types are (de)serialized using encoding/json.
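A condensed sketch of the reflection idea (the handler convention and all
names are assumptions modeled after net/rpc, not the real API):

package rpcwire // hypothetical

import (
    "encoding/json"
    "io"
    "reflect"
)

// Assumed handler convention:
//   func (s *MyServer) DoSomething(req ReqType, resp *RespType) error

var readerType = reflect.TypeOf((*io.Reader)(nil)).Elem()

// decodeRequestBody infers the request type from the handler's signature and
// decodes the body accordingly: io.Reader is passed through as a stream,
// everything else is decoded from JSON.
func decodeRequestBody(handler reflect.Method, body io.Reader) (reflect.Value, error) {
    reqType := handler.Type.In(1) // In(0) is the receiver
    if reqType == readerType || reqType.Implements(readerType) {
        return reflect.ValueOf(body), nil
    }
    req := reflect.New(reqType) // *ReqType
    if err := json.NewDecoder(body).Decode(req.Interface()); err != nil {
        return reflect.Value{}, err
    }
    return req.Elem(), nil
}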
The RPC layer and Frame Layer log some arbitrary messages that proved
useful during debugging. By default, they log to a no-op logger, which
should not have a big impact on performance.
pprof analysis shows the implementation spends its CPU time roughly as follows:
- 60% waiting for syscalls
- 30% in memmove
- 10% ...
On an Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz running Linux 4.12, the
implementation achieved ~3.6GiB/s.
Future optimization may include splice(2) / vmsplice(2) on Linux, although
this doesn't fit so well with the heavy use of io.Reader / io.Writer
throughout the codebase.
The existing hackaround for local calls was re-implemented to fit the
new interface of RPCServer and RPCClient.
The 'R'PC method invocation is a bit slower because reflection is
involved in between, but otherwise performance should be no different.
The RPC code currently does not support multipart requests and thus does
not support the equivalent of a POST.
Thus, the switch to the new rpc code had the following fallout:
- Move request objects + constants from rpc package to main app code
- Sacrifice the hacky 'push = pull me' way of doing push
-> need to further extend RPC to support multipart requests or
something to implement this properly with additional interfaces
-> should be done after replication is abstracted better than separate
algorithms for doPull() and doPush()
The config defines a single data structure that can act both as a Map and
as a Filter (DatasetMapFilter).
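A sketch of that dual role (interface and method names are assumptions, not
the real API):

package config // hypothetical

import "strings"

// The point is that one data structure, built from the wildcard syntax,
// serves both as a mapping (source dataset => target dataset) and as a
// filter (dataset => include / exclude).

type DatasetMapping interface {
    Map(sourceFS string) (target string, ok bool)
}

type DatasetFilter interface {
    Filter(fs string) bool
}

// DatasetMapFilter implements both interfaces; entries map a "prefix<"
// pattern either to a target dataset (mapping mode) or to "ok" / "!"
// (filter mode).
type DatasetMapFilter struct {
    entries map[string]string
}

var _ DatasetMapping = (*DatasetMapFilter)(nil)
var _ DatasetFilter = (*DatasetMapFilter)(nil)

func (m *DatasetMapFilter) Map(sourceFS string) (target string, ok bool) {
    return m.lookup(sourceFS)
}

func (m *DatasetMapFilter) Filter(fs string) bool {
    v, ok := m.lookup(fs)
    return ok && v != "!"
}

// lookup returns the value of the longest-prefix "prefix<" match (simplified).
func (m *DatasetMapFilter) lookup(fs string) (string, bool) {
    best, bestLen, found := "", -1, false
    for pattern, value := range m.entries {
        prefix := strings.TrimSuffix(pattern, "<")
        if strings.HasPrefix(fs, prefix) && len(prefix) > bestLen {
            best, bestLen, found = value, len(prefix), true
        }
    }
    return best, found
}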
Cleanup wildcard syntax along the way (also changes semantics).
Note the docs on the placeholder user property introduced with this
commit. The solution is not really satisfying, but I couldn't think of a
better one off the top of my head.