The new local transport uses socketpair() and a switchboard based on
client identities.
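As an illustration of the switchboard idea, here is a minimal Go
sketch, assuming a map of registered listeners keyed by client
identity; all identifiers are made up and not the actual zrepl code:

```go
package localtransport

import (
	"fmt"
	"net"
	"os"
	"sync"

	"golang.org/x/sys/unix"
)

// switchboard hands one end of a socketpair() to whatever listener
// registered under the client identity the dialer asks for.
type switchboard struct {
	mu        sync.Mutex
	listeners map[string]chan net.Conn // keyed by client identity
}

func newSwitchboard() *switchboard {
	return &switchboard{listeners: make(map[string]chan net.Conn)}
}

// register announces a listener for the given client identity.
func (s *switchboard) register(identity string) chan net.Conn {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan net.Conn)
	s.listeners[identity] = ch
	return ch
}

// dial creates a connected socketpair, delivers one end to the
// listener registered under identity, and returns the other end.
func (s *switchboard) dial(identity string) (net.Conn, error) {
	s.mu.Lock()
	ch, ok := s.listeners[identity]
	s.mu.Unlock()
	if !ok {
		return nil, fmt.Errorf("no listener for client identity %q", identity)
	}
	fds, err := unix.Socketpair(unix.AF_UNIX, unix.SOCK_STREAM, 0)
	if err != nil {
		return nil, err
	}
	dialerEnd, err := fdToConn(fds[0])
	if err != nil {
		unix.Close(fds[1])
		return nil, err
	}
	listenerEnd, err := fdToConn(fds[1])
	if err != nil {
		dialerEnd.Close()
		return nil, err
	}
	ch <- listenerEnd // blocks until the listener accepts
	return dialerEnd, nil
}

// fdToConn wraps a raw fd into a net.Conn (net.FileConn dups the fd,
// so the temporary os.File can be closed immediately).
func fdToConn(fd int) (net.Conn, error) {
	f := os.NewFile(uintptr(fd), "socketpair")
	defer f.Close()
	return net.FileConn(f)
}
```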
The special local job type is gone, which is good: it did not fit into
the 'Active/Passive side' + 'mode' concept used to implement the
duality of push/sink | pull/source.
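In terms of types, that concept might be sketched like this
(illustrative names only, assuming two job structs each parameterized
by a mode):

```go
package job

// Two job types, each parameterized by a mode, cover all four roles;
// a third, special-cased local job has no place in this scheme.
type ActiveSide struct{ mode activeMode }   // the side that initiates
type PassiveSide struct{ mode passiveMode } // the side that serves

type activeMode interface{ Type() string }  // "push" or "pull"
type passiveMode interface{ Type() string } // "sink" or "source"
```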
A failure to take one snapshot does not mean the snapper needs to stop
for all others.
Since users are advised to monitor error logs, snapshot-taking errors
can still be addressed.
The ErrorWait mode allows a potential future Report / Status command to
distinguish normal waits from error waits.
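A sketch of the resulting behavior, with illustrative names: the
snapper attempts every snapshot, logs failures, and ends up in
ErrorWait rather than Waiting if anything went wrong:

```go
package snapper

type State int

const (
	Planning State = iota
	Snapshotting
	Waiting
	ErrorWait // like Waiting, but at least one snapshot failed
)

// snapshotAll never aborts early: a failure on one filesystem must
// not prevent snapshots of the others.
func snapshotAll(fss []string, takeSnap func(fs string) error,
	logErr func(fs string, err error)) State {
	anyFailed := false
	for _, fs := range fss {
		if err := takeSnap(fs); err != nil {
			logErr(fs, err) // users are advised to monitor these logs
			anyFailed = true
		}
	}
	if anyFailed {
		// a future Report/Status command can tell this apart
		// from a normal wait
		return ErrorWait
	}
	return Waiting
}
```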
Also:
- Defensive measures in the control HTTP server (1s timeouts)
  (these prevent the leak even if a request body is never closed;
  see the sketch below)
- Prometheus metrics to track control socket latencies
  (these were used for debugging)
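A sketch of both measures, assuming a plain net/http server on the
control socket; the metric name and label are made up:

```go
package control

import (
	"net"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical histogram for control socket request latencies.
var controlLatency = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "zrepl",
	Subsystem: "control",
	Name:      "request_latency_seconds", // name made up for this sketch
	Help:      "latency of control socket requests",
}, []string{"endpoint"})

// instrument observes the latency of each request to an endpoint.
func instrument(endpoint string, h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		begin := time.Now()
		h.ServeHTTP(w, r)
		controlLatency.WithLabelValues(endpoint).Observe(time.Since(begin).Seconds())
	})
}

func serveControl(l net.Listener, mux *http.ServeMux) error {
	prometheus.MustRegister(controlLatency)
	srv := &http.Server{
		Handler: mux,
		// defensive 1s timeouts: even if a client never closes the
		// request body, the connection is reclaimed
		ReadTimeout:  time.Second,
		WriteTimeout: time.Second,
	}
	return srv.Serve(l)
}
```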
A bookmark with a well-known name, the replication cursor, is used to
track which version was last successfully received by the receiver.
The createtxg that can be retrieved from the bookmark using `zfs get` is
used to set the Replicated attribute of each snap on the sender:
If the snap's CreateTXG > the cursor's, it is not yet replicated,
otherwise it has been.
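As a sketch of the sender-side logic (the bookmark name and helpers
are hypothetical, not zrepl's actual identifiers):

```go
package sender

import (
	"os/exec"
	"strconv"
	"strings"
)

// cursorCreateTXG reads the createtxg property of the cursor bookmark,
// e.g. "pool/fs#zrepl_cursor" (name made up for this sketch).
func cursorCreateTXG(bookmark string) (uint64, error) {
	out, err := exec.Command("zfs", "get", "-Hp", "-o", "value",
		"createtxg", bookmark).Output()
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
}

type Snap struct {
	Name       string
	CreateTXG  uint64
	Replicated bool
}

// markReplicated applies the rule above: a snap created strictly
// after the cursor is not yet replicated.
func markReplicated(snaps []Snap, cursorTXG uint64) {
	for i := range snaps {
		snaps[i].Replicated = snaps[i].CreateTXG <= cursorTXG
	}
}
```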
There is an optional config option to change the behavior to
`CreateTXG >= the cursor's`, and the implementation defaults to that.
The reason: while things work just fine with `CreateTXG > the cursor's`,
ZFS does not provide size estimates in a `zfs send` dry run whose
incremental source is a bookmark (see acd2418), so the default keeps
the snapshot at the cursor around as an incremental source.
However, to enable the use case of keeping the snapshot around only for
the replication, the config flag exists.
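Continuing in the same package as the sketch above, the flag would
merely flip the comparison (the flag name is hypothetical):

```go
// notYetReplicated implements both comparison modes. The default
// (>= semantics) also treats the snap at the cursor's createtxg as
// not replicated, so it stays around and dry-run size estimates can
// be taken from a snapshot instead of the bookmark.
func notYetReplicated(snapTXG, cursorTXG uint64, keepCursorSnap bool) bool {
	if keepCursorSnap { // the default
		return snapTXG >= cursorTXG
	}
	return snapTXG > cursorTXG // opt-in: snapshot kept only for the replication
}
```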