// Code generated by zrepl tooling; DO NOT EDIT.

package tests

var Cases = []Case{BatchDestroy,
	CreateReplicationCursor,
	GetNonexistent,
	HoldsWork,
	IdempotentBookmark,
	IdempotentDestroy,
	IdempotentHold,
	ListFilesystemVersionsFilesystemNotExist,
	ListFilesystemVersionsTypeFilteringAndPrefix,
	ListFilesystemVersionsUserrefs,
	ListFilesystemVersionsZeroExistIsNotAnError,
	ListFilesystemsNoFilter,
	ReceiveForceIntoEncryptedErr,
	ReceiveForceRollbackWorksUnencrypted,
	ReplicationFailingInitialParentProhibitsChildReplication,
	ReplicationIncrementalCleansUpStaleAbstractionsWithCacheOnSecondReplication,
	ReplicationIncrementalCleansUpStaleAbstractionsWithoutCacheOnSecondReplication,
	ReplicationIncrementalDestroysStepHoldsIffIncrementalStepHoldsAreDisabledButStepHoldsExist,

	// 2023-07-04: fix handling of tentative cursor presence if the protection
	// strategy doesn't use it (#714).
	//
	// Before that fix, zrepl would panic in the `check` phase of
	// `endpoint.Send()`'s `TryBatchDestroy` call whenever the current
	// protection strategy does NOT produce a tentative replication cursor AND
	//   1. `FromVersion` is a tentative cursor bookmark, or
	//   2. `FromVersion` is a snapshot, and a tentative cursor bookmark exists
	//      for that snapshot, or
	//   3. `FromVersion` is a bookmark != the tentative cursor bookmark, but a
	//      tentative cursor bookmark exists for the same snapshot as the
	//      `FromVersion` bookmark.
	// In those cases, `check` concluded that we would delete `FromVersion`.
	// It came to that conclusion because, when the protection strategy doesn't
	// produce a tentative replication cursor, the on-disk tentative cursor is
	// not among the live abstractions and therefore ends up in `obsoleteAbs`.
	// The scenarios above can happen if the user switches the protection
	// strategy from one with a tentative cursor to one without it while a
	// tentative replication cursor still exists on disk; the workaround was to
	// rename the tentative cursor.
	// In all cases above, `TryBatchDestroy` would have destroyed the tentative
	// cursor. In case 1, that would fail the `Send` step and potentially break
	// replication if the cursor is the last common bookmark, so the `check`
	// conclusion was correct. In cases 2 and 3, deleting the tentative cursor
	// would have been fine because `FromVersion` is a different entity than
	// the tentative cursor, so destroying it would be the right call.
	// The fix:
	//   * add `FromVersion` to the `liveAbs` set of live abstractions, and
	//   * rewrite the `check` closure to identify the concrete ZFS object by
	//     its full dataset path (`fullpath`) instead of by
	//     `zfs.FilesystemVersionEqualIdentity`, which matches only by GUID;
	//     holds have no dataset path and are never the `FromVersion`, so they
	//     are disregarded.
	// Fixes #666.
	// (An illustrative sketch of the path-based check follows the Cases list.)
	ReplicationIncrementalHandlesFromVersionEqTentativeCursorCorrectly,
	ReplicationIncrementalIsPossibleIfCommonSnapshotIsDestroyed,
	ReplicationInitialAll,
	ReplicationInitialFail,
	ReplicationInitialMostRecent,
	ReplicationIsResumableFullSend__both_GuaranteeResumability,
	ReplicationIsResumableFullSend__initial_GuaranteeIncrementalReplication_incremental_GuaranteeIncrementalReplication,
	ReplicationIsResumableFullSend__initial_GuaranteeResumability_incremental_GuaranteeIncrementalReplication,
	ReplicationPlaceholderEncryption__EncryptOnReceiverUseCase__WorksIfConfiguredWithInherit,
	ReplicationPlaceholderEncryption__UnspecifiedIsOkForClientIdentityPlaceholder,
	ReplicationPlaceholderEncryption__UnspecifiedLeadsToFailureAtRuntimeWhenCreatingPlaceholders,
	ReplicationPropertyReplicationWorks,
	ReplicationReceiverErrorWhileStillSending,
	ReplicationStepCompletedLostBehavior__GuaranteeIncrementalReplication,
	ReplicationStepCompletedLostBehavior__GuaranteeResumability,
	ResumableRecvAndTokenHandling,
	ResumeTokenParsing,

	// 2022-07-10: rework resume token validation to allow resuming from raw
	// sends of unencrypted datasets.
	//
	// Before this change, resuming from an unencrypted dataset with
	// send.raw=true specified wouldn't work with zrepl due to overly
	// restrictive resume token checking. An initial attempt was made in
	// https://github.com/zrepl/zrepl/pull/503 but it didn't address the core
	// of the problem: zrepl assumed that if a resume token contained
	// `rawok=true, compressok=true`, the resulting send would be encrypted.
	// But if the sender dataset is unencrypted, such a resume actually results
	// in an unencrypted send, which can be perfectly legitimate, yet zrepl
	// failed to recognize that.
	//
	// BACKGROUND
	// The following OpenZFS code is insightful regarding how the various
	// ${X}ok values in the resume token are handled:
	// - https://github.com/openzfs/zfs/blob/6c3c5fcfbe27d9193cd131753cc7e47ee2784621/module/zfs/dmu_send.c#L1947-L2012
	// - https://github.com/openzfs/zfs/blob/6c3c5fcfbe27d9193cd131753cc7e47ee2784621/module/zfs/dmu_recv.c#L877-L891
	// - https://github.com/openzfs/zfs/blob/6c3c5fc/lib/libzfs/libzfs_sendrecv.c#L1663-L1672
	// Basically, some zfs send flags make the DMU send code set DMU send
	// stream feature flags, although it's not a pure mapping: which feature
	// flags are used depends somewhat on the dataset (e.g., whether it is
	// encrypted, or whether it uses zstd). The receiver then looks at some
	// (but not all) feature flags and maps them to ${X}ok dataset ZAP
	// attributes. These are funnelled back to the sender 1:1 through the
	// resume token, and the sender turns them into lzc flags.
	// As an example, take `zfs send --raw`: if the sender requests a raw send
	// of an unencrypted dataset, the send stream (and hence the resume token)
	// will not have the raw-stream feature flag set, so the resume token will
	// not have the rawok field. Instead, it will have compressok, embedok,
	// and, depending on whether large blocks are present in the dataset,
	// largeblockok.
	//
	// WHAT'S ZREPL'S ROLE IN THIS?
	// zrepl provides a virtual `encrypted` send flag that is like `raw` but
	// further ensures that we only send encrypted datasets. For any other
	// resume token fields, it shouldn't do any checking, because it's a futile
	// effort to keep up with ZFS send/recv features that are orthogonal to
	// encryption.
	//
	// CHANGES MADE IN THIS COMMIT
	// - Rip out a bunch of needless checking that zrepl did during planning.
	//   Those checks existed to give better error messages, but the error
	//   messages produced by the endpoint.Sender.Send RPC upon send-args
	//   validation failure are good enough.
	// - Add platformtests to validate all combinations of
	//   (unencrypted/encrypted FS) x (send.encrypted = true | false) x
	//   (send.raw = true | false), for both non-resuming and resuming sends.
	// Additional manual testing done:
	//   1. With zrepl 0.5, set up an unencrypted dataset with send.raw=true
	//      specified and no send.encrypted specified.
	//   2. Observe that a regular non-resuming send works, but resuming
	//      doesn't.
	//   3. Upgrade zrepl to this change.
	//   4. Observe that both regular and resuming sends work.
	// Closes https://github.com/zrepl/zrepl/pull/613.
	// (An illustrative sketch of the relaxed policy follows the Cases list.)
	SendArgsValidationEE_EncryptionAndRaw,
	SendArgsValidationEncryptedSendOfUnencryptedDatasetForbidden__EncryptionSupported_false,
	SendArgsValidationEncryptedSendOfUnencryptedDatasetForbidden__EncryptionSupported_true,
	SendArgsValidationResumeTokenDifferentFilesystemForbidden,
	SendArgsValidationResumeTokenEncryptionMismatchForbidden,

	// 2021-08-20: zfs: rewrite SendStream, fix bug in Close() on FreeBSD, add
	// platformtests.
	//
	// This commit was motivated by https://github.com/zrepl/zrepl/issues/495,
	// where, on FreeBSD with OpenZFS 2.0, a SendStream.Close() call might wait
	// indefinitely for `zfs send` to exit.
	// The reason is that, due to the refactoring done for redacted send & recv
	// (https://github.com/openzfs/zfs/commit/30af21b02569ac192f52ce6e6511015f8a8d5729),
	// the `dump_bytes` function, which writes to the pipe, executes in a
	// separate thread (synctask taskq) iff not `HAVE_LARGE_STACKS`. The
	// `zfs send` process/thread waits for that taskq thread using an
	// uninterruptible primitive, so when we SIGKILL `zfs send`, that signal
	// doesn't reach the right thread to interrupt the pipe write.
	// Theoretically this affects both Linux and FreeBSD, but most Linux builds
	// have `HAVE_LARGE_STACKS`, and since
	// https://github.com/openzfs/zfs/pull/12350/files OpenZFS on FreeBSD has
	// `HAVE_LARGE_STACKS` as well. However, at least until FreeBSD 13.1,
	// possibly for the entire 13 lifecycle, we're going to have to live with
	// that oddity.
	// Measures taken in this commit:
	// - Report the behavior as an upstream bug:
	//   https://github.com/openzfs/zfs/issues/12500
	// - Change the SendStream code so that it closes zrepl's read end of the
	//   pipe (see comment in code).
	// - Clean up and make SendStream's state handling explicit.
	// - Write extensive platformtests for SendStream; they pass on my Linux
	//   install and on FreeBSD 12, FreeBSD 13 still needs testing.
	// Fixes https://github.com/zrepl/zrepl/issues/495.
	// (An illustrative sketch of the shutdown order follows the Cases list.)
	SendStreamCloseAfterBlockedOnPipeWrite,
	SendStreamCloseAfterEOFRead,
	SendStreamMultipleCloseAfterEOF,
	SendStreamMultipleCloseBeforeEOF,
	SendStreamNonEOFReadErrorHandling,
	UndestroyableSnapshotParsing,
}
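
// Illustrative sketch for the #714 fix noted above. This is an editorial
// example, not part of the generated case list and not zrepl's actual API;
// all identifiers here are hypothetical. It shows the idea of the rewritten
// check: the send's FromVersion is identified by its full dataset path rather
// than by GUID equality, so a tentative replication cursor bookmark that
// merely shares the FromVersion's GUID stays destroyable, while the
// FromVersion object itself is spared.
type abstractionSketch struct {
	FullPath string // e.g. "pool/fs#zrepl_CURSORTENTATIVE_..." or "pool/fs@snap"
	IsHold   bool   // holds have no dataset path and are never the FromVersion
}

// destroyableSketch filters the to-be-destroyed set: only the exact ZFS
// object named by fromVersionFullPath is protected from batch destruction.
func destroyableSketch(fromVersionFullPath string, obsolete []abstractionSketch) []abstractionSketch {
	var out []abstractionSketch
	for _, a := range obsolete {
		if !a.IsHold && a.FullPath == fromVersionFullPath {
			continue // this is the FromVersion itself; keep it alive
		}
		out = append(out, a)
	}
	return out
}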
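
// Illustrative sketch for the resume-token rework noted above (hypothetical
// names, not zrepl's actual parser or validation code): zrepl's own
// "encrypted" send option only requires the source dataset to be encrypted;
// the ${X}ok fields of a resume token are not used to guess whether the
// resumed stream will be encrypted.
type resumeTokenSketch struct {
	RawOK      bool // present when the raw-stream feature flag was set
	CompressOK bool // present for compressed (and raw) streams
}

func validateResumeSketch(datasetEncrypted, sendOptionEncrypted bool, tok resumeTokenSketch) (ok bool, reason string) {
	if sendOptionEncrypted && !datasetEncrypted {
		return false, "send option encrypted=true requires an encrypted source dataset"
	}
	// Deliberately no check of tok.RawOK / tok.CompressOK here: a raw resume
	// of an unencrypted dataset legitimately yields an unencrypted stream.
	_ = tok
	return true, ""
}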
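
// Illustrative sketch for the SendStream.Close() fix noted above
// (hypothetical interfaces, not the real zfs.SendStream): close zrepl's read
// end of the pipe before reaping `zfs send`, so the child's blocked pipe
// write fails instead of hanging in an uninterruptible taskq thread.
type pipeReadEndSketch interface{ Close() error }

type sendProcessSketch interface {
	Kill() error
	Wait() error
}

func closeSendStreamSketch(readEnd pipeReadEndSketch, child sendProcessSketch) error {
	// Closing our read end first makes the child's write() fail (EPIPE)
	// rather than block forever on a full pipe that nobody will drain.
	if err := readEnd.Close(); err != nil {
		return err
	}
	// Now the kill can take effect and Wait() does not hang indefinitely.
	if err := child.Kill(); err != nil {
		return err
	}
	return child.Wait()
}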