After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.
The bug was the default config overriding the main config; it was fixed
by noting when the defaults had actually been changed.
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team, who regard Sscanf as a mistake.
This change only uses Sscan for integers and uses the correct routine
for everything else.
It also implements parsing of time.Duration.
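A hedged sketch of the approach using only the standard library (the helper name and type switch below are illustrative, not rclone's actual config code): integers keep going through fmt.Sscan, while bools and durations use strconv.ParseBool and time.ParseDuration.

```go
package config // illustrative package name

import (
	"fmt"
	"strconv"
	"time"
)

// setValue is a hypothetical helper showing the idea, not rclone's real
// code: Sscan is kept for integers only, everything else uses the
// dedicated stdlib routine.
func setValue(in string, out any) error {
	switch v := out.(type) {
	case *bool:
		b, err := strconv.ParseBool(in) // fmt.Sscanln mis-parses bools
		if err != nil {
			return err
		}
		*v = b
	case *time.Duration:
		d, err := time.ParseDuration(in) // duration parsing added here
		if err != nil {
			return err
		}
		*v = d
	case *int:
		_, err := fmt.Sscan(in, v) // Sscan still works fine for integers
		return err
	default:
		return fmt.Errorf("unsupported type %T", out)
	}
	return nil
}
```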
See: https://github.com/golang/go/issues/43306
Previously this used `rclone test makefiles --seed 0`, which sets a
random seed, and every now and again we got this error:
Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory
This was because the same name was created as a file in the src and as a
dir in the dst.
This fixes it by using deterministic seeds each time.
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like
Failed to make new multi hasher: requested set 000001ff contains unknown hash types
This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.
This was added in this commit
d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails
to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supported. It brought this
time down to 20s.
This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine), but those tests are only run on the
integration test machine and not in all the CI.
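To illustrate the shape of the fix (the helper below is hypothetical and the surrounding test code is paraphrased; hash.NewHashSet, Set.GetOne and the IsLocal feature are assumed to be available from rclone's fs and fs/hash packages):

```go
// limitHashes paraphrases the fix: only restrict the hash set when the
// destination backend is the local backend, so multi-hash and wrapping
// backends keep their full set.
func limitHashes(f fs.Fs) hash.Set {
	hashes := f.Hashes()
	if f.Features().IsLocal {
		// computing every supported hash made the local backend test
		// take >100s, so keep just one hash there
		hashes = hash.NewHashSet(hashes.GetOne())
	}
	return hashes
}
```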
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.
This fixes the problem by making the need to copy metadata explicit
rather than implicit in a value being present or not.
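Illustratively (the struct and field names below are assumptions, not rclone's actual types; fs.Metadata is rclone's metadata map type), the change amounts to carrying an explicit flag rather than inferring intent from a value being present:

```go
// copyOptions sketches the idea: whether to copy metadata is stated
// explicitly instead of being implied by the metadata map being non-nil.
type copyOptions struct {
	copyMetadata bool        // set only when the destination actually supports metadata
	metadata     fs.Metadata // may legitimately be empty even when copyMetadata is true
}
```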
In this commit
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
we broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did.
This was fixed by setting the "mtime" in the metadata if it was
missing.
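A minimal sketch of the fix, assuming the metadata is held in an fs.Metadata map called meta and the source object is src (both names illustrative); rclone stores "mtime" as an RFC 3339 time string:

```go
// If the source backend supplied no mtime metadata, carry the
// modification time over explicitly so multipart transfers still set it.
if _, found := meta["mtime"]; !found {
	meta["mtime"] = src.ModTime(ctx).Format(time.RFC3339Nano)
}
```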
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.
This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream
This also meant that these were being logged as two transfers, which
was a little strange.
This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.
This should have no effect on reliability of the transfers as we retry
Put if possible.
This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.
See #7848
Before this change, on files which have unknown length (like Google
Documents) the SrcFsType would be set to "memoryFs".
This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.
Fixes #7848
Before this change, attempting to copy a large Google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used an
object.NewStaticObjectInfo with a `nil` Fs to upload the file, which
caused the crash in the metadata mapper code.
This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, returning MemoryFs instead, which is consistent with the
Rcat code when the source is sized below the --streaming-upload-cutoff
threshold.
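A sketch of the guard, assuming it sits inside fs/object.NewStaticObjectInfo with the fs.Info parameter named f (the real code may differ in detail):

```go
// Never hand a nil Fs to downstream code such as the metadata mapper;
// MemoryFs matches what Rcat reports for sized sources below the cutoff.
if f == nil {
	f = MemoryFs
}
```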
Fixes #7845
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.
This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.
This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.
It also adds a test to check metadata is preserved when doing
multipart transfers.
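The new step looks roughly like this hedged sketch (the optional SetMetadataer-style interface and the exact call site are assumptions; rclone's real names may differ):

```go
// After the multipart download completes and --metadata is in use,
// try to set the metadata on the finished object.
if do, ok := obj.(fs.SetMetadataer); ok {
	if err := do.SetMetadata(ctx, meta); err != nil {
		return err
	}
} else {
	// the backend offered OpenWriterAt but the Object can't take
	// metadata: log at ERROR level and complete successfully
	fs.Errorf(obj, "can't set metadata on this backend")
}
```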
Fixes #7424
Before this change, an `rclone lsjson --encrypted` command with
additional `--crypt-` parameters supplied on the command line, such as:
rclone lsjson --crypt-description XXX --encrypted secret:
produced an error like this:
Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...
This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.
Fixes #7833
Before this change we synced directories regardless of whether the
source directory existed. It is irrelevant whether the source directory
exists or not; what we need to know is whether the directory has been
modified.
Co-authored-by: nielash <nielronash@gmail.com>
Before this change we used the same data structure for managing empty
directories both for --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.
These two uses are subtly incompatible, so this change uses a separate
data structure for each use. This makes it more accurate and easier to
understand.
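Illustratively (the variable names are assumptions, not the exact fields in rclone's sync package), the change keeps two maps instead of sharing one:

```go
// One map per job, so the two bookkeeping uses can no longer interfere.
srcEmptyDirs := map[string]fs.DirEntry{} // candidates for --delete-empty-src-dirs (move)
dstEmptyDirs := map[string]fs.DirEntry{} // candidates for --create-empty-src-dirs (sync/copy/move)
```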