GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5 GiB, so any file
bigger than 5 GiB that rclone attempts to server side copy will fail.
This patch works around the problem by adding a quirk for GCS which
raises --s3-copy-cutoff to the maximum. This means that rclone will
never use multi-part copies for files in GCS, including files bigger
than 5 GiB which (according to the AWS documentation) must be copied
with multi-part copy. However this seems to work with GCS.
See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
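A minimal sketch of the shape of the quirk, with stand-in types rather
than the real backend code:

```go
package main

import (
	"fmt"
	"math"
)

// Options is a cut-down stand-in for the s3 backend's option struct.
type Options struct {
	Provider   string
	CopyCutoff int64
}

// setQuirks sketches the workaround: for the GCS provider, raise the
// copy cutoff to the maximum so a multi-part server side copy is never
// attempted, whatever the file size.
func setQuirks(opt *Options) {
	if opt.Provider == "GCS" {
		opt.CopyCutoff = math.MaxInt64
	}
}

func main() {
	opt := &Options{Provider: "GCS", CopyCutoff: 4992 << 20} // just below 5 GiB
	setQuirks(opt)
	fmt.Println(opt.CopyCutoff)
}
```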
A seafile server can be configured to use a relative URL as
FILE_SERVER_ROOT in order to support more than one hostname/IP (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360).
The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.
With this commit the backend supports both absolute and relative URLs.
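A sketch of the resolution logic using net/url; resolveFileServerURL is
a hypothetical helper, not the backend's actual function:

```go
package main

import (
	"fmt"
	"net/url"
)

// resolveFileServerURL accepts both absolute download/upload URLs and
// relative ones (as produced by a relative FILE_SERVER_ROOT) and
// resolves the latter against the configured server base URL.
func resolveFileServerURL(base *url.URL, raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	// ResolveReference returns the reference unchanged if it is
	// already an absolute URL.
	return base.ResolveReference(u).String(), nil
}

func main() {
	base, _ := url.Parse("https://seafile.example.com")
	urls := []string{
		"/seafhttp/files/abc/file.txt",                        // relative FILE_SERVER_ROOT
		"https://cdn.example.com/seafhttp/files/abc/file.txt", // absolute
	}
	for _, raw := range urls {
		resolved, err := resolveFileServerURL(base, raw)
		if err != nil {
			panic(err)
		}
		fmt.Println(resolved)
	}
}
```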
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.
This turned out to be a race condition updating the block count which
caused blocks to be duplicated.
This bug was introduced in v1.64.0 by this commit and will be fixed in v1.65.2:
0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056
The race seems to be triggered most often when `--checksum` is used, but can happen without it.
Unfortunately Azure blob does not check the MD5 that we send it, so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:
ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"
This adds a check that the blocklist we upload is as we expected,
which would have caught the problem had it been in place earlier.
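An illustrative sketch of both halves of the fix, with stand-in types:
update the block state under a lock, and verify the blocklist before
committing it. None of these names are the actual azureblob code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// chunkWriter is an illustrative stand-in for a multi-thread chunk writer.
type chunkWriter struct {
	mu       sync.Mutex
	blockIDs []string // ordered blocklist sent to the server at commit time
	written  int64    // number of chunks written, updated atomically
}

func (w *chunkWriter) writeChunk(id string) {
	// Updating the list under a lock and the count atomically avoids
	// the kind of race which duplicated blocks in the original bug.
	w.mu.Lock()
	w.blockIDs = append(w.blockIDs, id)
	w.mu.Unlock()
	atomic.AddInt64(&w.written, 1)
}

// checkBlockList is the kind of defensive check described above: make
// sure the blocklist we are about to commit matches what we wrote.
func (w *chunkWriter) checkBlockList() error {
	seen := make(map[string]bool, len(w.blockIDs))
	for _, id := range w.blockIDs {
		if seen[id] {
			return fmt.Errorf("duplicate block ID %q in blocklist", id)
		}
		seen[id] = true
	}
	if int64(len(w.blockIDs)) != atomic.LoadInt64(&w.written) {
		return fmt.Errorf("blocklist has %d blocks, expected %d", len(w.blockIDs), w.written)
	}
	return nil
}

func main() {
	w := &chunkWriter{}
	w.writeChunk("AAAA")
	w.writeChunk("BBBB")
	fmt.Println(w.checkBlockList()) // <nil>
}
```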
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
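A hypothetical workaround sketch (volumeName is illustrative, not the
rclone code): trim the `\\?\` prefix before calling
filepath.VolumeName and restore it afterwards, which yields `\\?\C:`
on the affected Go versions:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// volumeName works around the regression: strip the \\?\ prefix, ask
// filepath for the volume name of the rest, then put the prefix back.
func volumeName(p string) string {
	const prefix = `\\?\`
	if strings.HasPrefix(p, prefix) {
		return prefix + filepath.VolumeName(strings.TrimPrefix(p, prefix))
	}
	return filepath.VolumeName(p)
}

func main() {
	fmt.Println(volumeName(`\\?\C:\Users\me\dir`)) // \\?\C: (on Windows)
}
```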
When f.opt.MaxAge == 0, f.db is never set, but several methods later
assume it is set and attempt to access it, causing an invalid memory
address error. This change fixes the issue in a few spots (there may
still be others I haven't yet encountered).
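A sketch of the guard with cut-down stand-in types; the real backend
code differs:

```go
package main

import (
	"errors"
	"fmt"
)

// kvDB is a stand-in for the key-value store the backend lazily opens.
type kvDB struct{ data map[string][]byte }

func (d *kvDB) Get(key string) ([]byte, error) { return d.data[key], nil }

// Fs is a cut-down stand-in for the backend struct described above.
type Fs struct {
	db *kvDB // nil when opt.MaxAge == 0
}

// get shows the fix: check for a nil db instead of dereferencing it.
func (f *Fs) get(key string) ([]byte, error) {
	if f.db == nil {
		return nil, errors.New("db not initialized because MaxAge is 0")
	}
	return f.db.Get(key)
}

func main() {
	f := &Fs{} // MaxAge == 0, so db was never set
	_, err := f.get("file.txt")
	fmt.Println(err)
}
```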
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.
Before this change it returned a root which included the file path.
Because Root() was wrong, the check which detects a file being moved
over itself failed.
This adds an integration test to check it for all backends.
See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
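The contract can be shown with rclone's public API; this snippet
assumes /tmp/dir/file.txt exists locally:

```go
package main

import (
	"context"
	"fmt"

	_ "github.com/rclone/rclone/backend/local"
	"github.com/rclone/rclone/fs"
)

func main() {
	// Pointing NewFs at an existing file makes it return fs.ErrorIsFile
	// together with an Fs rooted at the parent directory. Before this
	// change the returned root still included the file path.
	f, err := fs.NewFs(context.Background(), "/tmp/dir/file.txt")
	if err == fs.ErrorIsFile {
		fmt.Println(f.Root()) // should print /tmp/dir, not /tmp/dir/file.txt
	}
}
```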
Since `tokenRenewer` adds a Shutdown method, we should call it to
clean up resources.
Changed backends: onedrive, box, pcloud, amazonclouddrive, hidrive,
jottacloud, sharefile, premiumizeme
Signed-off-by: rkonfj <rkonfj@gmail.com>
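A sketch of the shape of the change with stand-in types (the real
renewer lives in rclone's oauthutil package):

```go
package main

import "context"

// tokenRenewer is a stand-in for the oauth token renewer; the real one
// gained a Shutdown method which stops its background goroutine.
type tokenRenewer struct{ stop chan struct{} }

func (r *tokenRenewer) Shutdown() { close(r.stop) }

// Fs is a cut-down backend struct.
type Fs struct{ tokenRenewer *tokenRenewer }

// Shutdown shows the change: the backends listed above now shut the
// renewer down to clean up its resources.
func (f *Fs) Shutdown(ctx context.Context) error {
	f.tokenRenewer.Shutdown()
	return nil
}

func main() {
	f := &Fs{tokenRenewer: &tokenRenewer{stop: make(chan struct{})}}
	_ = f.Shutdown(context.Background())
}
```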
This error was introduced in this commit when refactoring the list
routine:
b8591b230d onedrive: implement ListR method which gives --fast-list support
The error was caused by OneNote files not being skipped properly.
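An illustrative sketch of the skip; item is a stand-in for the
OneDrive API item type:

```go
package main

import "fmt"

// item is a stand-in for the OneDrive API item; OneNote files carry a
// package facet with type "oneNote" and cannot be downloaded normally.
type item struct {
	Name        string
	PackageType string
}

// filterOneNote mirrors the skip the recursive lister needs: OneNote
// files must be dropped just as the non-recursive List drops them.
func filterOneNote(items []item) (out []item) {
	for _, it := range items {
		if it.PackageType == "oneNote" {
			continue // skip OneNote files
		}
		out = append(out, it)
	}
	return out
}

func main() {
	fmt.Println(filterOneNote([]item{{"notes", "oneNote"}, {"doc.txt", ""}}))
}
```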
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library can now pass the hostname instead.
The update requires a small change in the dial method call.
Fixes rclone#6672
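A sketch of the changed dial, assuming the cloudsoda fork exposes a
DialConn method which takes the server address; that method and its
signature are an assumption from memory, so check the library before
relying on it:

```go
package smbdial

import (
	"context"
	"net"

	smb2 "github.com/cloudsoda/go-smb2"
)

// dial sketches the small change: pass the hostname (not the raw IP)
// through to the SMB library. DialConn and its signature are assumed
// here, not verified against the library.
func dial(ctx context.Context, host, port string) (*smb2.Session, error) {
	addr := net.JoinHostPort(host, port)
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	d := &smb2.Dialer{
		Initiator: &smb2.NTLMInitiator{User: "guest"},
	}
	// The hostname in addr ends up in the SMB connect request instead
	// of the server's IP address.
	return d.DialConn(ctx, conn, addr)
}
```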
Before this change ListR was unconditionally enabled on onedrive.
This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.
Fixes #7362
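The shape of the gate, sketched with cut-down stand-ins for the option
and feature structs:

```go
package main

import "fmt"

// Options and Features are cut-down stand-ins for the backend config
// and the rclone feature flags struct.
type Options struct{ Delta bool }
type Features struct{ ListR func() } // non-nil means --fast-list works

// applyDeltaGate sketches the change: only advertise ListR (and hence
// --fast-list support) when the --onedrive-delta flag is supplied.
func applyDeltaGate(opt Options, features *Features) {
	if !opt.Delta {
		features.ListR = nil
	}
}

func main() {
	feat := &Features{ListR: func() {}}
	applyDeltaGate(Options{Delta: false}, feat)
	fmt.Println(feat.ListR == nil) // true: recursive listing disabled
}
```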
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.
Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.
This problem was detected by the new TestMultithreadCopy integration
tests.
Fixes #7411
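PartialUploads is a real flag in rclone's fs.Features struct; a
minimal illustration (backends normally set it via
(&fs.Features{...}).Fill(ctx, f) in NewFs):

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

func main() {
	// Sketch: what the smb backend now declares. Incoming files are
	// visible on the server while they upload, so PartialUploads is
	// true, and multi-thread copy uses the flag to know it must set
	// the modtime explicitly once the upload completes.
	features := &fs.Features{PartialUploads: true}
	fmt.Println(features.PartialUploads)
}
```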
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.
This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.
The new library passes the integration tests.
Fixes#6733
Before this change, streaming files whose size was an exact multiple
of the chunk size would cause rclone to attempt to stream a 0 sized
chunk, which was rejected by the b2 servers.
This bug was noticed by the new integration tests for chunked streaming.
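A sketch of the loop shape that avoids the empty final chunk;
uploadChunks and its callback are illustrative, not the b2 backend
code:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// uploadChunks streams in as fixed-size chunks, taking care not to
// emit the empty final chunk which the b2 servers reject when the
// stream is an exact multiple of the chunk size.
func uploadChunks(in io.Reader, chunkSize int, upload func(part int, data []byte) error) error {
	for part := 1; ; part++ {
		buf := make([]byte, chunkSize)
		n, err := io.ReadFull(in, buf)
		if n > 0 {
			if uerr := upload(part, buf[:n]); uerr != nil {
				return uerr
			}
		}
		// On an exact multiple of chunkSize the final read returns
		// n == 0 with io.EOF; returning here skips the 0 sized chunk.
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	data := bytes.Repeat([]byte("x"), 8) // exactly 2 chunks of 4
	err := uploadChunks(bytes.NewReader(data), 4, func(part int, b []byte) error {
		fmt.Printf("part %d: %d bytes\n", part, len(b))
		return nil
	})
	fmt.Println(err)
}
```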
Before this change the b2 servers would complain, as the copy was sent
as only a single part transfer.
This was noticed by the new integration tests for server side chunked copy.
This commit fixed the problem but made the integration tests fail:
33376bf399 dropbox: fix missing encoding for rclone purge
This fixes the problem properly by making sure we send the encoded or
non-encoded root to the right places.
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.
This patch changes it so that Metadata support is enabled dynamically
if it is set in the context.
This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
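The dynamic check can use rclone's public config API, which reflects
the --metadata flag from the context:

```go
package main

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
)

// metadataEnabled sketches the dynamic check: instead of remembering
// whether the Fs was created with metadata on, consult the context.
func metadataEnabled(ctx context.Context) bool {
	return fs.GetConfig(ctx).Metadata
}

func main() {
	fmt.Println(metadataEnabled(context.Background()))
}
```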
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.
This change skips the btime metadata if we are updating an existing
object, but allows it otherwise.
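An illustrative sketch of the skip using rclone's standard metadata
key names; the helper itself is hypothetical:

```go
package main

import "fmt"

// filterUpdateMetadata sketches the fix: drop the "btime" key when we
// are updating an existing object, since Google Drive does not allow
// the created time to be changed, but keep it for new objects.
func filterUpdateMetadata(meta map[string]string, updating bool) map[string]string {
	out := make(map[string]string, len(meta))
	for k, v := range meta {
		if updating && k == "btime" {
			continue // created time cannot be updated on Google Drive
		}
		out[k] = v
	}
	return out
}

func main() {
	meta := map[string]string{
		"btime": "2020-01-01T00:00:00Z",
		"mtime": "2024-01-01T00:00:00Z",
	}
	fmt.Println(filterUpdateMetadata(meta, true)) // btime dropped
}
```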
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work, controlled by flags:
  - `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism