Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error when doing server side copies:
ManagedIdentityCredential.GetToken() requires exactly one scope
This was introduced in:
3a5ddfcd3c azureblob: implement multipart server side copy
This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.
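The flow is roughly the following sketch against the azblob SDK. This
is illustrative only, not rclone's actual code: the function name, the
one-hour validity window and the URL assembly are assumptions.

    package azblobsketch

    import (
        "context"
        "fmt"
        "time"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
        "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
        "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
    )

    // readSASURL mints a short-lived, read-only SAS URL for a single blob
    // using a user delegation key, so the server side copy can read the
    // source without shared key or SAS URL credentials.
    func readSASURL(ctx context.Context, svc *service.Client, container, blob string) (string, error) {
        start := time.Now().UTC().Add(-10 * time.Minute) // allow for clock skew
        expiry := time.Now().UTC().Add(time.Hour)        // assumed validity window

        // Ask the service for a user delegation key covering the window.
        udc, err := svc.GetUserDelegationCredential(ctx, service.KeyInfo{
            Start:  to.Ptr(start.Format(sas.TimeFormat)),
            Expiry: to.Ptr(expiry.Format(sas.TimeFormat)),
        }, nil)
        if err != nil {
            return "", fmt.Errorf("user delegation credential: %w", err)
        }

        // Sign read-only SAS query parameters scoped to just this blob.
        params, err := sas.BlobSignatureValues{
            Protocol:      sas.ProtocolHTTPS,
            StartTime:     start,
            ExpiryTime:    expiry,
            Permissions:   to.Ptr(sas.BlobPermissions{Read: true}).String(),
            ContainerName: container,
            BlobName:      blob,
        }.SignWithUserDelegation(udc)
        if err != nil {
            return "", fmt.Errorf("sign SAS: %w", err)
        }

        return fmt.Sprintf("%s%s/%s?%s", svc.URL(), container, blob, params.Encode()), nil
    }
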
Fixes #8662
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.
This adds EncodeDot to the encoding which encodes "." and ".." names.
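For illustration, the flag behaves roughly like this via lib/encoder
(a sketch; the fullwidth replacements are the encoder's usual
convention):

    package main

    import (
        "fmt"

        "github.com/rclone/rclone/lib/encoder"
    )

    func main() {
        const enc = encoder.EncodeDot

        // Only the whole names "." and ".." are rewritten (to fullwidth
        // equivalents), and Decode reverses the mapping on the way back,
        // so these names round trip safely through the backend.
        fmt.Println(enc.Encode("."))   // ．
        fmt.Println(enc.Encode(".."))  // ．．
        fmt.Println(enc.Encode("a.b")) // a.b - dots inside names are left alone
    }
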
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.
This is problematic for several reasons:
1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets
   uploaded to the wrong place before the redirect is seen
This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.
It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.
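A sketch of the probe with plain net/http (helper names are
illustrative; rclone's real code goes through its rest client):

    package linkboxsketch

    import (
        "context"
        "io"
        "net/http"
    )

    // resolvePutURL probes with a bodyless HEAD so any 302 is followed
    // cheaply; the final URL is then used for the real PUT, so the body
    // is never uploaded to the wrong place.
    func resolvePutURL(ctx context.Context, client *http.Client, url string) (string, error) {
        req, err := http.NewRequestWithContext(ctx, http.MethodHead, url, nil)
        if err != nil {
            return "", err
        }
        resp, err := client.Do(req) // http.Client follows redirects for HEAD
        if err != nil {
            return "", err
        }
        resp.Body.Close()
        // resp.Request.URL is the URL after any redirects. If Linkbox stops
        // redirecting, it is simply the original URL and the PUT still
        // works, just with one extra round trip.
        return resp.Request.URL.String(), nil
    }

    func put(ctx context.Context, client *http.Client, url string, body io.Reader) (*http.Response, error) {
        target, err := resolvePutURL(ctx, client, url)
        if err != nil {
            return nil, err
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodPut, target, body)
        if err != nil {
            return nil, err
        }
        return client.Do(req)
    }
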
See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
This removes:
- TestCompressSwift - never finishes - too slow - we have TestCompressS3 instead
- TestCryptSwift - never finishes - too slow - we have TestCryptS3 instead
- TestChunkerChunk50bBox - often times out - covered by other tests
This deadlock occurred whenever there were more than 100 files in the source due
to the output channel filling up.
The fix is not to use list.NewSorter but take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.
In order not to cause another deadlock, we need to send nil dst
objects. This is safe since the termination conditions for the
channels are adjusted to expect them.
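The pattern, reduced to a generic sketch with made-up types, is to
emit exactly one pair per src object in src order, with a nil dst
where nothing matches, instead of buffering and re-sorting:

    package sketch

    type Object interface{ Name() string }

    type Pair struct{ Src, Dst Object }

    // matchPairs assumes srcs is delivered sorted. It sends exactly one
    // Pair per src, in delivery order, with Dst == nil when there is no
    // matching destination object. Because the receiver gets one Pair per
    // src, the nil entries keep the channel termination conditions intact
    // and no sort buffer (which could fill up and deadlock) is needed.
    func matchPairs(srcs <-chan Object, lookupDst func(name string) Object, out chan<- Pair) {
        defer close(out)
        for src := range srcs {
            out <- Pair{Src: src, Dst: lookupDst(src.Name())} // Dst may be nil
        }
    }
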
Thanks to @jeremy for the test script the Go tests are based on.
This commit improves error handling in two specific scenarios:
* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength
* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid". These errors are non-recoverable, so retrying is
  futile (a sketch follows this list).
This commit is a significant rewrite of PikPak's upload path, handling
file uploads directly rather than relying on the generic S3 manager.
The primary motivation is to address critical upload failures reported
in #8629. A condensed sketch of the multipart flow follows the list
below.
* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
validation).
* Adjusted default `upload_concurrency` from 5 to 4.
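As referenced above, a condensed sketch of the direct multipart flow
with the AWS S3 SDK. Names and error handling are simplified
assumptions; the real code also handles retries, concurrency and
seekable part bodies:

    package pikpaksketch

    import (
        "context"
        "io"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
        "github.com/aws/aws-sdk-go-v2/service/s3/types"
    )

    // uploadMultipart drives the S3 multipart calls directly instead of
    // going through the S3 upload manager. chunkSize is assumed to have
    // been validated against the configured min/max already.
    func uploadMultipart(ctx context.Context, client *s3.Client, bucket, key string, in io.Reader, size, chunkSize int64) error {
        create, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
        })
        if err != nil {
            return err
        }

        var parts []types.CompletedPart
        for partNum, off := int32(1), int64(0); off < size; partNum, off = partNum+1, off+chunkSize {
            n := size - off
            if n > chunkSize {
                n = chunkSize
            }
            up, err := client.UploadPart(ctx, &s3.UploadPartInput{
                Bucket:     aws.String(bucket),
                Key:        aws.String(key),
                UploadId:   create.UploadId,
                PartNumber: aws.Int32(partNum),
                Body:       io.LimitReader(in, n),
            })
            if err != nil {
                // Abort so the partial upload doesn't linger server side.
                _, _ = client.AbortMultipartUpload(ctx, &s3.AbortMultipartUploadInput{
                    Bucket: aws.String(bucket), Key: aws.String(key), UploadId: create.UploadId,
                })
                return err
            }
            parts = append(parts, types.CompletedPart{ETag: up.ETag, PartNumber: aws.Int32(partNum)})
        }

        _, err = client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
            Bucket:          aws.String(bucket),
            Key:             aws.String(key),
            UploadId:        create.UploadId,
            MultipartUpload: &types.CompletedMultipartUpload{Parts: parts},
        })
        return err
    }
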
In this commit, the source of the modtime was accidentally changed to
the wrong object:
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This reverts that change and fixes the integration tests.
This adds _connect_delay=5s, which allows the server to start up
properly. It also makes sure the server stores its config in /tmp
rather than in the current working directory.
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return the following when wrapped in a
fmt.Errorf statement:
error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))
Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
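A minimal sketch of the fix. The type layout and the default message
are assumptions, not rclone's exact code:

    package pacersketch

    import (
        "errors"
        "time"
    )

    type retryAfterError struct {
        err        error
        retryAfter time.Duration
    }

    func (r *retryAfterError) Error() string { return r.err.Error() }

    // RetryAfterError substitutes a sensible default when err is nil, so
    // wrapping the result with fmt.Errorf can never panic in Error().
    func RetryAfterError(err error, retryAfter time.Duration) error {
        if err == nil {
            err = errors.New("too many requests") // assumed default message
        }
        return &retryAfterError{err: err, retryAfter: retryAfter}
    }
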
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
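To see why the difference matters, a small illustration using
golang.org/x/text:

    package main

    import (
        "fmt"

        "golang.org/x/text/unicode/norm"
    )

    func main() {
        nfc := norm.NFC.String("é") // single precomposed rune U+00E9
        nfd := norm.NFD.String(nfc) // "e" + combining acute U+0301

        // Different byte sequences, but a normalization-insensitive backend
        // treats them as the same name, so a direct move is a no-op. Like a
        // case-only rename on a case-insensitive backend, it needs a
        // two-step move via a temporary name, which is what the extended
        // needsMoveCaseInsensitive logic arranges.
        fmt.Println(nfc == nfd)         // false
        fmt.Println(len(nfc), len(nfd)) // 2 3
    }
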
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).
This change also adds support for _keep_extension when using truncate and
truncate_bytes.
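The byte counting has to avoid splitting a multi-byte UTF-8 sequence;
a sketch of the idea (not the actual implementation):

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    // truncateBytes cuts s to at most n bytes, backing up so it never
    // splits a multi-byte UTF-8 sequence in half.
    func truncateBytes(s string, n int) string {
        if len(s) <= n {
            return s
        }
        for n > 0 && !utf8.RuneStart(s[n]) {
            n--
        }
        return s[:n]
    }

    func main() {
        // "é" is 2 bytes in UTF-8, so cutting at byte 2 would split it.
        fmt.Println(truncateBytes("héllo", 2)) // h
        fmt.Println(truncateBytes("héllo", 3)) // hé
    }
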
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run).
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
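Conceptually the guard is trivial; a sketch with made-up names:

    package main

    import "fmt"

    // applyDirMetadata is a sketch with assumed names: when src and dst
    // resolve to the same directory name, there is nothing to rename, so
    // return early rather than copying the directory's metadata to itself.
    func applyDirMetadata(srcName, dstName string) error {
        if srcName == dstName {
            return nil // same directory - leave it alone
        }
        fmt.Printf("renaming %q -> %q and copying metadata\n", srcName, dstName)
        return nil
    }

    func main() {
        _ = applyDirMetadata("photos", "photos") // no-op
        _ = applyDirMetadata("photos", "Photos") // a real rename
    }
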
In
b1d774c2e3 combine: implement ListP interface
we introduced the ListP interface to the combine backend, but it was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.