The JSON response of the directory move API call has changed from
returning the TaskID as a string to returning it as an integer. In
other places it is still returned as a string.
This patch allows the TaskID to be either an integer or a string in
the JSON response and keeps it internally as a string, as before.
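A minimal sketch of how such a dual-form field can be decoded in Go,
assuming a custom json.Unmarshaler on a TaskID type (the type and
package names here are illustrative, not the exact backend code):

    package api

    import (
        "encoding/json"
        "fmt"
        "strconv"
    )

    // TaskID accepts either a JSON string or a JSON number and keeps
    // the value internally as a string, as before.
    type TaskID string

    // UnmarshalJSON implements json.Unmarshaler for both forms.
    func (t *TaskID) UnmarshalJSON(data []byte) error {
        // Try the old string form first.
        var s string
        if err := json.Unmarshal(data, &s); err == nil {
            *t = TaskID(s)
            return nil
        }
        // Fall back to the new integer form.
        var n int64
        if err := json.Unmarshal(data, &n); err == nil {
            *t = TaskID(strconv.FormatInt(n, 10))
            return nil
        }
        return fmt.Errorf("TaskID: unexpected JSON value %q", data)
    }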
The test fails because it expects that a copy with MaxTransfer and
CutoffModeHard returns a fatal error, as raised by accounting
(ErrorMaxTransferLimitReachedFatal). However, in the case of Google
Drive the external google API library catches it and replaces it with
a non-fatal error:
pw.CloseWithError(fmt.Errorf("googleapi: Copy failed: %v", err))
(7290f25351/internal/gensupport/media.go (L140))
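Because the quoted wrapping uses fmt.Errorf with %v rather than %w,
the original error is flattened into text and its fatal classification
is lost. A self-contained illustration of the difference, using a
local sentinel error instead of the accounting package:

    package main

    import (
        "errors"
        "fmt"
    )

    var errFatal = errors.New("max transfer limit reached, fatal error")

    func main() {
        // %v formats the error as text, discarding the original value,
        // so callers can no longer detect it with errors.Is.
        wrappedV := fmt.Errorf("googleapi: Copy failed: %v", errFatal)
        fmt.Println(errors.Is(wrappedV, errFatal)) // false

        // %w would preserve the chain and keep the classification.
        wrappedW := fmt.Errorf("googleapi: Copy failed: %w", errFatal)
        fmt.Println(errors.Is(wrappedW, errFatal)) // true
    }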
The test TestIntegration/FsMkdir/FsPutFiles/FromRoot/ListR fails in
the integration test because there is a broken bucket in the test
account which support haven't been able to remove.
This tries to fix the integration tests by allowing only one
premiumizeme test to run at a time, in the hope that this will stop
rclone hitting the rate limits and breaking the tests.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard onedrive account; it returns:
accessDenied: accountUpgradeRequired: Account Upgrade is required for this operation.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard dropbox account, only on an enterprise account, because the
test sets expiry dates.
See: #5734
After testing concurrent calls to `kv.Start` and `db.Stop` I had to put
more parts of them under the mutex to make the results deterministic
without Sleeps in the test body. This is safer but can block Start for
up to 2 seconds because of `db.open`.
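A minimal sketch of the resulting pattern, with illustrative names
rather than the real kv package API: one mutex held for the whole of
Start and Stop, so overlapping calls serialise, at the cost of Start
blocking while open runs:

    package kv // sketch only: illustrative, not the real API

    import (
        "sync"
        "time"
    )

    // DB is a minimal stand-in for the key-value database handle.
    type DB struct {
        mu      sync.Mutex
        running bool
    }

    // open stands in for the slow underlying open, which can take up
    // to ~2 seconds in the real code.
    func (db *DB) open() error {
        time.Sleep(10 * time.Millisecond)
        return nil
    }

    // Start opens the database. The mutex is held for the whole call,
    // so a concurrent Start or Stop blocks while open runs, but the
    // outcome is deterministic.
    func (db *DB) Start() error {
        db.mu.Lock()
        defer db.mu.Unlock()
        if db.running {
            return nil
        }
        if err := db.open(); err != nil {
            return err
        }
        db.running = true
        return nil
    }

    // Stop closes the database under the same mutex, so Start and Stop
    // cannot interleave.
    func (db *DB) Stop() error {
        db.mu.Lock()
        defer db.mu.Unlock()
        db.running = false
        return nil
    }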
Before this change we checked that features.ReadMimeType was set if
and only if the Object.MimeType method was implemented.
However this check is stricter than necessary - we don't care if
Objects implement MimeType when features.ReadMimeType isn't set,
provided that they always return an empty string (which is what a
wrapping backend might do).
This patch implements that logic.
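A sketch of what the relaxed check could look like (assumed logic and
wiring, not the exact fstests code):

    package fstestsketch

    import (
        "context"
        "testing"

        "github.com/rclone/rclone/fs"
        "github.com/stretchr/testify/assert"
    )

    // checkMimeType is a sketch of the relaxed check: the MimeType
    // method may exist even when the feature flag is unset, as long as
    // it only ever returns "".
    func checkMimeType(ctx context.Context, t *testing.T, f fs.Fs, obj fs.Object) {
        mimeTyper, hasMimeType := obj.(fs.MimeTyper)
        if f.Features().ReadMimeType {
            assert.True(t, hasMimeType, "Features.ReadMimeType is set but Object.MimeType is not implemented")
            return
        }
        if hasMimeType {
            // A wrapping backend may implement the method but have
            // nothing to report - fine as long as it returns "".
            assert.Equal(t, "", mimeTyper.MimeType(ctx))
        }
    }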
Before this change the backoff for the error_background error was 6
seconds. This means that if it wasn't resolved in 60 seconds (with the
default 10 low level retries) then an error was reported.
This error was being reported frequently in the integration tests, so
is likely affecting real users too.
This patch changes the backoff to an exponential backoff of
1, 2, 4, 8, ..., 1024 seconds to make sure we wait long enough for the
background operation to complete.
See #5734
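A minimal sketch of the delay schedule described above (assumed shape,
not the exact backend code):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the wait before low level retry number try
    // (0-based): 1, 2, 4, 8, ... seconds, capped at 1024 seconds.
    func backoff(try int) time.Duration {
        const maxShift = 10 // 1 << 10 = 1024
        if try > maxShift {
            try = maxShift
        }
        return time.Duration(1<<uint(try)) * time.Second
    }

    func main() {
        for try := 0; try < 11; try++ {
            fmt.Println(try, backoff(try)) // 1s, 2s, 4s, ..., 17m4s
        }
    }

With the default 10 low level retries this waits a total of roughly
17 minutes rather than 60 seconds before giving up.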
- set up correct path encoding (fixes backend test FsEncoding)
- ignore the range option if the file is empty (fixes VFS test TestFileReadAtZeroLength)
- clean up stray files left after a failed upload (fixes test FsPutError)
- rebase code on master, adapt backend for rclone context passing
- translate Siad errors to rclone native FS errors in the sia errorHandler (see the sketch after this list)
- TestSia: return proper backend options from the script
- TestSia: use an up-to-date AntFarm image; nebulouslabs/siaantfarm is stale
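As an illustration of the error translation item above, a sketch of
the general pattern - the status codes and message fragments are
assumptions for illustration, not the exact strings siad returns:

    package siasketch

    import (
        "fmt"
        "net/http"
        "strings"

        "github.com/rclone/rclone/fs"
    )

    // translateError turns a Siad API error into one of rclone's
    // native FS errors so the rest of rclone can react to it.
    func translateError(status int, message string) error {
        switch {
        case status == http.StatusNotFound && strings.Contains(message, "no file known"):
            return fs.ErrorObjectNotFound
        case status == http.StatusNotFound && strings.Contains(message, "path does not exist"):
            return fs.ErrorDirNotFound
        case status == http.StatusNotFound:
            return fs.ErrorObjectNotFound
        default:
            return fmt.Errorf("siad: %s (status %d)", message, status)
        }
    }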
In
05f128868f azureblob: add --azureblob-no-head-object
we incorrectly parsed the size of the object from the Content-Length
header of the response. This is incorrect in the presence of Range
requests.
This fixes the problem by parsing the Content-Range header, if
available, to read the correct length when a Range request was issued.
See: #5734
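A minimal sketch of reading the full object size in that case (assumed
logic, not the exact azureblob code): prefer the total after the "/" in
Content-Range, e.g. "bytes 0-1023/146515", and fall back to
Content-Length otherwise.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // objectSize returns the total size of the object from the
    // response headers. If a Range request was issued, Content-Length
    // only covers the returned range, so the total after the "/" in
    // Content-Range must be used instead.
    func objectSize(contentLength int64, contentRange string) (int64, error) {
        if contentRange == "" {
            return contentLength, nil
        }
        // Content-Range looks like: "bytes 0-1023/146515"
        slash := strings.LastIndex(contentRange, "/")
        if slash < 0 {
            return 0, fmt.Errorf("unexpected Content-Range %q", contentRange)
        }
        total := contentRange[slash+1:]
        if total == "*" { // total size unknown - fall back
            return contentLength, nil
        }
        return strconv.ParseInt(total, 10, 64)
    }

    func main() {
        size, _ := objectSize(1024, "bytes 0-1023/146515")
        fmt.Println(size) // 146515
    }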
This reverts commit
dc06973796 Revert "s3: use rclone's low level retries instead of AWS SDK to fix listing retries"
Which in turn reverted
5470d34740 "backend/s3: use low-level-retries as the number of SDK retries"
So we are back where we started.
It then modifies the code to give the AWS SDK `--low-level-retries`
retries, but sets rclone's own retries to 2 so that directory listings
can be retried.
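A sketch of that retry split, assuming the aws-sdk-go v1 configuration
API (the function and constant names are illustrative, not the exact
backend code):

    package s3sketch

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    // rcloneRetries is the number of retries rclone does on top of the
    // SDK, e.g. to retry a whole directory listing.
    const rcloneRetries = 2

    // newSession configures the AWS SDK to perform lowLevelRetries
    // retries internally (taken from --low-level-retries).
    func newSession(lowLevelRetries int) (*session.Session, error) {
        awsConfig := aws.NewConfig().WithMaxRetries(lowLevelRetries)
        return session.NewSession(awsConfig)
    }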