Google Cloud Storage doesn't normally need retries; however, certain
operations (eg bucket creation and removal) are rate limited and do
generate 429 errors.
Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.
After this change we retry all operations at a low level, using the
same exponential backoff strategy as used in the Google Drive backend.
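As a rough sketch of the approach (not rclone's actual pacer code, and
with made-up retry counts and delays), the low level retry looks
something like this:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// withRetries calls fn, retrying when it reports HTTP 429 (rate limited)
// and sleeping with exponential backoff between attempts.
func withRetries(fn func() (status int, err error)) error {
	sleep := 10 * time.Millisecond
	for try := 0; try < 10; try++ {
		status, err := fn()
		if status != http.StatusTooManyRequests {
			return err
		}
		time.Sleep(sleep)
		sleep *= 2 // double the wait each time, as in the drive backend
	}
	return errors.New("too many retries")
}

func main() {
	err := withRetries(func() (int, error) {
		resp, err := http.Get("https://storage.googleapis.com/storage/v1/b")
		if err != nil {
			return 0, err
		}
		defer resp.Body.Close()
		return resp.StatusCode, nil
	})
	fmt.Println(err)
}
```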
Before this fix, team drives would return the drive quota, which is
incorrect and misleading.
Team drives don't appear to have an API for reading the bytes used or
the quota, so we now return that the quota and usage are unknown.
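A minimal runnable sketch of that behaviour, using stand-ins for
rclone's Fs and usage types (field names assumed), looks like this:

```go
package main

import "fmt"

// Usage is a stand-in for rclone's usage structure; nil means "unknown".
type Usage struct {
	Total *int64 // total bytes available, nil if unknown
	Used  *int64 // bytes in use, nil if unknown
}

// Fs is a stand-in for the drive backend's Fs.
type Fs struct {
	isTeamDrive  bool
	quota, usage int64
}

// About returns usage information. Team drives have no API for the quota
// or the bytes used, so all fields are left nil (unknown).
func (f *Fs) About() (*Usage, error) {
	if f.isTeamDrive {
		return &Usage{}, nil
	}
	return &Usage{Total: &f.quota, Used: &f.usage}, nil
}

func main() {
	f := &Fs{isTeamDrive: true}
	u, _ := f.About()
	fmt.Printf("quota known: %v, usage known: %v\n", u.Total != nil, u.Used != nil)
}
```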
If md5sum/sha1sum fails, we log what it output on stderr at debug
level and return an empty hash, indicating we don't have a hash,
rather than hash.ErrUnsupported, indicating that we don't support this
hash type.
This fixes lots of ERROR messages for sftp on a Synology NAS which,
while it supports md5sum, uses different paths over SFTP and SSH, so
md5sum doesn't work.
We also stop disabling md5sum/sha1sum on errors, since Hashes is
typically only checked at the start of a sync run and isn't expected
to change dynamically.
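The shape of the fix, sketched with os/exec standing in for the real
SSH session (function name and details are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// remoteMd5 tries to read an md5 by running an external md5sum. If the
// command fails we log its stderr at debug level and return "" meaning
// "no hash available", not an error meaning "hash type unsupported".
func remoteMd5(path string) (string, error) {
	cmd := exec.Command("md5sum", path)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("DEBUG: md5sum failed: %v (stderr: %q)\n", err, stderr.String())
		return "", nil // hash unknown, but the hash type is still supported
	}
	// md5sum prints "<hash>  <path>"; keep just the hash.
	fields := strings.Fields(stdout.String())
	if len(fields) == 0 {
		return "", nil
	}
	return fields[0], nil
}

func main() {
	hash, err := remoteMd5("/etc/hosts")
	fmt.Println(hash, err)
}
```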
This enables the use of the SharePoint webdav endpoint provided by
OneDrive for Business or Office365 Education Accounts. It allows
unverified accounts to be accessed with rclone via webdav, which isn't
possible through the normal onedrive backend.
This integrates the https://github.com/hensur/onedrive-cookie-test
package to fetch the required cookies to authorize against the
SharePoint webdav endpoint.
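For illustration only (the cookie names and URL below are assumptions,
not taken from that package), the fetched cookies can be carried on
every subsequent webdav request with a cookie jar:

```go
package main

import (
	"net/http"
	"net/http/cookiejar"
	"net/url"
)

// newClient returns an http.Client which sends the given cookies with
// every request to the SharePoint endpoint.
func newClient(endpoint string, cookies []*http.Cookie) (*http.Client, error) {
	jar, err := cookiejar.New(nil)
	if err != nil {
		return nil, err
	}
	u, err := url.Parse(endpoint)
	if err != nil {
		return nil, err
	}
	jar.SetCookies(u, cookies)
	return &http.Client{Jar: jar}, nil
}

func main() {
	// FedAuth/rtFa are typical SharePoint auth cookie names; values elided.
	client, err := newClient("https://example.sharepoint.com/", []*http.Cookie{
		{Name: "FedAuth", Value: "..."},
		{Name: "rtFa", Value: "..."},
	})
	_, _ = client, err
}
```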
Very large directories can have their sizes returned as floating point
numbers, eg `1.0034576985781e+14` from the box API.
Before this change this would fail to parse as an int64.
This change parses the size as a float64 instead, which is perfectly
accurate for sizes up to 2**53, which is about 9 PB.
It is unknown whether box themselves use a float64 as an intermediate
representation in the API or not - it seems likely.
Fixes #2261
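A minimal sketch of the parse described above (JSON field name
assumed):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// item decodes the size as a float64 so that values in scientific
// notation, eg 1.0034576985781e+14, parse correctly.
type item struct {
	Size float64 `json:"size"`
}

func main() {
	var it item
	if err := json.Unmarshal([]byte(`{"size": 1.0034576985781e+14}`), &it); err != nil {
		panic(err)
	}
	size := int64(it.Size) // exact for all integers up to 2**53
	fmt.Println(size)      // 100345769857810
}
```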
* Implement about for:
* local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
* Implement `--json` and `--full` flags for `rclone about`
* Change the About interface to return a Usage structure (sketched below)
* Remove operations.About as it is too thin an interface
* Implement an integration test
Relates to #1138 and #1564
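For example (output shape illustrative; the JSON keys are assumed to
follow the Usage structure's fields):

```
$ rclone about remote:
Total:   17G
Used:    7.444G
Free:    9.556G

$ rclone about remote: --json
{
	"total": 18253611008,
	"used": 7992740250,
	"free": 10260870758
}
```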
This bug was introduced by the v3 API conversion in 07f20dd1fd.
The problem was that dircache.FindPath doesn't work for the root directory.
This adds an internal error for dircache.FindPath being called with
the root directory. That produces a failing test, which the fix to the
drive backend makes pass.
This also improves the DirCache integration test.
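A stand-in sketch of the guard (rclone's real dircache is more
involved):

```go
package main

import (
	"errors"
	"fmt"
	"path"
	"strings"
)

// FindPath splits remote into a parent directory and a leaf and looks the
// parent up in a cache of directory IDs. That decomposition is meaningless
// for the root directory, so the root is rejected with an internal error.
func FindPath(ids map[string]string, remote string) (leaf, directoryID string, err error) {
	if remote == "" {
		return "", "", errors.New("internal error: can't call FindPath on root directory")
	}
	dir, leaf := path.Split(remote)
	dir = strings.TrimSuffix(dir, "/")
	directoryID, ok := ids[dir]
	if !ok {
		return "", "", fmt.Errorf("couldn't find parent directory %q", dir)
	}
	return leaf, directoryID, nil
}

func main() {
	ids := map[string]string{"": "root-id", "dir": "dir-id"}
	fmt.Println(FindPath(ids, "dir/file.txt")) // file.txt dir-id <nil>
	_, _, err := FindPath(ids, "")
	fmt.Println(err)
}
```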
These are AWS, Ceph, Dreamhost, IBM COS S3, Minio, Wasabi and Other.
This configures endpoints where known and, where possible, makes sure
config options don't appear where they aren't valid.
This introduces a method of making provider specific configuration
within a remote. This is particularly useful in s3.
This commit does the basic configuration in S3 for IBM COS.
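As an illustration, the resulting remote config might look like this
(the endpoint value is a made-up example rather than a definitive IBM
COS endpoint):

```
[ibmcos]
type = s3
provider = IBMCOS
access_key_id = XXX
secret_access_key = YYY
endpoint = s3-api.us-geo.objectstorage.softlayer.net
```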
Before this change we lowercased the dropbox root directory. This was
likely a leftover from when we used to build a dictionary to translate
the cases of dropbox files. Now with the v2 API we can rely on
dropbox to do that for us, so we no longer need to lowercase the root.
This fixes issues using crypt with name obfuscation on dropbox.
Before this change, asynchronous closes in cmount could cause sharing
violations on Remove under Windows, which frequently manifested as
test failures.
This change lets the Remove be retried on a sharing violation under
Windows.
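A sketch of the retry (the constants and helper are assumed, not
rclone's exact code):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
	"time"
)

// removeWithRetries removes path, retrying for a while if Windows reports
// a sharing violation caused by an in-flight asynchronous close.
func removeWithRetries(path string) error {
	var err error
	for try := 0; try < 10; try++ {
		err = os.Remove(path)
		if err == nil || !isSharingViolation(err) {
			return err
		}
		time.Sleep(100 * time.Millisecond)
	}
	return err
}

// isSharingViolation reports whether err wraps Windows'
// ERROR_SHARING_VIOLATION (errno 32).
func isSharingViolation(err error) bool {
	var errno syscall.Errno
	return errors.As(err, &errno) && errno == 32
}

func main() {
	fmt.Println(removeWithRetries("testfile.txt"))
}
```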