Go can't redirect PROPFIND requests properly; it changes the method to
GET. So we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.
This works around QNAP redirecting requests for directories without a
trailing /.
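A minimal sketch of the technique in Go's standard library (illustrative, not the actual rclone code; the surrounding PROPFIND handling is assumed):

    import "net/http"

    // newNoRedirectClient returns an http.Client which does not follow
    // redirects. Returning ErrUseLastResponse makes Do() hand back the
    // 3xx response itself, so the caller can treat it as "not found"
    // instead of letting Go retry the PROPFIND as a GET.
    func newNoRedirectClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
    }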
Previously this was reading a stale hash from the object, leading to
broken integration tests.
This fixes these integration tests: TestSyncDoesntUpdateModtime,
TestSyncAfterChangingFilesSizeOnly, TestSyncAfterChangingContentsOnly,
TestSyncWithUpdateOlder and TestSyncUTFNorm.
Now that https://issuetracker.google.com/issues/64468406 has been
fixed, we can remove part of the workaround which fixed #1675
(019adc35609c2136).
This will make queries marginally more efficient. We still need the
other part of the workaround since the `=` operator is case
insensitive.
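The part that stays can be sketched like this (a hypothetical helper using the drive v3 Go client, not the actual rclone code): because `=` matches case insensitively, the listing results still need an exact-match filter.

    import (
        "fmt"

        drive "google.golang.org/api/drive/v3"
    )

    // findExact looks up files by exact name. The Drive "=" operator is
    // case insensitive, so results are post-filtered case sensitively.
    func findExact(srv *drive.Service, name string) ([]*drive.File, error) {
        // Note: quotes and backslashes in name would need escaping here.
        q := fmt.Sprintf("name = '%s' and trashed = false", name)
        r, err := srv.Files.List().Q(q).Fields("files(id, name)").Do()
        if err != nil {
            return nil, err
        }
        var out []*drive.File
        for _, f := range r.Files {
            if f.Name == name { // exact, case sensitive match
                out = append(out, f)
            }
        }
        return out, nil
    }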
Before this commit rclone's handling of symlinks and junction points
under Windows was broken. rclone treated them as files and attempted
to transfer them, which gave the error "The handle is invalid".
Ultimately the cause of this was 3e43ff7414, a workaround so that
files with reparse points (which are a kind of symlink) would transfer
correctly.
The solution implemented is to revert the above commit, which means
that #614 will break again. However there is now a work-around (which
will be signaled by rclone) to use the -L flag, which wasn't available
when the original commit was made.
Fixes #2336
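A sketch of the -L decision in Go (illustrative, not the actual rclone traversal code; note that Go surfaces Windows reparse points such as junctions through os.ModeSymlink):

    import (
        "log"
        "os"
    )

    // shouldTransfer reports whether path is safe to open and copy.
    // Symlinks (and junction points, which appear as symlinks) are
    // skipped with a hint to use -L, rather than opened as files.
    func shouldTransfer(path string) (bool, error) {
        fi, err := os.Lstat(path) // Lstat does not follow links
        if err != nil {
            return false, err
        }
        if fi.Mode()&os.ModeSymlink != 0 {
            log.Printf("skipping symlink %q - use -L to follow it", path)
            return false, nil
        }
        return fi.Mode().IsRegular() || fi.IsDir(), nil
    }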
In the drive v3 conversion we forgot the IncludeTeamDriveItems
parameter when calling the changes API. Adding it fixes the changes
polling with team drives.
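Roughly what the call looks like with the drive v3 Go client (the field selection here is illustrative):

    import drive "google.golang.org/api/drive/v3"

    // listChanges polls changes including team drive items. Without
    // IncludeTeamDriveItems(true) changes on team drives are omitted.
    func listChanges(srv *drive.Service, pageToken string) (*drive.ChangeList, error) {
        return srv.Changes.List(pageToken).
            SupportsTeamDrives(true).
            IncludeTeamDriveItems(true).
            Fields("nextPageToken, newStartPageToken, changes(fileId)").
            Do()
    }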
This implementation should hopefully handle all error responses from
the onedrive for business authentication.
I have only tested it with the "domain in unmanaged state" error.
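The error bodies follow the usual OAuth2 shape, so parsing them might look like this (a sketch; the field names are the standard OAuth2 ones, not taken from this commit):

    import (
        "encoding/json"
        "fmt"
        "io"
    )

    // authError is the standard OAuth2 error body.
    type authError struct {
        Error            string `json:"error"`
        ErrorDescription string `json:"error_description"`
    }

    // parseAuthError decodes an error response so messages like
    // "domain in unmanaged state" can be shown to the user.
    func parseAuthError(body io.Reader) error {
        var e authError
        if err := json.NewDecoder(body).Decode(&e); err != nil {
            return err
        }
        return fmt.Errorf("auth error %q: %s", e.Error, e.ErrorDescription)
    }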
This was caused by using the sftp.File.Read method, which resets the
streaming window after each call. Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem, bringing the speed up to the same as
the sftp binary.
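A sketch of the technique with github.com/pkg/sftp (not the actual rclone code): WriteTo drives the download with concurrent requests, and the pipe turns it back into a reader.

    import (
        "io"

        "github.com/pkg/sftp"
    )

    // openStream adapts f.WriteTo to an io.ReadCloser. WriteTo keeps
    // the streaming window full, unlike repeated f.Read calls.
    func openStream(f *sftp.File) io.ReadCloser {
        pr, pw := io.Pipe()
        go func() {
            _, err := f.WriteTo(pw)
            pw.CloseWithError(err) // propagate errors (or EOF) to the reader
        }()
        return pr
    }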
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if it is different.
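The general technique, sketched (a hypothetical helper; the hash used in practice depends on the remote, MD5 is assumed here):

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
    )

    // verifyUpload hashes the encrypted stream as it is uploaded and
    // compares the result with the checksum the remote reports.
    func verifyUpload(in io.Reader, upload func(io.Reader) (remoteMD5 string, err error)) error {
        h := md5.New()
        remoteMD5, err := upload(io.TeeReader(in, h))
        if err != nil {
            return err
        }
        localMD5 := hex.EncodeToString(h.Sum(nil))
        if remoteMD5 != "" && remoteMD5 != localMD5 {
            return fmt.Errorf("corrupted on transfer: md5 differs %q vs %q", localMD5, remoteMD5)
        }
        return nil
    }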
* Fix errcheck and golint warnings
* Remove unused constants and fix comments
* Parse error responses properly
* Fix Open with RangeOption
* Fix Move, Copy and DirMove
* Implement DirCacheFlush
* Check interfaces are correct
* Remove debugs and update overview
* Correct feature flags
* Pare replacement characters down to the minimum set
* Add to the integration tests
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
* Update file size after upload
* Add Open seek
* Set private permission for new folder and uploaded file
* Add docs
* Update List function
* Fix UserSessionInfo struct
* Fix socket leaks
* Don’t close resp.Body in Open method
* Get hash when listing files
The official drive APIs seem to have trouble downloading large
documents sometimes.
This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
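The alternate URLs are of roughly this shape (illustrative only; the exact mapping lives in the drive backend):

    // Illustrative: unofficial per-type export URL templates, filled
    // in with the document ID and the export format.
    var alternateExportURL = map[string]string{
        "application/vnd.google-apps.document":     "https://docs.google.com/document/d/%s/export?format=%s",
        "application/vnd.google-apps.spreadsheet":  "https://docs.google.com/spreadsheets/d/%s/export?format=%s",
        "application/vnd.google-apps.presentation": "https://docs.google.com/presentation/d/%s/export/%s",
    }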
Google cloud storage doesn't normally need retries; however certain
operations (eg bucket creation and removal) are rate limited and do
generate 429 errors.
Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.
After this change we retry all operations at a low level using the
same exponential backoff strategy as the google drive backend.
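A sketch of the retry shape (the parameters are illustrative; rclone's real pacer differs):

    import (
        "net/http"
        "time"
    )

    // withRetries retries low level errors and rate limit (429)
    // responses with exponential backoff, up to maxTries attempts.
    func withRetries(do func() (*http.Response, error)) (*http.Response, error) {
        const maxTries = 10
        sleep := 100 * time.Millisecond
        for try := 1; ; try++ {
            resp, err := do()
            if err == nil && resp.StatusCode != 429 {
                return resp, nil
            }
            if try >= maxTries {
                return resp, err
            }
            if resp != nil {
                resp.Body.Close() // don't leak the body before retrying
            }
            time.Sleep(sleep)
            sleep *= 2 // exponential backoff
        }
    }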
Before this fix team drives would return the drive quota, which is
incorrect and misleading.
Team drives don't appear to have an API for reading the bytes used or
the quota, so we now return that the quota and usage are unknown.
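Sketched with hypothetical types (nil meaning unknown mirrors how rclone's about support reports missing values):

    // Usage is a hypothetical stand-in for rclone's usage type, where
    // a nil field means the value is not known.
    type Usage struct {
        Total *int64 // quota in bytes, nil if unknown
        Used  *int64 // bytes in use, nil if unknown
    }

    // about reports usage. Team drives have no quota API, so all
    // fields are left nil and read as unknown.
    func about(isTeamDrive bool, quota, used int64) *Usage {
        if isTeamDrive {
            return &Usage{}
        }
        return &Usage{Total: &quota, Used: &used}
    }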