This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.
This turned out to be a misunderstanding of the rest docs, so:
- improved the rest.Call docs
- fixed the misunderstanding in the google photos backend
- fixed a similar misunderstanding in the onedrive backend
Before this change, if you passed an io.ReadCloser to opt.Body then the
transaction would close it. This happens as part of http.NewRequest,
which documents that the io.Reader passed in will be upgraded to a
Closer if possible and closed as part of the Do call.
After this change, we wrap any io.ReadClosers to stop them being
upgraded. This means that they will never get closed and that the
caller should always close them.
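As a minimal sketch of the wrapping idea (the readerOnly and newRequest
names here are illustrative, not the actual rest package code):

```go
package main

import (
	"io"
	"net/http"
	"os"
)

// readerOnly hides the Close method of the wrapped stream, so the http
// machinery no longer sees an io.Closer and will not close it.
type readerOnly struct {
	io.Reader
}

func newRequest(method, url string, body io.Reader) (*http.Request, error) {
	// If the body is also a Closer, wrap it so it cannot be "upgraded"
	// and ownership of Close stays with the caller.
	if _, ok := body.(io.Closer); ok {
		body = readerOnly{Reader: body}
	}
	return http.NewRequest(method, url, body)
}

func main() {
	f, err := os.Open("file.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close() // the caller closes the body, not http.Client.Do

	req, err := newRequest("PUT", "https://example.com/upload", f)
	if err != nil {
		panic(err)
	}
	_, _ = http.DefaultClient.Do(req)
}
```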
This fixes a panic in the googlephotos integration tests.
This was factored from fstest as we were including the testing
environment in the main binary because of it.
This was causing browser opening to fail because of 8243ff8bc8.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
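As a sketch of the pattern (the Fs interface below is a simplified
stand-in, not the exact rclone/fs definition): every method takes a
context.Context as its first argument and the top level threads it down:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Fs is a simplified stand-in for a backend interface.
type Fs interface {
	List(ctx context.Context, dir string) ([]string, error)
}

type memFs struct{}

func (memFs) List(ctx context.Context, dir string) ([]string, error) {
	// Honour cancellation so a transfer can be stopped mid-flight.
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	default:
		return []string{"a.txt", "b.txt"}, nil
	}
}

func main() {
	// The top level creates the context and propagates it downwards.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	entries, err := Fs(memFs{}).List(ctx, "/")
	fmt.Println(entries, err)
}
```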
The documentation states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
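For reference, a multipart upload with a file attached looks roughly
like this with the standard library; this is a sketch of the behaviour
being described, not the actual CallJSON code:

```go
package main

import (
	"bytes"
	"io"
	"mime/multipart"
	"net/http"
	"strings"
)

// multipartUpload is an illustrative helper: params plays the role of
// opts.MultipartParams and body the role of opts.Body.
func multipartUpload(url string, params map[string]string, name string, body io.Reader) (*http.Response, error) {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)

	// The multipart params become ordinary form fields...
	for k, v := range params {
		if err := w.WriteField(k, v); err != nil {
			return nil, err
		}
	}

	// ...and the body is attached as a file part.
	part, err := w.CreateFormFile(name, "file.bin")
	if err != nil {
		return nil, err
	}
	if _, err := io.Copy(part, body); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}

	req, err := http.NewRequest("POST", url, &buf)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	return http.DefaultClient.Do(req)
}

func main() {
	_, _ = multipartUpload("https://example.com/upload",
		map[string]string{"title": "hello"}, "content", strings.NewReader("data"))
}
```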
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows moving features
that require the fs package, such as logging and custom errors, into
the fs package.
Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
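A hedged sketch of that shape; the Calculator and RetryAfterError names
come from the description above, but the signature and fields here are
illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// Calculator computes the next sleep interval from the previous one and
// whether the last call succeeded.
type Calculator interface {
	Calculate(lastSleep time.Duration, success bool) time.Duration
}

// RetryAfterError signals a server-requested retry time to the Calculator.
type RetryAfterError struct {
	Err        error
	RetryAfter time.Duration
}

func (e RetryAfterError) Error() string {
	return fmt.Sprintf("retry after %v: %v", e.RetryAfter, e.Err)
}

// doubler is a toy Calculator: double the sleep on failure, halve it on
// success, clamped between min and max.
type doubler struct{ min, max time.Duration }

func (d doubler) Calculate(lastSleep time.Duration, success bool) time.Duration {
	if success {
		lastSleep /= 2
	} else {
		lastSleep *= 2
	}
	if lastSleep < d.min {
		return d.min
	}
	if lastSleep > d.max {
		return d.max
	}
	return lastSleep
}

func main() {
	var c Calculator = doubler{min: 10 * time.Millisecond, max: time.Second}
	fmt.Println(c.Calculate(100*time.Millisecond, false)) // 200ms
}
```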
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.
Allowing burst is good for some backends (e.g. Google Drive).
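One common way to combine a minimum sleep with burst is a token bucket;
a sketch of the idea, not the pacer package's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// pacer allows up to `burst` immediate calls, then sustains a long-run
// rate of one call per minSleep.
type pacer struct {
	tokens chan struct{} // buffered: capacity = allowed burst
}

func newPacer(burst int, minSleep time.Duration) *pacer {
	p := &pacer{tokens: make(chan struct{}, burst)}
	// Start with a full bucket so the first `burst` calls go out at once.
	for i := 0; i < burst; i++ {
		p.tokens <- struct{}{}
	}
	// Refill one token every minSleep; drop tokens when the bucket is full.
	go func() {
		for range time.Tick(minSleep) {
			select {
			case p.tokens <- struct{}{}:
			default:
			}
		}
	}()
	return p
}

// wait blocks until the next call is allowed.
func (p *pacer) wait() { <-p.tokens }

func main() {
	p := newPacer(4, 100*time.Millisecond)
	start := time.Now()
	for i := 0; i < 6; i++ {
		p.wait()
		fmt.Printf("call %d at %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```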
Before this change, if ContentLength was set in the options but was 0,
then we would upload using chunked encoding. Fix this to always upload
with a "Content-Length" header even if the size is 0.
Remove workarounds for this from b2 and onedrive backends.
This fixes the issue for the webdav backend described here:
https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
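The underlying issue can be sketched with net/http directly: for client
requests, a zero ContentLength with a non-nil Body means "unknown
length" and triggers chunked encoding, while http.NoBody makes the zero
explicit (newUpload below is an illustrative helper, not the rest
package code):

```go
package main

import (
	"io"
	"net/http"
	"strings"
)

// newUpload builds a PUT request for a body of known size.
func newUpload(url string, body io.Reader, size int64) (*http.Request, error) {
	req, err := http.NewRequest("PUT", url, body)
	if err != nil {
		return nil, err
	}
	req.ContentLength = size
	if size == 0 {
		// ContentLength == 0 with a non-nil Body is treated as "unknown"
		// and would be sent with chunked encoding; http.NoBody makes the
		// zero explicit so a "Content-Length: 0" header is sent instead.
		req.Body = http.NoBody
	}
	return req, nil
}

func main() {
	req, _ := newUpload("https://example.com/empty", strings.NewReader(""), 0)
	_, _ = http.DefaultClient.Do(req)
}
```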
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config
Fixes #1010
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles. This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
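The core trick can be sketched with golang.org/x/sys/windows
(illustrative, not the package's exact code): open the handle with
FILE_SHARE_DELETE set, which plain os.Open does not do:

```go
//go:build windows

package main

import (
	"os"

	"golang.org/x/sys/windows"
)

// openShared opens path for reading with FILE_SHARE_DELETE, so other
// handles may rename or delete the file while it is open.
func openShared(path string) (*os.File, error) {
	p, err := windows.UTF16PtrFromString(path)
	if err != nil {
		return nil, err
	}
	h, err := windows.CreateFile(p,
		windows.GENERIC_READ,
		// FILE_SHARE_DELETE is the key difference from os.Open.
		windows.FILE_SHARE_READ|windows.FILE_SHARE_WRITE|windows.FILE_SHARE_DELETE,
		nil, windows.OPEN_EXISTING, windows.FILE_ATTRIBUTE_NORMAL, 0)
	if err != nil {
		return nil, err
	}
	return os.NewFile(uintptr(h), path), nil
}

func main() {
	f, err := openShared(`C:\tmp\file.txt`)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// The file can now be renamed or deleted while f remains open.
}
```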
This means that rclone will pick up tokens from concurrently running
rclones. This helps with Box, which only allows each refresh token to
be used once.
Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.
This also will retry the oauth up to 5 times at 1 second intervals.
See: https://forum.rclone.org/t/box-token-refresh-timing/8175
Before this change, doing a remote config using rclone authorize gave
this error:
    ERROR : Failed to save new token in config file: section 'remote' not found.
The token is saved a bit later anyway, so the error is needlessly
confusing. This commit suppresses that error.
https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
Before this change the rest package would forward all the headers on
an HTTP redirect, including the Authorization: header. This caused
problems when forwarded to a signed S3 URL ("Only one auth mechanism
allowed") as well as being a potential security risk.
After this change we use the go1.8+ mechanism for this instead of our
own. It does it correctly, removing the Authorization: header when
redirecting to a different host.
This hasn't fixed the behaviour for rclone compiled with go1.7.
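For rclone built with go1.7, where that automatic stripping does not
exist, a manual equivalent could be written in CheckRedirect; this is a
hedged sketch, not the rest package's actual code:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			// Carry headers forward by hand, but drop Authorization when
			// the redirect goes to a different host, mirroring what the
			// go1.8+ http.Client does internally.
			prev := via[len(via)-1]
			for k, v := range prev.Header {
				if k == "Authorization" && req.URL.Host != prev.URL.Host {
					continue
				}
				req.Header[k] = v
			}
			return nil
		},
	}
	resp, err := client.Get("https://example.com/")
	fmt.Println(resp, err)
}
```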
Fixes #2635