Allows compression of short arbitrary strings, returning a string using base64 URL encoding.
A generator for the tables is included and a few samples have been added. Add more to init.go.
Tested with fuzzing for crash resistance and symmetry, see fuzz.go
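As a purely illustrative sketch of the output encoding step (not this package's code, and assuming the unpadded base64 URL variant), the encode/decode symmetry the fuzzing checks looks like:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        compressed := []byte{0x1f, 0x8b, 0x03} // stand-in for table-compressed bytes
        // RawURLEncoding uses the URL-safe alphabet and omits '=' padding.
        s := base64.RawURLEncoding.EncodeToString(compressed)
        fmt.Println(s)

        // Symmetry: decoding must give back exactly the input bytes.
        decoded, err := base64.RawURLEncoding.DecodeString(s)
        fmt.Println(decoded, err)
    }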
1. Adds a SharedOptions data structure to oauthutil
2. Adds a config.ConfigToken option to oauthutil.SharedOptions
3. Updates the backends that have oauth functionality
Fixes #2849
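A backend then pulls these shared options in when registering itself, roughly like this (a sketch using the current rclone import paths, not any specific backend):

    package example

    import (
        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/lib/oauthutil"
    )

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "example",
            Description: "Example oauth-using backend (illustration only)",
            // SharedOptions carries the common oauth options (client_id,
            // client_secret and, with this change, the token option) so each
            // backend no longer declares them separately.
            Options: append(oauthutil.SharedOptions, []fs.Option{{
                Name: "extra_option",
                Help: "A backend specific option.",
            }}...),
        })
    }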
Before this change there was lots of duplicated code in all the
dircache-using backends to support DirMove.
This change factors this code into the dircache library.
Dircache was changed to:
- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards
Backends were changed to:
- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root
This fixes several corner cases, for example removing a non-existent
directory if FindRoot hasn't been called.
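A rough sketch of what a backend method looks like after the change (assuming the reworked dircache API; exact signatures may differ per backend):

    package sketch

    import (
        "context"

        "github.com/rclone/rclone/lib/dircache"
    )

    // Fs is a minimal backend fragment; real backends carry more state and
    // error handling.
    type Fs struct {
        dirCache *dircache.DirCache
    }

    // Mkdir no longer calls FindRoot or special-cases the root directory:
    // FindDir initialises the root on the caller's behalf.
    func (f *Fs) Mkdir(ctx context.Context, dir string) error {
        _, err := f.dirCache.FindDir(ctx, dir, true)
        return err
    }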
Before this change we passed both lpOverlapped and lpBytesReturned as NULL.
> If lpOverlapped is NULL, lpBytesReturned cannot be NULL. Even when
> an operation produces no output data, and lpOutBuffer can be NULL,
> the DeviceIoControl function makes use of the variable pointed to by
> lpBytesReturned. After such an operation, the value of the variable
> is without meaning.
After this change we set lpBytesReturned to a valid pointer.
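A minimal sketch of the corrected call using golang.org/x/sys/windows (not rclone's exact code), with FSCTL_SET_SPARSE as an example control code:

    //go:build windows

    package sketch

    import "golang.org/x/sys/windows"

    // fsctlSetSparse is FSCTL_SET_SPARSE, shown purely as an example
    // control code.
    const fsctlSetSparse = 0x000900C4

    // setSparse passes a nil lpOverlapped, so lpBytesReturned must point at
    // a real variable even though the operation produces no output data.
    func setSparse(h windows.Handle) error {
        var bytesReturned uint32
        return windows.DeviceIoControl(h, fsctlSetSparse, nil, 0, nil, 0, &bytesReturned, nil)
    }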
See: https://forum.rclone.org/t/errors-when-downloading-any-file-over-250mb-from-google-drive-windows-sparse-files/16889
It appends "--" to the rclone authorize command line before the client_id,
in case the client_id or client_secret has a leading "-" (OneDrive's
does), which would otherwise confuse the argument parsing.
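The effect of the "--" separator on POSIX-style flag parsing can be shown in isolation (an illustration only, using spf13/pflag as rclone does):

    package main

    import (
        "fmt"

        "github.com/spf13/pflag"
    )

    func main() {
        flags := pflag.NewFlagSet("demo", pflag.ContinueOnError)
        flags.Bool("verbose", false, "example flag")

        // Everything after "--" is treated as a positional argument, so a
        // client_id starting with "-" is not mistaken for an unknown flag.
        _ = flags.Parse([]string{"--", "-starts-with-dash", "secret"})
        fmt.Println(flags.Args()) // [-starts-with-dash secret]
    }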
On the Google backends (drive, google photos, and google cloud storage),
if headless is selected, do not open the browser.
This also adds a new option "auth-no-open-browser" for authorize, for
users who do not want the browser opened.
This should fix #3323.
Before this change two users could run `rclone config` for the same
backend on the same machine at the same time.
User A would get as far as starting the web server. User B would then
fail to start the webserver, but it would open the browser on the
/auth URL which would redirect the user to the login. This would then
cause user B to authenticate to user A's rclone.
This change fixes the problem in two ways.
Firstly it passes the state to the /auth call before redirecting and
checks it there, erroring with a 403 error if it doesn't match. This
would have fixed the problem on its own.
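A minimal sketch of that handler-side check (not rclone's exact code; the real handler also exchanges the code for a token and shuts the server down):

    package sketch

    import (
        "fmt"
        "net/http"
    )

    // authHandler rejects callbacks whose state parameter does not match
    // the one generated for this config session.
    func authHandler(expectedState string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            if r.FormValue("state") != expectedState {
                http.Error(w, "state did not match - please try again", http.StatusForbidden)
                return
            }
            fmt.Fprintln(w, "Success - you can close this window now")
        }
    }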
Secondly it delays opening the web browser until after the auth
webserver has started, which prevents the user from entering credentials
if another auth server is running.
Fixes #3573
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.
This turned out to be a misunderstanding of the rest docs so
- improved the rest.Call docs
- fixed the misunderstanding in the google photos backend
- fixed a similar misunderstanding in the onedrive backend
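The safe calling pattern (a sketch, assuming the current lib/rest signatures) is to check the error before touching the response, since on error the response may be nil:

    package sketch

    import (
        "context"
        "io"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/lib/rest"
    )

    // fetch shows the pattern only: check err first, as resp may be nil
    // when an error is returned.
    func fetch(ctx context.Context, srv *rest.Client, opts rest.Opts) (err error) {
        resp, err := srv.Call(ctx, &opts)
        if err != nil {
            return err // do not dereference resp here
        }
        defer fs.CheckClose(resp.Body, &err)
        _, err = io.Copy(io.Discard, resp.Body)
        return err
    }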
Before this change, if you passed an io.ReadCloser to opt.Body then the
transaction would close it. This happens as part of http.NewRequest
which documents that the io.Reader passed in will be upgraded to a
Closer if possible and closed as part of the Do call.
After this change, we wrap any io.ReadClosers to stop them being
upgraded. This means that they will never get closed and that the
caller should always close them.
This fixes a panic in the googlephotos integration tests.
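The wrapping trick is just hiding the Close method behind a plain io.Reader, roughly like this (a sketch, not the library's exact code):

    package sketch

    import "io"

    // readNoCloser hides any Close method, so http.NewRequest sees a plain
    // io.Reader, wraps it in a NopCloser itself, and the caller's Close is
    // never called by the transport.
    type readNoCloser struct {
        io.Reader
    }

    // protectBody wraps readers that would otherwise be upgraded to Closers.
    func protectBody(body io.Reader) io.Reader {
        if _, ok := body.(io.ReadCloser); ok {
            return readNoCloser{body}
        }
        return body
    }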
This was factored out of fstest as we were including the testing
environment in the main binary because of it.
This was causing opening the browser to fail because of 8243ff8bc8.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
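In practice this means each interface method and its implementations gain a leading ctx parameter, for example (an illustrative fragment, not the full fs interfaces):

    package sketch

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    // lister shows the shape of the change for one method.
    type lister interface {
        // Before: List(dir string) (fs.DirEntries, error)
        List(ctx context.Context, dir string) (fs.DirEntries, error)
    }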
The documentation states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
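So, as a usage sketch (field names as in lib/rest; the path, field names and file name are made up for illustration), setting the multipart fields alongside Body selects the multipart upload path:

    package sketch

    import (
        "context"
        "io"
        "net/url"

        "github.com/rclone/rclone/lib/rest"
    )

    // uploadExample posts a multipart upload with the file attached.
    func uploadExample(ctx context.Context, srv *rest.Client, in io.Reader) error {
        opts := rest.Opts{
            Method:               "POST",
            Path:                 "/upload",
            MultipartParams:      url.Values{"title": {"example"}},
            MultipartContentName: "file",        // form field for the file part
            MultipartFileName:    "example.txt", // file name sent in the part
            Body:                 in,
        }
        result := struct {
            ID string `json:"id"`
        }{}
        _, err := srv.CallJSON(ctx, &opts, nil, &result)
        return err
    }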
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows features that
require the fs package, like logging and custom errors, to be moved into
the fs package.
Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
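A backend's retry decision can then hand a server-requested delay back to the pacer, roughly like this (a sketch, assuming the lib/pacer and fs/fserrors helpers):

    package sketch

    import (
        "time"

        "github.com/rclone/rclone/fs/fserrors"
        "github.com/rclone/rclone/lib/pacer"
    )

    // shouldRetry is a backend-style helper: when the server has asked for
    // a specific delay (e.g. parsed from a Retry-After header), wrap the
    // error so the Calculator can honour it.
    func shouldRetry(err error, retryAfter time.Duration) (bool, error) {
        if retryAfter > 0 {
            return true, pacer.RetryAfterError(err, retryAfter)
        }
        return fserrors.ShouldRetry(err), err
    }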
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.
Allowing bursts is good for some backends (e.g. Google Drive).
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding. Fix this to always upload
with a "Content-Length" header even if the size is 0.
Remove workarounds for this from b2 and onedrive backends.
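The underlying net/http behaviour can be illustrated in isolation (a sketch, not rclone's code):

    package sketch

    import "net/http"

    // emptyBodyRequest: a zero ContentLength with an ordinary non-nil Body
    // is treated as "unknown length" and sent chunked, whereas http.NoBody
    // (or a nil body) produces an explicit "Content-Length: 0" header.
    func emptyBodyRequest(url string) (*http.Request, error) {
        req, err := http.NewRequest("PUT", url, http.NoBody)
        if err != nil {
            return nil, err
        }
        req.ContentLength = 0
        return req, nil
    }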
This fixes the issue for the webdav backend described here:
https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config
Fixes #1010
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles. This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
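The essential difference is the share mode passed to CreateFile; a sketch of the idea using golang.org/x/sys/windows (not this package's exact code):

    //go:build windows

    package sketch

    import (
        "os"

        "golang.org/x/sys/windows"
    )

    // openShared opens with FILE_SHARE_DELETE (in addition to read/write
    // sharing), which is what lets other processes rename or delete the
    // file while this handle is open.
    func openShared(path string) (*os.File, error) {
        p, err := windows.UTF16PtrFromString(path)
        if err != nil {
            return nil, err
        }
        h, err := windows.CreateFile(
            p,
            windows.GENERIC_READ,
            windows.FILE_SHARE_READ|windows.FILE_SHARE_WRITE|windows.FILE_SHARE_DELETE,
            nil,
            windows.OPEN_EXISTING,
            windows.FILE_ATTRIBUTE_NORMAL,
            0,
        )
        if err != nil {
            return nil, err
        }
        return os.NewFile(uintptr(h), path), nil
    }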
This means that rclone will pick up tokens from concurrently running
rclones. This helps for Box which only allows each refresh token to
be used once.
Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.
This also retries the oauth refresh up to 5 times at 1 second intervals.
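The retry itself is just a short loop, roughly (a sketch of the behaviour only, not the oauthutil code):

    package sketch

    import (
        "time"

        "github.com/rclone/rclone/fs"
        "golang.org/x/oauth2"
    )

    // tokenWithRetries retries the token fetch a few times before giving up.
    func tokenWithRetries(ts oauth2.TokenSource) (token *oauth2.Token, err error) {
        const maxTries = 5
        for try := 1; try <= maxTries; try++ {
            token, err = ts.Token()
            if err == nil {
                return token, nil
            }
            fs.Debugf(nil, "Token refresh failed (try %d/%d): %v", try, maxTries, err)
            if try < maxTries {
                time.Sleep(time.Second)
            }
        }
        return nil, err
    }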
See: https://forum.rclone.org/t/box-token-refresh-timing/8175
Before this change doing a remote config using rclone authorize gave
this error. The token is saved a bit later anyway so the error is
needlessly confusing.
ERROR : Failed to save new token in config file: section 'remote' not found.
This commit suppresses that error.
https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
Before this change the rest package would forward all the headers on
an HTTP redirect, including the Authorization: header. This caused
problems when forwarded to a signed S3 URL ("Only one auth mechanism
allowed") as well as being a potential security risk.
After this change we use the go1.8+ mechanism for doing this instead of
our own. This does it correctly, removing the Authorization: header when
redirecting to a different host.
This hasn't fixed the behaviour for rclone compiled with go1.7.
Fixes #2635
The race detector currently detects a race with len(chan) against
close(chan).
See: https://github.com/golang/go/issues/27070
Skip the tests which trip this bug under the race detector.
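One common idiom for skipping such tests (a sketch; not necessarily how rclone does it) is a build-tag guarded constant the tests consult:

    package sketch

    import "testing"

    // raceEnabled would normally live in a file guarded by a "race" build
    // tag, with a companion untagged file declaring it false; shown inline
    // here for brevity.
    const raceEnabled = true

    func TestLenAgainstClose(t *testing.T) {
        if raceEnabled {
            t.Skip("skipping: trips https://github.com/golang/go/issues/27070 under the race detector")
        }
        // ... test body exercising len(ch) concurrently with close(ch) ...
    }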
This unifies the 3 methods of reading config:
* command line
* environment variable
* config file
And allows them all to be configured in all places. This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.
The backend changes are:
* Use the new configmap.Mapper parameter
* Use configstruct to parse it into an Options struct
* Add all config to []fs.Option including defaults and help
* Remove all uses of pflag
* Remove all uses of config.FileGet
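Put together, a backend now registers and reads its options roughly like this (a sketch of the pattern, not any particular backend; exact signatures and import paths have changed over time, e.g. NewFs later gained a context parameter):

    package example

    import (
        "errors"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/config/configmap"
        "github.com/rclone/rclone/fs/config/configstruct"
    )

    // Options holds the backend config; the struct tags correspond to the
    // option names registered below.
    type Options struct {
        Endpoint string `config:"endpoint"`
    }

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "example",
            Description: "Example backend (illustration only)",
            NewFs:       NewFs,
            // The []fs.Option is the single source of truth for the
            // backend's options, defaults and help, whether they arrive via
            // the command line, environment variables or the config file.
            Options: []fs.Option{{
                Name:    "endpoint",
                Help:    "Endpoint for the service.",
                Default: "https://example.com",
            }},
        })
    }

    // NewFs reads its config through the configmap.Mapper rather than pflag
    // or config.FileGet.
    func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
        opt := new(Options)
        if err := configstruct.Set(m, opt); err != nil {
            return nil, err
        }
        // ... build and return the Fs using opt.Endpoint ...
        return nil, errors.New("sketch only")
    }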