This commit reorganises the oauth code to use our own config struct
which holds all the information for the normal oauth method and also
for the client credentials flow method.
It updates all backends which use lib/oauthutil to use the new config
struct. This shouldn't change any functionality.
It also adds code for handling the client credentials flow config,
which doesn't require the use of a browser and doesn't have or need a
refresh token.
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
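As a rough illustration of the combined config idea above (a sketch only - the field and method names here are assumptions, not the actual lib/oauthutil API), one struct can drive either the browser-based flow or the client credentials flow:

```go
package oauthutil

import (
	"context"
	"net/http"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/clientcredentials"
)

// Config is an illustrative combined oauth config.
type Config struct {
	ClientID     string
	ClientSecret string
	AuthURL      string
	TokenURL     string
	Scopes       []string
	RedirectURL  string
}

// oauth2Config returns the normal three-legged oauth config (browser flow).
func (c *Config) oauth2Config() *oauth2.Config {
	return &oauth2.Config{
		ClientID:     c.ClientID,
		ClientSecret: c.ClientSecret,
		Scopes:       c.Scopes,
		RedirectURL:  c.RedirectURL,
		Endpoint: oauth2.Endpoint{
			AuthURL:  c.AuthURL,
			TokenURL: c.TokenURL,
		},
	}
}

// clientCredentialsClient returns an http.Client using the client
// credentials flow - no browser and no refresh token involved.
func (c *Config) clientCredentialsClient(ctx context.Context) *http.Client {
	cc := &clientcredentials.Config{
		ClientID:     c.ClientID,
		ClientSecret: c.ClientSecret,
		TokenURL:     c.TokenURL,
		Scopes:       c.Scopes,
	}
	return cc.Client(ctx)
}
```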
This changes the OpenWriterAt implementation to make client/fd
handling atomic.
This PR stabilizes multi-threaded uploads of bigger files. The root
cause boils down to the old "fun" property of pCloud's fileops API:
sessions are bound to TCP connections. This forces us to use an HTTP
client with only a single connection underneath.
With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks fail. The probability of this happening
increases with larger files.
As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
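A minimal sketch of the per-chunk approach (the helper names are hypothetical, not the backend's real ones): each chunk builds its own single-connection client, so a broken connection only affects that chunk's attempt.

```go
package pcloud

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// singleConnClient returns an HTTP client whose transport keeps at most one
// connection to the host, matching the "session is bound to one TCP
// connection" constraint of the fileops API.
func singleConnClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost:     1,
			MaxIdleConnsPerHost: 1,
		},
	}
}

// writeChunk is a hypothetical helper: each call (and each retry) gets a
// fresh client and hence a fresh connection.
func writeChunk(ctx context.Context, url string, chunk io.Reader) error {
	client := singleConnClient()
	defer client.CloseIdleConnections()
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, chunk)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("chunk upload failed: %s", resp.Status)
	}
	return nil
}
```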
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:
https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused the
request to fail with a 404. This may be a CEPH bug, but it is easy to
work around.
This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the one returned from
`CreateMultipart` which fixes the problem.
See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
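Illustrating the approach with the AWS SDK for Go v2 (a sketch, not the rclone code, and field shapes can vary between SDK versions): keep using the bucket/key we passed to `CreateMultipartUpload`, and only take the `UploadId` from the response.

```go
package s3workaround

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadFirstPart shows the workaround: bucket and key come from our own
// CreateMultipartUpload request, not from its response, which on CEPH can be
// missing the "tenant:" prefix.
func uploadFirstPart(ctx context.Context, client *s3.Client, bucket, key string, body io.Reader) error {
	mpu, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket), // e.g. "tenant:bucket" on CEPH
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String(bucket), // reuse our bucket, not mpu.Bucket
		Key:        aws.String(key),    // reuse our key, not mpu.Key
		UploadId:   mpu.UploadId,
		PartNumber: aws.Int32(1),
		Body:       body,
	})
	return err
}
```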
Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:
    corrupted on transfer: sizes differ src 0 vs dst 999
This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.
This fixes the problem by disabling the middleware that does that
overriding.
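Roughly, with the AWS SDK for Go v2 a customization middleware can be removed by ID when building the client. This is only a sketch of that pattern - the middleware ID "AcceptEncodingGzip" and the Finalize step below are assumptions, not necessarily what rclone removes.

```go
package s3workaround

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go/middleware"
)

// newClientWithoutGzipOverride sketches dropping the SDK's accept-encoding
// customization so our own Accept-Encoding: gzip header survives.
func newClientWithoutGzipOverride(cfg aws.Config) *s3.Client {
	return s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
			// Assumed middleware ID and step - adjust to the SDK's actual registration.
			_, err := stack.Finalize.Remove("AcceptEncodingGzip")
			return err
		})
	})
}
```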
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link, not the link itself.
This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.
This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime/mtime and was incorrectly setting the target of
the symlink for btime.
See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
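A sketch of the link-safe variants on Unix (illustrative helper names, not the exact local backend code):

```go
//go:build unix

package local

import (
	"errors"

	"golang.org/x/sys/unix"
)

// setLinkOwner changes ownership of the symlink itself rather than its target.
func setLinkOwner(path string, uid, gid int) error {
	return unix.Lchown(path, uid, gid)
}

// setLinkMode tries to change the mode of the symlink itself. Linux does not
// support this and returns EOPNOTSUPP, which the caller can downgrade to a
// debug message.
func setLinkMode(path string, mode uint32) error {
	err := unix.Fchmodat(unix.AT_FDCWD, path, mode, unix.AT_SYMLINK_NOFOLLOW)
	if errors.Is(err, unix.EOPNOTSUPP) {
		return nil // not supported on Linux - log at debug level instead
	}
	return err
}
```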
We changed the precision of the onedrive personal backend in
c053429b9c from 1ms to 1s.
However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision`, which compares using the
configured precision, so hopefully they won't break again.
Before this change, upgrading the sftp package to v1.13.7 caused a
deadlock in the tests.
This was caused by additional locking in the sftp package exposing a
bad choice in the rclone code.
See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
Currently rclone allows us to specify the path to a public ssh
certificate file.
That works great for cases where we can specify a key path, like local
environments.
However, if users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone),
there is currently a limitation that users can only specify the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server-side copies.
This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.
The precision was set to 1ms as part of:
1473de3f04 onedrive: add metadata support
which was released in v1.66.0.
However it appears not all OneDrive personal accounts support 1ms time
precision and that Microsoft may be migrating accounts away from this
to backends which only support 1s precision.
Fixes #8101
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.
This change fixes the error by copying to a temporary name first, then moving it
to the real name.
There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.
This should (hopefully) fix 8 bisync integration tests.
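In outline, the workaround looks like this (a sketch using rclone-style Copy/Move signatures; copierMover, copyOverwrite and the temporary-name scheme are illustrative, not the exact backend code):

```go
package box

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/random"
)

// copierMover is the subset of backend methods the sketch needs.
type copierMover interface {
	Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error)
	Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error)
}

// copyOverwrite copies src to a temporary name first, then moves the result
// onto the real name, avoiding the item_name_in_use error.
func copyOverwrite(ctx context.Context, f copierMover, src fs.Object, remote string) (fs.Object, error) {
	tmpRemote := fmt.Sprintf("%s.%s.tmp", remote, random.String(8))
	tmpObj, err := f.Copy(ctx, src, tmpRemote) // server-side copy to a free name
	if err != nil {
		return nil, err
	}
	return f.Move(ctx, tmpObj, remote) // rename over the existing destination
}
```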
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.
This commit introduces a new function that can correctly unwrap
object info from both regular objects and `fs.OverrideRemote` types,
ensuring uploads use accurate cid/gcid calculations in all scenarios.
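Conceptually, the new helper walks through wrappers until it reaches the underlying object info. This sketch assumes wrappers such as `fs.OverrideRemote` expose an UnWrap-style accessor; the exact interface used by the real function may differ.

```go
package pikpak

import "github.com/rclone/rclone/fs"

// unWrapObjectInfo unwraps plain objects with fs.UnWrapObject and falls back
// to an assumed UnWrap accessor for ObjectInfo wrappers.
func unWrapObjectInfo(oi fs.ObjectInfo) fs.ObjectInfo {
	if o, ok := oi.(fs.Object); ok {
		return fs.UnWrapObject(o) // regular (possibly wrapped) object
	}
	// Assumption: the wrapper exposes its inner ObjectInfo via UnWrap.
	if u, ok := oi.(interface{ UnWrap() fs.ObjectInfo }); ok {
		return unWrapObjectInfo(u.UnWrap())
	}
	return oi
}
```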
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.
After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.
Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). Symlinks in --links mode continue to be tossed
back to be handled by rclone's special translation logic.
See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
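A sketch of how the flags interact on macOS (using golang.org/x/sys/unix; the helper and its decision logic are illustrative, not the actual implementation):

```go
//go:build darwin

package local

import (
	"os"

	"golang.org/x/sys/unix"
)

// cloneFile clones src to dst. With copyLinks a symlink's target is cloned
// (the link is followed); otherwise symlinks are left for rclone's normal
// --links translation and only regular files are cloned.
func cloneFile(src, dst string, copyLinks bool) (handled bool, err error) {
	fi, err := os.Lstat(src)
	if err != nil {
		return false, err
	}
	flags := unix.CLONE_NOFOLLOW // operate on the named file itself
	if fi.Mode()&os.ModeSymlink != 0 {
		if !copyLinks {
			return false, nil // hand symlinks back to the translation logic
		}
		flags = 0 // follow the link so the target gets cloned
	}
	return true, unix.Clonefile(src, dst, flags)
}
```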
This will ensure no Content-Md5 headers are sent and that ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether uploaded as a single part or with multipart upload.
This also sets "no_check_bucket = true".
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call, and tidies up a couple of related functions that had unused
parameters.
Related to performance issues reported in #7322 and #7413
This addresses the login issue caused by pikpak's recent discontinuation
of existing login methods and its requirement for additional verification.
To resolve this, we've made the following changes:
1. Similar to lib/oauthutil, we've integrated a mechanism to handle
captcha tokens.
2. A new pikpakClient has been introduced to wrap the existing
rest.Client and incorporate the necessary headers, including
x-captcha-token, for each request (see the sketch below).
3. Several options have been added/removed to support persistent
user/client identification.
* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid.
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed
by rclone, similar to the OAuth token.
Fixes #7950 #8005
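A rough sketch of the wrapper idea from point 2 above (field and method shapes are illustrative, not the exact code added):

```go
package pikpak

import (
	"context"
	"net/http"

	"github.com/rclone/rclone/lib/rest"
)

// pikpakClient wraps rest.Client so every request carries the captcha token.
type pikpakClient struct {
	c            *rest.Client
	captchaToken string
}

// CallJSON injects the x-captcha-token header and delegates to rest.Client.
func (p *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request, response interface{}) (*http.Response, error) {
	if opts.ExtraHeaders == nil {
		opts.ExtraHeaders = map[string]string{}
	}
	opts.ExtraHeaders["x-captcha-token"] = p.captchaToken
	return p.c.CallJSON(ctx, opts, request, response)
}
```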