Commit Graph

2139 Commits

Manoj Ghosh
ba8e538173 oracleobjectstorage: make specifying compartmentid optional 2024-12-03 17:54:00 +00:00
Georg Welzel
40111ba5e1
pcloud: fix failing large file uploads - fixes #8147
This changes the OpenWriterAt implementation to make client/fd
handling atomic.

This PR stabilizes multi-threaded uploads of bigger files. The root
cause boils down to an old "fun" property of pcloud's fileops API:
sessions are bound to TCP connections. This forces us to use an HTTP
client with only a single connection underneath.

With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all subsequent chunks fail. The probability of this
increases with larger files.

As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and each retry) gets its own connection.
2024-12-03 17:52:44 +00:00
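
A minimal sketch of the per-chunk connection handling described above,
assuming a hypothetical uploadChunk helper (not rclone's actual code):
each chunk builds its own single-connection HTTP client, so a broken
TCP connection fails only that chunk rather than every chunk after it.

    package sketch

    import (
        "bytes"
        "context"
        "fmt"
        "net/http"
    )

    // uploadChunk creates a fresh client per chunk so each chunk (and
    // each retry) gets its own TCP connection, matching pcloud's
    // session-per-connection fileops API.
    func uploadChunk(ctx context.Context, url string, chunk []byte) error {
        client := &http.Client{
            Transport: &http.Transport{
                MaxConnsPerHost:   1,
                DisableKeepAlives: true,
            },
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, bytes.NewReader(chunk))
        if err != nil {
            return err
        }
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("chunk upload failed: %s", resp.Status)
        }
        return nil
    }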
Nick Craig-Wood
964fcd5f59 s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3

However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we passed to
`CreateMultipart` when calling `UploadPart`, rather than the ones
returned from `CreateMultipart`, which fixes the problem.

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-21 11:04:49 +00:00
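
The shape of the fix, sketched against a recent aws-sdk-go-v2
(illustrative only, not rclone's exact code; bucket here may be the
CEPH multitenant form "tenant:bucket"):

    package sketch

    import (
        "bytes"
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    func uploadFirstPart(ctx context.Context, client *s3.Client, bucket, key string, part []byte) error {
        mpu, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
            Bucket: &bucket,
            Key:    &key,
        })
        if err != nil {
            return err
        }
        _, err = client.UploadPart(ctx, &s3.UploadPartInput{
            // Reuse the Bucket and Key we sent to CreateMultipart,
            // not mpu.Bucket/mpu.Key: CEPH's reply omits the
            // "tenant:" prefix, which would cause a 404.
            Bucket:     &bucket,
            Key:        &key,
            UploadId:   mpu.UploadId,
            PartNumber: aws.Int32(1),
            Body:       bytes.NewReader(part),
        })
        return err
    }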
Nick Craig-Wood
6079cab090 s3: fix download of compressed files from Cloudflare R2 - fixes #8137
Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:

    corrupted on transfer: sizes differ src 0 vs dst 999

This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.

This fixes the problem by disabling the middleware that does that
overriding.
2024-11-20 12:08:23 +00:00
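
A hedged sketch of disabling the overriding middleware via the
smithy-go middleware API; the middleware ID "DisableAcceptEncodingGzip"
is an assumption about the SDK's internal accept-encoding
customization and should be verified against the SDK version in use:

    package sketch

    import (
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
        "github.com/aws/smithy-go/middleware"
    )

    // newClientWithoutGzipOverride removes the finalize-step middleware
    // that rewrites our Accept-Encoding: gzip header.
    func newClientWithoutGzipOverride(cfg aws.Config) *s3.Client {
        return s3.NewFromConfig(cfg, func(o *s3.Options) {
            o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
                _, err := stack.Finalize.Remove("DisableAcceptEncodingGzip")
                return err
            })
        })
    }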
Nick Craig-Wood
bf57087a6e s3: fix testing tiers which don't exist except on AWS 2024-11-20 12:08:23 +00:00
Nick Craig-Wood
01ccf204f4 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, when writing to a local backend with --metadata
and --links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link, not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and which was incorrectly setting the
target of the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:20:18 +00:00
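
A sketch of the link-safe variants via golang.org/x/sys/unix (function
name and error handling are illustrative, not the rclone code):

    package sketch

    import (
        "runtime"

        "golang.org/x/sys/unix"
    )

    // setLinkOwnerAndMode applies ownership and permissions to the
    // link itself rather than its target.
    func setLinkOwnerAndMode(path string, uid, gid int, mode uint32) error {
        if err := unix.Lchown(path, uid, gid); err != nil {
            return err
        }
        err := unix.Fchmodat(unix.AT_FDCWD, path, mode, unix.AT_SYMLINK_NOFOLLOW)
        if err == unix.EOPNOTSUPP && runtime.GOOS == "linux" {
            // Linux does not support setting permissions on symlinks;
            // rclone emits a debug message in this case.
            return nil
        }
        return err
    }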
Nick Craig-Wood
84b64dcdf9 Revert "Merge commit from fork"
This reverts commit 1e2b354456.
2024-11-14 16:20:06 +00:00
Nick Craig-Wood
1e2b354456
Merge commit from fork
Before this change, when writing to a local backend with --metadata
and --links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link, not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and which was incorrectly setting the
target of the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:13:57 +00:00
Nick Craig-Wood
f639cd9c78 onedrive: fix integration tests after precision change
We changed the precision of the onedrive personal backend in
c053429b9c from 1ms to 1s.

However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision`, which compares with the
backend's precision, so they hopefully won't break again.
2024-11-12 13:09:15 +00:00
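
The idea behind comparing with precision, as a minimal sketch (not
fstest's exact implementation):

    package sketch

    import "time"

    // timeEqualWithPrecision reports whether two times match to within
    // the precision the backend advertises, e.g. time.Second for
    // OneDrive personal after the change above.
    func timeEqualWithPrecision(want, got time.Time, precision time.Duration) bool {
        dt := want.Sub(got)
        return dt >= -precision && dt <= precision
    }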
Nick Craig-Wood
173b2ac956 serve sftp: update github.com/pkg/sftp to v1.13.7 and fix deadlock in tests
Before this change, upgrading to v1.13.7 caused a deadlock in the tests.

This was caused by additional locking in the sftp package exposing a
bad choice by the rclone code.

See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
2024-11-11 18:15:00 +00:00
Nick Craig-Wood
1317fdb9b8 build: fix comments after golangci-lint upgrade 2024-11-11 18:03:36 +00:00
Nick Craig-Wood
ee72554fb9 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-08 19:34:09 +00:00
Nick Craig-Wood
abb4f77568 yandex: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first, then deleting it if the copy succeeded, or
renaming it back if not.
2024-11-08 18:17:55 +00:00
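
The same rename-copy-restore pattern appears in the sugarsync,
onedrive, and dropbox fixes below. A sketch in terms of rclone's
optional Fs features (the temporary name and error handling are
illustrative):

    package sketch

    import (
        "context"
        "errors"

        "github.com/rclone/rclone/fs"
    )

    // copyOverExisting moves the existing object out of the way, does
    // the server-side copy, then deletes the moved-aside object on
    // success or renames it back on failure.
    func copyOverExisting(ctx context.Context, f fs.Fs, src, dst fs.Object, remote string) (fs.Object, error) {
        doMove, doCopy := f.Features().Move, f.Features().Copy
        if doMove == nil || doCopy == nil {
            return nil, errors.New("backend must support server-side move and copy")
        }
        saved, err := doMove(ctx, dst, remote+".rclone-rename") // hypothetical temp name
        if err != nil {
            return nil, err
        }
        newObj, err := doCopy(ctx, src, remote)
        if err != nil {
            _, _ = doMove(ctx, saved, remote) // copy failed: rename back
            return nil, err
        }
        return newObj, saved.Remove(ctx) // copy succeeded: delete original
    }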
Nick Craig-Wood
ca2b27422f sugarsync: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first, then deleting it if the copy succeeded, or
renaming it back if not.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
740f6b318c putio: fix server side copying over existing object
This was causing a conflict error. This was fixed by checking for the
existing object and deleting it after the file was server-side copied.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
f307d929a8 onedrive: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first, then deleting it if the copy succeeded, or
renaming it back if not.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ceea6753ee dropbox: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first, then deleting it if the copy succeeded, or
renaming it back if not.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
3e14ba54b8 gofile: fix server side copying over existing object
This was creating a duplicate.
2024-11-08 14:01:51 +00:00
Dimitar Ivanov
40159e7a16
sftp: allow inline ssh public certificate for sftp
Currently rclone allows us to specify the path to a public ssh
certificate file.

That works well for cases where we can specify a key path, such as
local environments.

If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone)
there is currently a limitation that users can specify only the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
2024-10-25 10:40:57 +01:00
Nick Craig-Wood
175aa07cdd onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-10-23 13:00:32 +01:00
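
A sketch of the broadened check (simplified; Retry-After may also be
an HTTP-date, which this ignores):

    package sketch

    import (
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter returns the server-requested delay for 429 and, after
    // this change, 503 responses.
    func retryAfter(resp *http.Response) (time.Duration, bool) {
        if resp == nil ||
            (resp.StatusCode != http.StatusTooManyRequests &&
                resp.StatusCode != http.StatusServiceUnavailable) {
            return 0, false
        }
        secs, err := strconv.Atoi(resp.Header.Get("Retry-After"))
        if err != nil || secs <= 0 {
            return 0, false
        }
        return time.Duration(secs) * time.Second, true
    }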
Kaloyan Raev
75257fc9cd s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server-side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
2024-10-22 21:15:04 +01:00
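
The workaround in miniature, with a cut-down options struct standing
in for the real S3 backend options:

    package sketch

    import "math"

    type Options struct {
        Provider   string
        CopyCutoff int64 // copies above this size use UploadPartCopy
    }

    // applyStorjQuirks raises the cutoff to the maximum so rclone
    // never attempts a multi-part server-side copy on Storj.
    func applyStorjQuirks(opt *Options) {
        if opt.Provider == "Storj" {
            opt.CopyCutoff = math.MaxInt64
        }
    }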
Nick Craig-Wood
53ff3b3b32 s3: add Selectel as a provider 2024-10-22 19:54:33 +01:00
Nick Craig-Wood
264c9fb2c0 drive: implement rclone backend rescue to rescue orphaned files
Fixes #4166
2024-10-21 10:15:01 +01:00
Diego Monti
a19ddffe92 s3: add Wasabi eu-south-1 region
Ref. https://docs.wasabi.com/docs/what-are-the-service-urls-for-wasabi-s-different-storage-regions
2024-10-14 14:05:33 +02:00
Nick Craig-Wood
1ca3f12672 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-10-03 14:31:56 +01:00
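
The fix has this shape: treat the optional header value as possibly
absent instead of dereferencing it unconditionally (illustrative, not
rclone's exact code):

    package sketch

    // storageClassOf tolerates a download URL server that does not
    // send X-Amz-Storage-Class; the crash came from assuming the
    // parsed value was always present.
    func storageClassOf(parsed *string) string {
        if parsed == nil {
            return "" // header missing: fall back to the default class
        }
        return *parsed
    }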
Matthias Gatto
9614fc60f2
s3: add Outscale provider
Signed-off-by: matthias.gatto <matthias.gatto@outscale.com>
Co-authored-by: André Tran <andre.tran@outscale.com>
2024-10-02 10:26:41 +01:00
lostb1t
51db76fd47 Add iCloud Drive backend 2024-10-02 10:19:11 +01:00
Noam Ross
17e7ccfad5 drive: add support for markdown format 2024-09-30 17:22:32 +01:00
Nick Craig-Wood
c053429b9c onedrive: fix time precision for OneDrive personal
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.

The precision was set to 1ms as part of:

1473de3f04 onedrive: add metadata support

which was released in v1.66.0.

However it appears that not all OneDrive personal accounts support 1ms
time precision, and that Microsoft may be migrating accounts away from
this to backends which only support 1s precision.

Fixes #8101
2024-09-30 11:34:06 +01:00
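
Rclone backends advertise their modification time precision through
the Precision method; the change is this shape (drive type detection
simplified, field name hypothetical):

    package sketch

    import "time"

    type Fs struct {
        driveType string // e.g. "personal" or "business"
    }

    // Precision advertises 1s for OneDrive personal accounts and keeps
    // 1ms elsewhere.
    func (f *Fs) Precision() time.Duration {
        if f.driveType == "personal" {
            return time.Second
        }
        return time.Millisecond
    }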
nielash
3c7ad8d961 box: fix server-side copying a file over existing dst - fixes #3511
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.

This change fixes the error by copying to a temporary name first, then moving it
to the real name.

There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.

This should (hopefully) fix 8 bisync integration tests.
2024-09-29 18:37:52 -04:00
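
A sketch of the copy-then-move workaround in terms of rclone's
optional Fs features (temporary name illustrative, nil checks
omitted):

    package sketch

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    // copyViaTempName avoids the 409 by copying to a name that cannot
    // collide, then moving over the real name (moves may overwrite
    // where direct copies do not).
    func copyViaTempName(ctx context.Context, f fs.Fs, src fs.Object, remote string) (fs.Object, error) {
        tmpObj, err := f.Features().Copy(ctx, src, remote+".tmp")
        if err != nil {
            return nil, err
        }
        return f.Features().Move(ctx, tmpObj, remote)
    }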
Leandro Piccilli
94997d25d2 gcs: add access token auth with --gcs-access-token 2024-09-27 17:37:07 +01:00
Nick Craig-Wood
22e13eea47 gphotos: implement --gphotos-proxy to allow download of full resolution media
This works in conjunction with the gphotosdl tool

https://github.com/rclone/gphotosdl
2024-09-26 12:57:28 +01:00
Nick Craig-Wood
de9b593f02 googlephotos: remove noisy debugging statements 2024-09-26 12:52:53 +01:00
Nick Craig-Wood
cb9f4f8461 build: replace "golang.org/x/exp/slices" with "slices" now go1.21 is required 2024-09-25 16:03:43 +01:00
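
The mechanical change, for reference (the stdlib package has the same
API surface for the common calls):

    package sketch

    // Before: import "golang.org/x/exp/slices"
    import "slices" // stdlib since go1.21

    // dedupe works unchanged with the stdlib package.
    func dedupe(names []string) []string {
        slices.Sort(names)
        return slices.Compact(names)
    }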
wiserain
f8d782c02d
pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.

This commit introduces a new function that can correctly unwrap object
info from both regular objects and `fs.OverrideRemote` types, ensuring
accurate cid/gcid calculations for uploads in all scenarios.
2024-09-21 10:22:31 +09:00
nielash
462a1cf491 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which
benefit most from cloning). Symlinks in --links mode continue to be
tossed back to be handled by rclone's special translation logic.

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-09-20 17:43:52 +01:00
Nick Craig-Wood
0b7b3cacdc azureblob: add --azureblob-use-az to force the use of the Azure CLI for auth
Setting this can be useful if you wish to use the az CLI for
authentication on a host with a System Managed Identity that you do
not want to use.

Fixes #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
976103d50b azureblob: add --azureblob-disable-instance-discovery
If set, this skips requesting Microsoft Entra instance metadata.

See #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
192524c004 s3: add initial --s3-directory-bucket to support AWS Directory Buckets
This will ensure no Content-Md5 headers are sent and ensure ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether single or multipart uploaded.

This also sets "no_check_bucket = true".

This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.

See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
2024-09-19 12:01:24 +01:00
Lawrence Murray
c669f4e218
backend/protondrive: improve performance of Proton Drive backend
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then unnecessarily get a link for that remote path
again by calling getObjectLink(). This change removes that unnecessary
call and tidies up a couple of related functions that had unused
parameters.

Related to performance issues reported in #7322 and #7413
2024-09-18 18:15:24 +01:00
Nick Craig-Wood
1a9e6a527d ftp: implement --ftp-no-check-upload to allow upload to write only dirs
Fixes #8079
2024-09-18 12:57:01 +01:00
buengese
a2a0388036 zoho: make upload cutoff configurable 2024-09-17 20:40:42 +01:00
buengese
48543d38e8 zoho: add support for private spaces 2024-09-17 20:40:42 +01:00
buengese
eceb390152 zoho: try to handle rate limits a bit better 2024-09-17 20:40:42 +01:00
buengese
f4deffdc96 zoho: print clear error message when missing oauth scope 2024-09-17 20:40:42 +01:00
buengese
c172742cef zoho: switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API 2024-09-17 20:40:42 +01:00
buengese
7daed30754 zoho: use download server to accelerate downloads
Co-authored-by: rishi.sridhar <rishi.sridhar@zohocorp.com>
2024-09-17 20:40:42 +01:00
quiescens
b1b4c7f27b
opendrive: add about support to backend 2024-09-17 17:20:42 +01:00
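
Adding about support means implementing the optional About method
returning an fs.Usage; a hedged sketch with a hypothetical getUserInfo
call standing in for the opendrive API:

    package sketch

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    type Fs struct{}

    // userInfo is a stand-in for the opendrive user info response.
    type userInfo struct {
        StorageUsed int64
        MaxStorage  int64
    }

    func (f *Fs) getUserInfo(ctx context.Context) (*userInfo, error) {
        // hypothetical API call
        return &userInfo{}, nil
    }

    // About reports quota usage for `rclone about remote:`.
    func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
        info, err := f.getUserInfo(ctx)
        if err != nil {
            return nil, err
        }
        return &fs.Usage{
            Total: fs.NewUsageValue(info.MaxStorage),
            Used:  fs.NewUsageValue(info.StorageUsed),
            Free:  fs.NewUsageValue(info.MaxStorage - info.StorageUsed),
        }, nil
    }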
wiserain
ed84553dc1
pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation
of existing login methods and its requirement for additional
verification.

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing 
rest.Client and incorporate the necessary headers including 
x-captcha-token for each request.

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.

Fixes #7950 #8005
2024-09-18 01:09:21 +09:00
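
A sketch of the header-injection idea behind the new pikpakClient, at
the http.RoundTripper level rather than rclone's rest.Client wrapper:

    package sketch

    import "net/http"

    // captchaTransport injects the rclone-managed captcha token into
    // every request.
    type captchaTransport struct {
        base  http.RoundTripper
        token func() string // returns the current captcha token
    }

    func (t *captchaTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        r := req.Clone(req.Context())
        r.Header.Set("x-captcha-token", t.token())
        return t.base.RoundTrip(r)
    }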
Nick Craig-Wood
c94edbb76b webdav: nextcloud: implement backoff and retry for 423 LOCKED errors
When uploading chunked files to Nextcloud, it gives a 423 error while
it is merging the chunks.

This change waits for an exponentially increasing amount of time for
the error to clear.

If, after we have received a 423 error, we receive a 404 error, then
we assume all is good, as this is what appears to happen in practice.

Fixes #7109
2024-09-17 16:46:02 +01:00
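
A sketch of the backoff loop (delays and response classification
simplified; do is a hypothetical stand-in for the status request):

    package sketch

    import (
        "context"
        "errors"
        "net/http"
        "time"
    )

    // waitForMerge retries while Nextcloud answers 423 LOCKED,
    // doubling the delay each time; a 404 after a 423 is treated as
    // success.
    func waitForMerge(ctx context.Context, do func() (*http.Response, error)) error {
        delay := 100 * time.Millisecond
        seen423 := false
        for {
            resp, err := do()
            if err != nil {
                return err
            }
            resp.Body.Close()
            switch {
            case resp.StatusCode == http.StatusLocked: // 423: still merging
                seen423 = true
                select {
                case <-time.After(delay):
                case <-ctx.Done():
                    return ctx.Err()
                }
                delay *= 2
            case resp.StatusCode == http.StatusNotFound && seen423:
                return nil // merge completed and the chunks are gone
            case resp.StatusCode >= 200 && resp.StatusCode < 300:
                return nil
            default:
                return errors.New(resp.Status)
            }
        }
    }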