If you use the OAuth 2.0 with JWT authentication method on the Box backend,
rclone fails to refresh the token before Box expires it. If this happens
mid-transfer the transfer is aborted. This fix treats tokens as expiring
2 minutes earlier than the expiry Box reports, so they are refreshed
before Box invalidates them.
Fixes #7214
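As a sketch of the idea, assuming the token is a golang.org/x/oauth2 Token
(rclone's oauthutil wraps this differently):

    package sketch

    import (
        "time"

        "golang.org/x/oauth2"
    )

    // expireEarly treats the token as expiring 2 minutes before the time Box
    // reports, so the refresh happens while the old token is still accepted.
    // Sketch only - rclone's own token handling is structured differently.
    func expireEarly(t *oauth2.Token) {
        if !t.Expiry.IsZero() {
            t.Expiry = t.Expiry.Add(-2 * time.Minute)
        }
    }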
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:
https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
However when doing multipart uploads, the `tenant:` prefix was missing
from the `Bucket` in the `CreateMultipart` reply, and rclone was using
that value to build the `UploadPart` request. This caused the request
to fail with a 404 error. This may be a CEPH bug, but it is easy to
work around.
This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` when building `UploadPart`, rather than the ones
returned from `CreateMultipart`, which fixes the problem.
See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
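A minimal sketch of the workaround using aws-sdk-go-v2 types for
illustration (field types vary slightly between SDK releases, and rclone's
actual S3 backend is structured differently):

    package sketch

    import (
        "context"
        "io"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // uploadPart reuses the Bucket and Key we sent in CreateMultipartUpload
    // (the `tenant:bucket` form) instead of the values echoed back in the
    // response, which CEPH returns without the `tenant:` prefix.
    func uploadPart(
        ctx context.Context, client *s3.Client,
        in *s3.CreateMultipartUploadInput, out *s3.CreateMultipartUploadOutput,
        partNumber int32, body io.Reader,
    ) (*s3.UploadPartOutput, error) {
        return client.UploadPart(ctx, &s3.UploadPartInput{
            Bucket:     in.Bucket, // not out.Bucket
            Key:        in.Key,    // not out.Key
            UploadId:   out.UploadId,
            PartNumber: aws.Int32(partNumber),
            Body:       body,
        })
    }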
Before this change, when writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the target of
the link rather than the link itself.
This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.
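A rough sketch of the link-safe calls on Linux, assuming
golang.org/x/sys/unix (the real local backend code also parses the
metadata and handles other platforms):

    //go:build linux

    package sketch

    import (
        "errors"
        "os"

        "golang.org/x/sys/unix"
    )

    // setLinkMetadata applies ownership and permissions to the link itself
    // rather than its target. Sketch only.
    func setLinkMetadata(path string, uid, gid int, mode os.FileMode) error {
        // Lchown changes the owner of the symlink, not its target.
        if err := os.Lchown(path, uid, gid); err != nil {
            return err
        }
        // Fchmodat with AT_SYMLINK_NOFOLLOW would set the link's permissions,
        // but Linux does not support that and returns EOPNOTSUPP, which is
        // where rclone emits its debug message instead of failing.
        err := unix.Fchmodat(unix.AT_FDCWD, path, uint32(mode.Perm()), unix.AT_SYMLINK_NOFOLLOW)
        if errors.Is(err, unix.EOPNOTSUPP) {
            return nil // rclone logs a debug message here
        }
        return err
    }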
This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and which for btime was incorrectly
setting the time on the target of the symlink.
See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
Before this change, if rclone was used as a library and logrus was used
after a call to the rc `sync/bisync`, logging stopped working and
attempts to log wrote to a closed pipe.
This change restores the log output correctly.
Fixes #8158
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.
This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
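A sketch of the effect, with made-up names (the real quirk handling lives
in rclone's S3 backend setup):

    package sketch

    import "math"

    // Illustration only. With the copy cutoff raised to the maximum, the
    // size check below never selects the multipart path for Storj.
    func useMultipartCopy(size, copyCutoff int64, provider string) bool {
        if provider == "Storj" {
            // Storj has no UploadPartCopy, so never do multipart copies.
            copyCutoff = math.MaxInt64
        }
        return size > copyCutoff
    }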
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.
This commit introduces a new function that can correctly unwrap object
info from both regular objects and `fs.OverrideRemote` types, ensuring
accurate cid/gcid calculations for uploads in all scenarios.
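A hedged sketch of what such an unwrapping helper can look like, assuming
the fs.UnWrapObject helper and fs.ObjectUnWrapper interface (the function
actually added may differ):

    package sketch

    import "github.com/rclone/rclone/fs"

    // unwrapObjectInfo returns the underlying Object whether src is a plain
    // Object or something (like fs.OverrideRemote) that wraps one.
    func unwrapObjectInfo(src fs.ObjectInfo) fs.Object {
        if o, ok := src.(fs.Object); ok {
            return fs.UnWrapObject(o)
        }
        if u, ok := src.(fs.ObjectUnWrapper); ok {
            return u.UnWrap()
        }
        return nil
    }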
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.
After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.
Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). Symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.
See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
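A rough sketch of the distinction on macOS using Clonefile from
golang.org/x/sys/unix (the real local backend also handles errors and
fallbacks):

    //go:build darwin

    package sketch

    import "golang.org/x/sys/unix"

    // clone copies src to dst with an APFS clone. With --copy-links we
    // follow symlinks, so the file being linked to is cloned; with --links
    // we pass CLONE_NOFOLLOW (symlink translation is handled separately by
    // rclone before this point). Sketch only.
    func clone(src, dst string, followSymlinks bool) error {
        flags := 0
        if !followSymlinks {
            flags = unix.CLONE_NOFOLLOW
        }
        return unix.Clonefile(src, dst, flags)
    }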
This addresses the login issue caused by pikpak's recent cancellation
of existing login methods and its requirement for additional
verification. To resolve this, we've made the following changes:
1. Similar to lib/oauthutil, we've integrated a mechanism to handle
captcha tokens.
2. A new pikpakClient has been introduced to wrap the existing
rest.Client and incorporate the necessary headers, including
x-captcha-token, for each request (a simplified sketch follows below).
3. Several options have been added/removed to support persistent
user/client identification.
* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid.
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed
by rclone, similar to the OAuth token.
Fixes #7950 #8005
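A simplified sketch of the header injection from item 2, using a plain
http.RoundTripper instead of rclone's rest.Client:

    package sketch

    import "net/http"

    // captchaTransport adds the x-captcha-token header to every request.
    // Simplified - the real pikpakClient wraps rclone's rest.Client and
    // also refreshes the captcha token when it expires.
    type captchaTransport struct {
        base  http.RoundTripper
        token func() string // returns the currently valid captcha token
    }

    func (t *captchaTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        req = req.Clone(req.Context())
        req.Header.Set("x-captcha-token", t.token())
        return t.base.RoundTrip(req)
    }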
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.
However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.
This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
Fixes #8067
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:
    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value
This was caused by the parser trying to parse the input string as a
JSON value.
When the config was re-organised it was thought that the internal
representation of stringArray values was not important, as it was never
visible externally; however, this turned out not to be true.
This patch chooses a defined representation - a comma separated string -
documents it and introduces tests for it.
This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

would be interpreted as

    --exclude "a,b"

whereas this new code will interpret it as

    --exclude "a" --exclude "b"
The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.
If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

Note that this needs to have the quotes in it, so at the unix shell that
would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
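A sketch of how the comma separated representation can be split with CSV
escaping respected (illustrative only - rclone's option handling does the
real parsing):

    package main

    import (
        "encoding/csv"
        "fmt"
        "strings"
    )

    // splitStringArray splits the environment variable value into the
    // individual values, honouring CSV quoting so `"a,b"` stays one value.
    func splitStringArray(s string) ([]string, error) {
        return csv.NewReader(strings.NewReader(s)).Read()
    }

    func main() {
        a, _ := splitStringArray(`a,b`)   // [a b] -> --exclude a --exclude b
        b, _ := splitStringArray(`"a,b"`) // [a,b] -> --exclude a,b
        fmt.Println(a, b)
    }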
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance of whether --dump filters would
show the filters or not, as it depended on the "main" block having been
read first to set the Dump flags.
This initialises the options blocks in a defined order - alphabetical,
but with "main" first - which fixes the problem.
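A sketch of the ordering (simplified, with a made-up helper name):

    package sketch

    import "sort"

    // orderedBlocks returns option block names alphabetically, but with
    // "main" first, so the flags in "main" (such as --dump) are set before
    // the other blocks are initialised.
    func orderedBlocks(names []string) []string {
        sort.Slice(names, func(i, j int) bool {
            if names[i] == "main" {
                return true
            }
            if names[j] == "main" {
                return false
            }
            return names[i] < names[j]
        })
        return names
    }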
The upload routine no longer returns a URL to download the object.
This fixes the problem by fetching one if necessary when we attempt to
Open the object.
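A hedged sketch of the lazy fetch with hypothetical names and types (the
real backend code differs):

    package sketch

    import (
        "context"
        "io"
        "net/http"
    )

    // Object is a hypothetical stand-in for the backend's object type.
    type Object struct {
        id          string
        downloadURL string
        fetchURL    func(ctx context.Context, id string) (string, error)
    }

    // Open fetches a download URL lazily if the upload did not return one.
    func (o *Object) Open(ctx context.Context) (io.ReadCloser, error) {
        if o.downloadURL == "" {
            u, err := o.fetchURL(ctx, o.id)
            if err != nil {
                return nil, err
            }
            o.downloadURL = u
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, o.downloadURL, nil)
        if err != nil {
            return nil, err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        return resp.Body, nil
    }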
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).
However, we know the parent ID as it is in the directory cache, so we
use that instead.
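A sketch of looking the parent up in the directory cache, assuming
rclone's lib/dircache FindDir helper (illustrative only):

    package sketch

    import (
        "context"
        "path"

        "github.com/rclone/rclone/lib/dircache"
    )

    // parentID looks the parent directory up in the directory cache rather
    // than trusting the (possibly stale) ID stored on the Object.
    func parentID(ctx context.Context, dc *dircache.DirCache, remote string) (string, error) {
        dir := path.Dir(remote)
        if dir == "." {
            dir = ""
        }
        return dc.FindDir(ctx, dir, false)
    }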
This gives the error:
> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4
This is because Linkbox can't have files starting with "." and we are
trying to save a file called ".ignore".
The server side move had a combination of bugs:
- Fichier changed the API, disallowing a move to the same name
- Rclone was using the wrong object for some operations
The original problem was introduced here:
    bcdfad3c83 build: update logging statements to make json log work #6038
and this was fixed non-optimally here:
    f1a84d171e build: fix build after update