This ensures no Content-MD5 headers are sent and that ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether uploaded in a single part or as a multipart upload.
This also sets "no_check_bucket = true".
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
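As a rough sketch of the upload-side effect of this option (made-up types and a hypothetical directoryBuckets flag, not the real backend code):

```go
// Illustrative only: with directory buckets the Content-MD5 header is
// dropped and the checksum is carried as user metadata instead, since the
// returned ETag is not an MD5 sum.
package main

import "fmt"

type uploadInput struct {
	ContentMD5 string            // base64 MD5, normally sent as a header
	Metadata   map[string]string // user metadata stored with the object
}

func prepareUpload(directoryBuckets bool, md5b64 string) uploadInput {
	in := uploadInput{Metadata: map[string]string{}}
	if directoryBuckets {
		in.Metadata["md5chksum"] = md5b64 // sent as X-Amz-Meta-Md5chksum
	} else {
		in.ContentMD5 = md5b64
	}
	return in
}

func main() {
	fmt.Printf("%+v\n", prepareUpload(true, "1B2M2Y8AsgTpgAmY7PhCfg=="))
}
```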
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call and tidies up a couple of related functions which had unused
parameters.
Related to performance issues reported in #7322 and #7413
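As a standalone illustration of the change (hypothetical types and helpers, not the real backend code):

```go
// Hypothetical, simplified version of the pattern: the listing already
// returns a link per entry, so building the object from that link avoids
// a second round trip to the bridge via getObjectLink().
package main

import "fmt"

type link struct{ ID, Name string }

// listLinks stands in for the bridge call that lists a directory.
func listLinks(dir string) []link {
	return []link{{ID: "abc123", Name: "file.txt"}}
}

// getObjectLink stands in for the per-path lookup the old code repeated.
func getObjectLink(remote string) link {
	fmt.Println("extra bridge round trip for", remote)
	return link{ID: "abc123", Name: "file.txt"}
}

func main() {
	dir := "photos"
	for _, l := range listLinks(dir) {
		remote := dir + "/" + l.Name
		// Before: l = getObjectLink(remote)  // redundant second lookup
		// After: use the link from the listing directly.
		fmt.Printf("object %q built from link %s without a second lookup\n", remote, l.ID)
	}
}
```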
This addresses the login issue caused by pikpak's recent discontinuation
of existing login methods and its requirement for additional verification.
To resolve this, we've made the following changes:
1. Similar to lib/oauthutil, we've integrated a mechanism to handle
captcha tokens.
2. A new pikpakClient has been introduced to wrap the existing
rest.Client and incorporate the necessary headers, including
x-captcha-token, for each request (see the sketch below).
3. Several options have been added/removed to support persistent
user/client identification.
* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid.
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed
by rclone, similar to the OAuth token.
Fixes #7950 #8005
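The header handling in point 2 boils down to something like this (a standalone sketch using net/http; the real pikpakClient wraps rclone's rest.Client):

```go
// Standalone sketch of the idea: every outgoing request gets the current
// x-captcha-token header attached.
package main

import "net/http"

// captchaTransport injects the captcha token into each request.
type captchaTransport struct {
	base  http.RoundTripper
	token func() string // returns the currently valid captcha token
}

func (t *captchaTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	r.Header.Set("x-captcha-token", t.token())
	return t.base.RoundTrip(r)
}

func main() {
	client := &http.Client{Transport: &captchaTransport{
		base:  http.DefaultTransport,
		token: func() string { return "example-captcha-token" },
	}}
	_ = client // use client for all pikpak API calls
}
```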
When uploading chunked files to nextcloud, the server gives a 423 error
while it is merging the files.
This waits for an exponentially increasing amount of time for it to
clear.
If, after we have received a 423 error, we receive a 404 error then we
assume all is good, as this is what appears to happen in practice.
Fixes #7109
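The retry logic amounts to something like this (hypothetical helper, not the actual backend code):

```go
// Back off exponentially while the merge returns 423 Locked, and treat a
// 404 that follows a 423 as the merge having completed.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForMerge polls check() until the chunk merge has finished.
func waitForMerge(check func() (statusCode int, err error)) error {
	const maxTries = 10
	delay := 100 * time.Millisecond
	seen423 := false
	for try := 0; try < maxTries; try++ {
		code, err := check()
		if err != nil {
			return err
		}
		switch {
		case code == http.StatusLocked: // 423 - still merging
			seen423 = true
			time.Sleep(delay)
			delay *= 2 // exponential backoff
		case code == http.StatusNotFound && seen423:
			return nil // in practice a 404 after a 423 means success
		default:
			return nil
		}
	}
	return fmt.Errorf("merge still locked after %d tries", maxTries)
}

func main() {
	codes := []int{423, 423, 404}
	i := 0
	err := waitForMerge(func() (int, error) { c := codes[i]; i++; return c, nil })
	fmt.Println("result:", err)
}
```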
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.
However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.
This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
Fixes #8067
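The rule being restored is simple (a simplified sketch, not the real config plumbing):

```go
// env_auth only applies if access_key_id and secret_access_key are blank;
// if both are supplied, the static credentials win.
package main

import "fmt"

type Options struct {
	EnvAuth         bool
	AccessKeyID     string
	SecretAccessKey string
}

// useEnvAuth reports whether credentials should come from the environment.
func useEnvAuth(opt Options) bool {
	return opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == ""
}

func main() {
	fmt.Println(useEnvAuth(Options{EnvAuth: true}))                                       // true - use environment
	fmt.Println(useEnvAuth(Options{EnvAuth: true, AccessKeyID: "XXX", SecretAccessKey: "YYY"})) // false - use static keys
}
```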
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:
    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value
This was caused by the parser trying to parse the input string as a
JSON value.
When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally. However, this turned out not to be true.
A defined representation was therefore chosen (a comma separated
string), and this has been documented and tests have been introduced in
this patch.
This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

would be interpreted as

    --exclude "a,b"

whereas this new code will interpret it as

    --exclude "a" --exclude "b"
The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.
If a value with a `,` in it is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

Note that the quotes need to be part of the value, so at the unix shell
that would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
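A simplified sketch of how such a comma separated value can be parsed (illustrative only, not the actual implementation):

```go
// The environment value is treated as a single CSV record, so commas
// separate items and CSV quoting escapes a literal comma.
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func parseStringArray(s string) ([]string, error) {
	if s == "" {
		return nil, nil
	}
	return csv.NewReader(strings.NewReader(s)).Read()
}

func main() {
	a, _ := parseStringArray(`a,b`)   // two values: --exclude "a" --exclude "b"
	b, _ := parseStringArray(`"a,b"`) // one value:  --exclude "a,b"
	fmt.Println(a, b)
}
```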
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance of whether --dump filters would
show the filters or not, as it depended on the "main" block having been
read first to set the Dump flags.
This change initialises the options blocks in a defined order:
alphabetical, but with "main" first. This fixes the problem.
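The ordering rule amounts to something like this (illustrative sketch, not the actual registry code):

```go
// Sort the option block names alphabetically, but always put "main" first
// so its flags (such as the Dump settings) are read before the others.
package main

import (
	"fmt"
	"sort"
)

func orderedBlocks(names []string) []string {
	sort.Slice(names, func(i, j int) bool {
		if names[i] == "main" {
			return true
		}
		if names[j] == "main" {
			return false
		}
		return names[i] < names[j]
	})
	return names
}

func main() {
	fmt.Println(orderedBlocks([]string{"vfs", "filter", "main", "log"}))
	// Output: [main filter log vfs]
}
```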
The upload routine no longer returns a URL to download the object.
This fixes the problem by fetching it if necessary when we attempt to
Open the object.
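Schematically (hypothetical names and types, not the actual backend code):

```go
// If the upload did not record a download URL, fetch one the first time
// the object is opened.
package backend

import (
	"context"
	"io"
	"net/http"
)

type Fs struct{}

// fetchDownloadURL stands in for the API call that resolves a URL on demand.
func (f *Fs) fetchDownloadURL(ctx context.Context, remote string) (string, error) {
	return "https://example.com/download/" + remote, nil
}

type Object struct {
	fs          *Fs
	remote      string
	downloadURL string // may now be empty after an upload
}

func (o *Object) Open(ctx context.Context) (io.ReadCloser, error) {
	if o.downloadURL == "" {
		u, err := o.fs.fetchDownloadURL(ctx, o.remote)
		if err != nil {
			return nil, err
		}
		o.downloadURL = u
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, o.downloadURL, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	return resp.Body, nil
}
```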
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).
However we know the parent ID as it is in the directory cache, so use
that instead.
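Schematically (hypothetical types; the real code uses rclone's directory cache):

```go
// The directory cache already knows the ID of the parent directory, so
// prefer it over the possibly stale value stored on the Object.
package sketch

import "path"

// dirCache stands in for the directory cache lookup.
type dirCache interface {
	FindDir(dir string) (id string, err error)
}

type Object struct {
	remote   string
	parentID string // may have gone stale
}

func parentIDFor(o *Object, dc dirCache) string {
	if id, err := dc.FindDir(path.Dir(o.remote)); err == nil {
		return id // the directory cache is authoritative
	}
	return o.parentID // fall back to the stored (possibly stale) ID
}
```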
This gives the error
> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4
This is because Linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
The server side move had a combination of bugs:
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
The original problem was introduced here:

    bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here:

    f1a84d171e build: fix build after update
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.
Stub it on our side; we can remove the stub once the upstream fix gets merged.
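The stub follows the usual Go build-tag pattern, roughly like this (illustrative file and function names):

```go
// file: feature_notplan9.go - the real code, built everywhere except plan9
//go:build !plan9

package somelib

func feature() error {
	// real implementation, which does not compile on plan9
	return nil
}
```

```go
// file: feature_plan9.go - compiled only on plan9
//go:build plan9

package somelib

import "errors"

// feature is stubbed out on plan9 until the upstream fix is merged.
func feature() error {
	return errors.New("not supported on plan9")
}
```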
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed via
`--{{ .Prefix }}addr`).
This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:
```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
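For reference, picking up socket-activated listeners in Go typically looks something like this (a minimal standalone sketch using the github.com/coreos/go-systemd activation package, not necessarily the exact code path rclone uses):

```go
// If the service manager passed any FDs, serve on those; otherwise fall
// back to a configured listen address.
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatal(err)
	}
	if len(listeners) == 0 {
		// No FDs passed: listen on the configured address instead.
		l, err := net.Listen("tcp", "127.0.0.1:8080")
		if err != nil {
			log.Fatal(err)
		}
		listeners = []net.Listener{l}
	}
	for _, l := range listeners {
		go http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello\n"))
		}))
	}
	select {} // block forever
}
```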
This changes log statements from the log package to the fs package, which is
required for --use-json-log to properly output logs in JSON format. The
recently added custom linting rule, handled by ruleguard via gocritic via
golangci-lint, warns about these and suggests the alternative. Fixing was
therefore basically a matter of running "golangci-lint run --fix", although
some manual fixup, mainly of imports, was necessary after that.
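A simplified before/after of the kind of change this involves:

```go
// Plain stdlib logging bypasses rclone's log handling, so it can't be
// emitted as JSON; the fs package loggers go through it.
package example

import (
	stdlog "log"

	"github.com/rclone/rclone/fs"
)

func before(err error) {
	stdlog.Printf("failed to frobnicate: %v", err) // not JSON-aware
}

func after(err error) {
	fs.Errorf(nil, "failed to frobnicate: %v", err) // honours --use-json-log
}
```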