This addresses the login issue caused by PikPak's recent removal of the
existing login methods and its new requirement for additional verification.
To resolve this, we've made the following changes:
1. Similar to lib/oauthutil, we've integrated a mechanism to handle
captcha tokens.
2. A new pikpakClient has been introduced to wrap the existing
rest.Client and add the necessary headers, including
x-captcha-token, to each request (see the sketch after this list).
3. Several options have been added/removed to support persistent
user/client identification.
* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid.
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed
by rclone, similar to the OAuth token.
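A minimal sketch of such a wrapper, using illustrative names (pikpakClient,
captchaToken) rather than the actual implementation:

    // Wrap rclone's rest.Client so every call carries the captcha token.
    type pikpakClient struct {
        rst          *rest.Client
        captchaToken string
    }

    // CallJSON injects the x-captcha-token header before delegating.
    func (c *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request, response interface{}) (*http.Response, error) {
        if opts.ExtraHeaders == nil {
            opts.ExtraHeaders = map[string]string{}
        }
        opts.ExtraHeaders["x-captcha-token"] = c.captchaToken
        return c.rst.CallJSON(ctx, opts, request, response)
    }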
Fixes #7950 #8005
When uploading chunked files to Nextcloud, the server returns a 423
(Locked) error while it is merging the chunks.
This change waits for an exponentially increasing amount of time for the
error to clear.
If, after we have received a 423 error, we receive a 404 error, then we
assume all is good, as this is what appears to happen in practice.
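A sketch of that retry logic, with an illustrative waitForMerge helper
(not the actual code; assumes "context", "errors", "net/http" and "time"):

    // Poll until the 423 (Locked) status clears, backing off exponentially.
    // A 404 seen after a 423 is treated as the merge having completed.
    func waitForMerge(ctx context.Context, status func(context.Context) (int, error)) error {
        delay := 100 * time.Millisecond
        sawLocked := false
        for try := 0; try < 10; try++ {
            code, err := status(ctx)
            if err != nil {
                return err
            }
            switch {
            case code == http.StatusLocked: // 423 - still merging
                sawLocked = true
                time.Sleep(delay)
                delay *= 2
            case code == http.StatusNotFound && sawLocked: // 404 after 423
                return nil
            default:
                return nil
            }
        }
        return errors.New("timed out waiting for chunk merge")
    }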
Fixes #7109
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.
However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.
This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
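A sketch of the intended precedence using the aws-sdk-go-v2 credentials and
config packages (the surrounding option struct and names are illustrative):

    // Static credentials win even when env_auth=true; only fall back to
    // the SDK's default chain when both keys are blank.
    if opt.AccessKeyID != "" && opt.SecretAccessKey != "" {
        provider = credentials.NewStaticCredentialsProvider(opt.AccessKeyID, opt.SecretAccessKey, opt.SessionToken)
    } else if opt.EnvAuth {
        cfg, err := config.LoadDefaultConfig(ctx) // env vars, shared config, IAM roles...
        if err != nil {
            return err
        }
        provider = cfg.Credentials
    }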
Fixes #8067
The upload routine no longer returns a URL to download the object.
This fixes the problem by fetching it if necessary when we attempt to
Open the object.
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).
However we know the parent ID as it is in the directory cache, so use
that instead.
The server-side move had a combination of bugs:
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
This changes log statements from the log package to the fs package, which is
required for --use-json-log to properly output logs in JSON format. The recently
added custom linting rule, handled by ruleguard via gocritic via golangci-lint,
warns about these and suggests the alternative. Fixing was therefore basically a
matter of running "golangci-lint run --fix", although some manual fixing up,
mainly of imports, was necessary after that.
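A typical instance of the change (the message is just an example):

    // before: goes through the standard library logger, bypassing --use-json-log
    log.Printf("Failed to save config: %v", err)

    // after: routed through the fs logging layer
    fs.Logf(nil, "Failed to save config: %v", err)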
With the enhancement in v2.0.3 of the ncw/swift library, we can now get Total and Free space info from remotes that support this feature (e.g. Blomp storage).
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.
Reference: https://go.dev/ref/spec#Min_and_max
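For example, with Go 1.21+:

    // no local helper needed any more
    smallest := min(3, 5, 1)   // 1
    largest := max(2.5, 7.1)   // 7.1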
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
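For example, the documented equivalent today (the new flag's name is not
spelled out in this note) would be:

    rclone copyto --disable Copy /src/big.file /dst/big.file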
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.
After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:
- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.
When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
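The primitive behind this is macOS's clonefile(2); a rough sketch of
clone-with-fallback (illustrative only, assuming golang.org/x/sys/unix, "io"
and "os" on darwin):

    func cloneOrCopy(src, dst string) error {
        err := unix.Clonefile(src, dst, 0)
        if err == nil || (err != unix.ENOTSUP && err != unix.EXDEV) {
            return err // cloned, or failed for a reason a plain copy won't fix
        }
        // fall back to a normal ("deep") copy, e.g. on non-APFS volumes
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }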
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead of connecting to the specified URL.
The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.
This allows using rclone with the webdav backend to connect to a WebDAV
server provided on a Unix domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
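Under the hood this amounts to dialling the socket from the HTTP transport
while the URL keeps supplying the Host header and request path; roughly:

    // Route every connection for this client through the unix socket.
    tr := &http.Transport{
        DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
            var d net.Dialer
            return d.DialContext(ctx, "unix", "/tmp/my.socket")
        },
    }
    client := &http.Client{Transport: tr}
    resp, err := client.Get("http://localhost/some/subpath/")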
This converts the ChunkedReader into an interface and provides two
implementations, one sequential and one parallel.
This can be used to improve the performance of the VFS on high
bandwidth or high latency links.
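A rough shape of the split (the exact method set here is an assumption, not
the real interface):

    // ChunkedReader reads an object in chunks; the sequential and the
    // parallel implementation both satisfy this interface.
    type ChunkedReader interface {
        io.ReadCloser
        io.Seeker
        // RangeSeek is like Seek but also hints how much will be read next.
        RangeSeek(ctx context.Context, offset int64, whence int, length int64) (int64, error)
    }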
Fixes #4760
There were a lot of instances of this lint error:

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

These were fixed by re-arranging the arguments and adding "%s".
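For example:

    // before: non-constant format string (and a bug if msg contains a '%')
    fs.Logf(nil, msg)

    // after
    fs.Logf(nil, "%s", msg)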
There were quite a few genuine bugs which were found too.
When copying Google Docs to Backblaze B2 errors like this would happen:

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src
This was due to an oversight in 8fd66daab6 "drive: add support of SHA-1 and
SHA-256 checksum", which omitted to change the base object (which includes
Google Docs) so that it supported SHA-1 and SHA-256.