fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err), which incremented the error count at the
global stats level. This meant that calls to core/stats with a group=
filter returned an error count of 0 even if errors had actually occurred.
This change requires a context to be provided when calling fs.CountError.
With the context we can retrieve the correct StatsInfo and increment its
error count, as in the sketch below.
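For illustration, a minimal sketch of how a typical call site changes
(the exact call sites vary across the codebase):

```go
// Before this change: the error was only ever counted against the
// global stats.
err = fs.CountError(err)

// After this change: the context is used to look up the correct
// StatsInfo (for example the stats group set on the context) before
// counting the error.
err = fs.CountError(ctx, err)
```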
Fixes #5865
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.
The precision was set to 1ms as part of:
1473de3f04 onedrive: add metadata support
which was released in v1.66.0.
However it appears that not all OneDrive personal accounts support 1ms
time precision, and Microsoft may be migrating accounts away from this to
backends which only support 1s precision.
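For context, a minimal sketch of what the advertised precision refers to,
assuming the usual Precision method on the backend's Fs (the real
onedrive code may choose the value per account or drive type):

```go
// Precision reports the modification time precision this remote
// supports. After this change it is one second for OneDrive personal
// accounts rather than one millisecond.
func (f *Fs) Precision() time.Duration {
	return time.Second
}
```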
Fixes #8101
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.
Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
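A minimal sketch of the kind of tolerance now applied, using a
hypothetical helper rather than the actual bisync test code: a blank hash
is treated as "unknown" rather than as a mismatch or an error.

```go
// checkHashes compares two hashes, treating a blank hash as "unknown"
// rather than as an error or a mismatch. Hypothetical helper for
// illustration only.
func checkHashes(srcHash, dstHash string) (equal bool, known bool) {
	if srcHash == "" || dstHash == "" {
		return true, false // can't compare, but not an error
	}
	return srcHash == dstHash, true
}
```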
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.
This change fixes the error by copying to a temporary name first, then moving it
to the real name.
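A rough sketch of the workaround; the copy and move callbacks are
hypothetical stand-ins for the backend's actual API calls, and the
temporary suffix is purely illustrative.

```go
// copyViaTempName copies src to a temporary name in the destination
// directory, then moves the result onto the real name, avoiding the
// 409 "item_name_in_use" error from copying directly over an existing
// item.
func copyViaTempName(
	ctx context.Context,
	dstName string,
	copy func(ctx context.Context, name string) (id string, err error),
	move func(ctx context.Context, id, name string) error,
) error {
	tmpName := dstName + ".rclone-copy-tmp" // illustrative suffix
	id, err := copy(ctx, tmpName)
	if err != nil {
		return err
	}
	return move(ctx, id, dstName)
}
```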
There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.
This should (hopefully) fix 8 bisync integration tests.
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.
An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.
This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
(preserving the fix from 8d5bc7f28b).
Note that f.Root() should always point to the parent dir as of c69eb84573.
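An illustrative sketch (not the actual fs/cache code) of the idea behind
points 1-3: whether fsString names the file or its parent directory, it
resolves to the same single canonical cache key, and fs.ErrorIsFile tells
the caller which case it was.

```go
// canonicalKey shows the idea only: both "remote:dir" and
// "remote:dir/file.txt" end up keyed by the parent directory, so the
// cache holds a single entry which pinning then protects for both
// callers.
func canonicalKey(ctx context.Context, fsString string) (key string, isFile bool, err error) {
	f, err := fs.NewFs(ctx, fsString)
	if err == fs.ErrorIsFile {
		// fsString named a file; f.Root() is its parent directory.
		return fs.ConfigString(f), true, nil
	}
	if err != nil {
		return "", false, err
	}
	return fs.ConfigString(f), false, nil
}
```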
This commit makes the `commanddocs` and `backenddocs` recipes fail if
they accidentally create a directory named '$HOME'. This is basically a
regression test for issue #8092.
It also makes those recipes rmdir the '$HOME/.config/rclone/'
directories. This will only delete empty directories, so nothing of
value should ever be deleted.
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.
This commit introduces a new function that can correctly unwrap
object info from both regular objects and `fs.OverrideRemote` types,
ensuring that uploads use accurate cid/gcid calculations in all
scenarios.
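A hedged sketch of the unwrapping idea; the interface assertion for
UnWrap is an assumption about how the wrapper exposes its inner
ObjectInfo, and the real helper may be named and located differently.

```go
// unwrapObjectInfo tries to get at the underlying fs.Object behind an
// fs.ObjectInfo, which may be the object itself or a wrapper such as
// fs.OverrideRemote.
func unwrapObjectInfo(oi fs.ObjectInfo) fs.Object {
	if o, ok := oi.(fs.Object); ok {
		return o
	}
	// Assumed unwrap interface for wrappers like fs.OverrideRemote.
	if u, ok := oi.(interface{ UnWrap() fs.ObjectInfo }); ok {
		if o, ok := u.UnWrap().(fs.Object); ok {
			return o
		}
	}
	return nil // no underlying fs.Object available
}
```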
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.
After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.
Cloning is now also supported in --links mode for regular files (which
benefit most from cloning). Symlinks in --links mode are still handed off
to rclone's special translation logic.
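A rough sketch of the decision, assuming the darwin-only unix.Clonefile
wrapper from golang.org/x/sys/unix and hypothetical parameter names (the
real local backend logic has more cases):

```go
// cloneFile clones src to dst on macOS. With --copy-links a symlink is
// resolved first so the target file is cloned; in --links mode symlinks
// are not cloned at all and are left to rclone's translation logic.
func cloneFile(src, dst string, copyLinks, isSymlink bool) (handled bool, err error) {
	if isSymlink {
		if !copyLinks {
			// --links: leave the symlink to the existing translation code.
			return false, nil
		}
		// --copy-links: clone the file being linked to, not the link.
		if src, err = filepath.EvalSymlinks(src); err != nil {
			return false, err
		}
	}
	return true, unix.Clonefile(src, dst, 0)
}
```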
See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
This ensures that no Content-Md5 headers are sent and that ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects,
whether uploaded in a single part or via multipart upload.
This also sets "no_check_bucket = true".
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that redundant call
and tidies up a couple of surrounding functions that had unused
parameters.
Related to performance issues reported in #7322 and #7413
This addresses the login issue caused by pikpak recently discontinuing
its existing login methods and requiring additional verification.
To resolve this, we've made the following changes:
1. Similar to lib/oauthutil, we've integrated a mechanism to handle
captcha tokens.
2. A new pikpakClient has been introduced to wrap the existing
rest.Client and add the necessary headers, including x-captcha-token,
to each request (see the sketch after this list).
3. Several options have been added/removed to support persistent
user/client identification.
* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid.
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed
by rclone, similar to the OAuth token.
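As a rough sketch of point 2, assuming rclone's lib/rest client and its
Opts.ExtraHeaders field; the real pikpakClient and its captcha token
handling differ in detail.

```go
// pikpakClient wraps rest.Client so that every request carries the
// x-captcha-token header. Token refresh and the other identification
// headers are omitted here.
type pikpakClient struct {
	client       *rest.Client
	captchaToken func(ctx context.Context) (string, error) // hypothetical provider
}

func (c *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request, response any) (*http.Response, error) {
	token, err := c.captchaToken(ctx)
	if err != nil {
		return nil, err
	}
	if opts.ExtraHeaders == nil {
		opts.ExtraHeaders = map[string]string{}
	}
	opts.ExtraHeaders["x-captcha-token"] = token
	return c.client.CallJSON(ctx, opts, request, response)
}
```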
Fixes #7950 #8005
When uploading chunked files to nextcloud, the server returns a 423
(Locked) error while it is merging the chunks.
This change waits for an exponentially increasing amount of time for the
error to clear.
If we receive a 404 error after having received a 423, we assume the
merge succeeded, as this is what appears to happen in practice.
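A simplified sketch of the retry logic; mergeStatus is a hypothetical
callback standing in for however the backend actually probes the merge
result.

```go
// waitForMerge backs off exponentially while the server answers 423
// (Locked), and treats a 404 seen after a 423 as success.
func waitForMerge(ctx context.Context, mergeStatus func(ctx context.Context) (int, error)) error {
	const maxTries = 10
	sleep := 100 * time.Millisecond
	seen423 := false
	for try := 0; try < maxTries; try++ {
		status, err := mergeStatus(ctx)
		if err != nil {
			return err
		}
		switch {
		case status == http.StatusLocked: // 423: merge still in progress
			seen423 = true
		case status == http.StatusNotFound && seen423:
			return nil // 404 after a 423 appears to mean the merge finished
		default:
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(sleep):
		}
		sleep *= 2 // exponential backoff
	}
	return errors.New("timed out waiting for nextcloud to merge chunks")
}
```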
Fixes #7109