fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err), which incremented the error count at the
global stats level. This led to calls to core/stats with a group= filter
returning an error count of 0 even if errors actually occurred.
This change requires the context to be provided when calling
fs.CountError. Doing so lets us retrieve the correct StatsInfo to
increment the error count on.
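A minimal sketch of the new shape (the wiring shown here is an assumption
about how the accounting package installs the hook, not verbatim code):

```go
// Sketch: the accounting package installs a CountError hook that
// resolves the stats group from ctx instead of using the global stats.
fs.CountError = func(ctx context.Context, err error) error {
	// accounting.Stats(ctx) returns the StatsInfo for the group
	// carried in ctx, falling back to the global stats if none is set.
	return accounting.Stats(ctx).Error(err)
}
```

Call sites then pass their context through, e.g. `return fs.CountError(ctx, err)`.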
Fixes #5865
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.
The precision was set to 1ms as part of:
1473de3f04 onedrive: add metadata support
which was released in v1.66.0.
However, it appears that not all OneDrive personal accounts support 1ms
time precision, and Microsoft may be migrating accounts away from this
to backends which only support 1s precision.
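A rough sketch of the resulting behaviour (where exactly the drive-type
check lives is an assumption):

```go
// Sketch: advertise coarser precision for personal drives only.
func (f *Fs) Precision() time.Duration {
	if f.driveType == driveTypePersonal { // personal accounts: 1s
		return time.Second
	}
	return time.Millisecond // business/SharePoint drives keep 1ms
}
```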
Fixes #8101
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.
Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
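Conceptually the test-side check now behaves like this sketch (not the
actual test code):

```go
// Sketch: a supported-but-blank hash is skipped rather than failed.
sum, err := obj.Hash(ctx, hashType)
if err != nil {
	return fmt.Errorf("hash failed: %w", err) // real errors still count
}
if sum == "" {
	// Backend supports hashType but produced no value for this file;
	// treat it as "no hash available", not as a test failure.
	return nil
}
```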
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.
This change fixes the error by copying to a temporary name first, then moving it
to the real name.
There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.
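The workaround is roughly the following (the helper names are
placeholders, not the backend's real functions):

```go
// Sketch: copy to a temporary name, then server-side move it over dst.
tmpName := dstName + "." + random.String(8) // random suffix to avoid clashes
if err := copyFile(ctx, srcID, tmpName); err != nil { // placeholder helper
	return err
}
return moveFile(ctx, tmpName, dstName) // placeholder helper; replaces dst
```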
This should (hopefully) fix 8 bisync integration tests.
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.
An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.
This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
(preserving the fix from 8d5bc7f28b).
Note that f.Root() should always point to the parent dir as of c69eb84573.
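Put together, the lookup flow is roughly this sketch (not the actual
cache code):

```go
// Sketch: a single entry keyed on the parent serves both spellings,
// and ErrorIsFile still tells the caller that fsString named a file.
f, err := fs.NewFs(ctx, fsString)
if err == fs.ErrorIsFile {
	// f.Root() is the parent dir, so the canonical config string is
	// the same whether we were given the file or its parent.
	c.Put(fs.ConfigString(f), f) // c is the Fs cache; one pinnable entry
	return f, fs.ErrorIsFile
}
```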
This commit makes the `commanddocs` and `backenddocs` recipes fail if
they accidentally create a directory named '$HOME'. This is basically a
regression test for issue #8092.
It also makes those recipes rmdir the '$HOME/.config/rclone/'
directories. This will only delete empty directories, so nothing of
value should ever be deleted.
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.
This commit introduces a new function that can correctly unwrap
object info from both regular objects and `fs.OverrideRemote` types,
ensuring accurate cid/gcid calculations for uploads in all scenarios.
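The new helper is along these lines (a sketch, assuming
`fs.OverrideRemote` exposes its wrapped value via `UnWrap()`):

```go
// Sketch: reach the underlying Object from either a plain Object or
// an fs.OverrideRemote wrapping one.
func unWrapObjectInfo(oi fs.ObjectInfo) fs.Object {
	if o, ok := oi.(fs.Object); ok {
		return fs.UnWrapObject(o) // also peels wrapping backends
	}
	if or, ok := oi.(*fs.OverrideRemote); ok {
		if o, ok := or.UnWrap().(fs.Object); ok {
			return fs.UnWrapObject(o)
		}
	}
	return nil // no underlying Object to hash
}
```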
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.
After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.
Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). Symlinks in --links mode continue to be handed off to
rclone's special translation logic.
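On macOS the clone goes through clonefile(2) via x/sys/unix; the gist of
the fix is roughly this (the flag plumbing is an assumption):

```go
// Sketch: follow the symlink under --copy-links so the target gets
// cloned; keep CLONE_NOFOLLOW otherwise so links stay links.
flags := unix.CLONE_NOFOLLOW
if copyLinks { // hypothetical stand-in for the -L/--copy-links setting
	flags = 0 // clonefile(2) follows symlinks by default
}
err := unix.Clonefile(srcPath, dstPath, flags)
```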
See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
This ensures that no Content-Md5 headers are sent and that ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether single or multipart uploaded.
This also sets "no_check_bucket = true".
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
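As a config sketch (option names here are assumptions; `no_check_bucket`
is shown explicitly even though the text above says it is set for you):

```
# Hypothetical rclone.conf snippet for an S3 directory bucket.
[s3dir]
type = s3
provider = AWS
directory_buckets = true
no_check_bucket = true
```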
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call, and tidies up a couple of functions around this with unused
parameters.
Related to performance issues reported in #7322 and #7413