Currently rclone allows us to specify the path to a public ssh
certificate file.
That works great for cases where we can specify a key path, such as local
environments.
If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone),
there is currently a limitation that they can specify only the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
Disabling the authentication for unix sockets makes it impossible to
use `rclone serve` behind a proxy that communicates with rclone
via a unix socket.
Re-enabling the authentication should not have any effect on most
users of unix sockets, as they do not set up authentication with a unix
socket anyway.
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server-side copies.
This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
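For illustration, a minimal Go sketch of the kind of provider quirk this amounts to; the `Options` struct and field names below are stand-ins, not the exact rclone backend code:

```go
package main

import (
	"fmt"
	"math"
)

// Options is a stand-in for the relevant S3 backend settings.
type Options struct {
	Provider   string
	CopyCutoff int64 // bytes; server-side copies above this size use multi-part copy
}

// applyStorjQuirk raises the copy cutoff to the maximum so a multi-part
// server-side copy is never attempted for Storj, which returns
// NotImplemented for UploadPartCopy.
func applyStorjQuirk(opt *Options) {
	if opt.Provider == "Storj" {
		opt.CopyCutoff = math.MaxInt64
	}
}

func main() {
	opt := Options{Provider: "Storj", CopyCutoff: 5 << 30} // illustrative starting cutoff
	applyStorjQuirk(&opt)
	fmt.Println(opt.CopyCutoff)
}
```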
Before this change, testing any backend which implemented the OpenChunkWriter
gave this error:
ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload
This was due to the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
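For context, here is a self-contained sketch of the backend-side pattern that produces that message; the types below are simplified stand-ins for fs.OpenOption and the real backend upload code, not rclone's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// openOption is a simplified stand-in for fs.OpenOption: options may render
// themselves as an HTTP header via Header().
type openOption interface {
	Header() (key, value string)
	Mandatory() bool
}

// chunkOption mimics the buggy behaviour: it rendered a "chunkSize" header
// key that no backend understands.
type chunkOption struct{ size int64 }

func (o chunkOption) Header() (string, string) { return "chunkSize", fmt.Sprint(o.size) }
func (o chunkOption) Mandatory() bool          { return true }

// applyUploadOptions sketches the backend-side loop: any mandatory option
// whose header key the backend doesn't recognise triggers the error above.
func applyUploadOptions(headers map[string]string, options []openOption) {
	for _, option := range options {
		key, value := option.Header()
		switch strings.ToLower(key) {
		case "":
			// not a header option - ignore
		case "cache-control", "content-type":
			headers[key] = value
		default:
			if option.Mandatory() {
				fmt.Printf("ERROR : Don't know how to set key %q on upload\n", key)
			}
		}
	}
}

func main() {
	applyUploadOptions(map[string]string{}, []openOption{chunkOption{size: 64 << 20}})
}
```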
Currently input options are retrieved from the event payload, via
`github.event.inputs`, and that still works, but boolean values are represented
as strings there. In the dedicated `inputs` context the boolean types are
preserved, which means conditional expressions can be simplified.
fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err), which incremented the error count at the
global stats level. This led to calls to core/stats with a group= filter
returning an error count of 0 even if errors actually occurred.
This change requires the context to be provided when calling
fs.CountError. Doing so, we can retrieve the correct StatsInfo to
increment the errors on.
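A sketch of the new calling convention at a typical call site (`operation` is just a placeholder for any call that can fail):

```go
// Before: fs.CountError(err) always incremented the global stats.
// After: the context identifies the stats group whose StatsInfo is incremented.
func doWork(ctx context.Context) error {
	if err := operation(ctx); err != nil {
		return fs.CountError(ctx, err)
	}
	return nil
}
```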
Fixes #5865
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.
The precision was set to 1ms as part of:
1473de3f04 onedrive: add metadata support
which was released in v1.66.0.
However, it appears that not all OneDrive personal accounts support 1ms time
precision and that Microsoft may be migrating accounts away from this
to backends which only support 1s precision.
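A hedged sketch of what the coarser precision looks like in the backend, assuming the existing driveType field and driveTypePersonal constant; the actual code may differ:

```go
// Precision advertises coarser (1s) mod time precision for personal
// accounts, and keeps 1ms for other drive types.
func (f *Fs) Precision() time.Duration {
	if f.driveType == driveTypePersonal {
		return time.Second
	}
	return time.Millisecond
}
```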
Fixes #8101
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.
Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
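A hedged sketch of the tolerant check (not the exact test code); `obj.Hash` is the standard fs.Object method:

```go
// A blank hash from a backend that nominally supports the hash type is
// treated as "no hash to compare", not as a failure.
sum, err := obj.Hash(ctx, ht)
if err != nil {
	return err
}
if sum == "" {
	// backend supports ht but has no hash for this object - skip the check
	return nil
}
// ...otherwise compare sum as usual...
```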
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.
This change fixes the error by copying to a temporary name first, then moving it
to the real name.
There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.
This should (hopefully) fix 8 bisync integration tests.
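Roughly, the workaround looks like this; `copyTo`, `moveTo` and the temporary name scheme are illustrative placeholders, not the Box backend's actual helpers:

```go
// Copy src to a temporary name first, then move the temporary over the
// existing destination, which the move endpoint allows.
tmpName := leaf + ".rclone-tmp"                       // hypothetical temporary name
tmpObj, err := copyTo(ctx, src, directoryID, tmpName) // server-side copy to temp
if err != nil {
	return nil, err
}
return moveTo(ctx, tmpObj, directoryID, leaf) // overwrites the existing dst
```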
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.
An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.
This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
(preserving the fix from 8d5bc7f28b).
Note that f.Root() should always point to the parent dir as of c69eb84573.
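A hedged illustration of the intended behaviour using the fs/cache package:

```go
// Looking up a file path returns the parent directory's Fs plus
// fs.ErrorIsFile, and pinning protects the single shared cache entry from
// --fs-cache-expire-duration while it is in use.
f, err := cache.Get(ctx, "hasher:somelargefile.zip")
if err != nil && err != fs.ErrorIsFile {
	return err
}
// err == fs.ErrorIsFile here: f is the Fs for the parent directory
cache.Pin(f)         // keep the one cached entry alive while we use it
defer cache.Unpin(f) // allow it to expire again afterwards
```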