Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.
This adds a new function, operations.Open, which should be called
instead of fs.Object.Open to open a reliable stream of data, and
changes all call sites to use it.
This means `rclone check --download` and `rclone cat` will re-open
files on failures.
See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
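The shape of the change, as a rough sketch (illustrative code, not
rclone's actual implementation; the Opener type, retry limit and
resume-at-offset scheme here are assumptions):

```go
package reopen

import (
	"fmt"
	"io"
)

// Opener opens the underlying object starting at the given byte offset.
// In rclone this would sit on top of fs.Object.Open with a range/seek
// option; here it is just an illustrative abstraction.
type Opener func(offset int64) (io.ReadCloser, error)

// reOpenReader resumes reading after errors by re-opening the stream at
// the current offset, up to maxTries attempts.
type reOpenReader struct {
	open     Opener
	rc       io.ReadCloser
	offset   int64
	maxTries int
}

// New opens the stream at offset 0 and wraps it in a re-opening reader.
func New(open Opener, maxTries int) (io.ReadCloser, error) {
	rc, err := open(0)
	if err != nil {
		return nil, err
	}
	return &reOpenReader{open: open, rc: rc, maxTries: maxTries}, nil
}

func (r *reOpenReader) Read(p []byte) (int, error) {
	for try := 0; ; try++ {
		n, err := r.rc.Read(p)
		r.offset += int64(n)
		if err == nil || err == io.EOF || try >= r.maxTries {
			return n, err
		}
		// Transient failure: drop the broken stream and resume from
		// the offset we have reached so far.
		_ = r.rc.Close()
		rc, openErr := r.open(r.offset)
		if openErr != nil {
			return n, fmt.Errorf("re-open failed: %w", openErr)
		}
		r.rc = rc
		if n > 0 {
			return n, nil // hand back what we have; retry on the next Read
		}
	}
}

func (r *reOpenReader) Close() error { return r.rc.Close() }
```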
Before this change we tested special errors for straight equality.
This works for all normal backends, but the union backend may return
wrapped errors which contain the special error types.
In particular, if a pcloud backend was part of a union, the
fs.ErrorCantSetModTime returned when setting modification times wasn't
recognised because it was wrapped in a union.Error.
This fixes the problem by using errors.Is instead in all the
comparisons in operations.
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
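The difference in one small example (the wrapping below is stand-in
code, not the union backend; errCantSetModTime stands in for rclone's
fs.ErrorCantSetModTime):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for a special error such as fs.ErrorCantSetModTime.
var errCantSetModTime = errors.New("can't set modified time")

func main() {
	// Stand-in for union.Error wrapping an upstream backend's error.
	wrapped := fmt.Errorf("union: pcloud: %w", errCantSetModTime)

	fmt.Println(wrapped == errCantSetModTime)          // false: equality misses the wrapped sentinel
	fmt.Println(errors.Is(wrapped, errCantSetModTime)) // true: errors.Is unwraps and matches
}
```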
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.
This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.
It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
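A sketch of the mechanism (a simplified stand-in for the union
backend's Errors type): from go1.20 errors.Is also follows an
Unwrap() []error method, so a slice-of-errors type only has to expose
its members.

```go
package main

import (
	"errors"
	"fmt"
)

// Errors is a simplified multi-error type like the union backend's.
type Errors []error

func (e Errors) Error() string {
	return fmt.Sprintf("%d errors", len(e))
}

// Unwrap lets errors.Is and errors.As (go1.20+) look inside the slice.
func (e Errors) Unwrap() []error {
	return e
}

var errSentinel = errors.New("sentinel")

func main() {
	errs := Errors{errors.New("other"), errSentinel}
	fmt.Println(errors.Is(errs, errSentinel)) // true, thanks to Unwrap
}
```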
Before this fix we used the bin/get-github-release.go script to
install nfpm.
However this script fails scraping the downloads page when the target
has more than a few download options. The alternative would be using
the GitHub API, but that needs authentication so as not to be rate
limited on GitHub Actions.
This patch switches over to `go install`, which is less efficient but
should work in all circumstances.
Before this change if the VFS took more than 5 minutes to initialise
(which can happen if there are a lot of files or a lot of files which
need uploading) the backend was dropped out of the cache before the
VFS was fully created.
This was noticeable in the dropbox backend where the batcher was shut
down too soon, preventing further uploads.
This fixes the problem by Pinning backends before the VFS cache is
created.
See: https://forum.rclone.org/t/if-more-than-251-elements-in-the-que-to-upload-fails-with-batcher-is-shutting-down/38076/2
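The shape of the fix, as a sketch (a generic ref-counted cache, not
rclone's fs/cache package): pinning bumps a reference count so expiry
cannot evict the entry while the VFS is still being built.

```go
package fscache

import (
	"sync"
	"time"
)

// entry is a cached backend with a pin count and a last-used time.
type entry struct {
	value    any
	pins     int
	lastUsed time.Time
}

// Cache is a ref-counted cache: pinned entries are never expired.
type Cache struct {
	mu      sync.Mutex
	entries map[string]*entry
	expiry  time.Duration
}

// Pin marks the entry as in use so expire will not remove it.
func (c *Cache) Pin(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[key]; ok {
		e.pins++
	}
}

// Unpin releases a pin taken with Pin.
func (c *Cache) Unpin(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[key]; ok && e.pins > 0 {
		e.pins--
		e.lastUsed = time.Now()
	}
}

// expire drops idle entries, skipping anything that is still pinned.
func (c *Cache) expire() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for key, e := range c.entries {
		if e.pins == 0 && time.Since(e.lastUsed) > c.expiry {
			delete(c.entries, key)
		}
	}
}
```

With this in place the VFS layer pins the backend before building its
state and unpins it on shutdown, so a slow initialisation can no longer
race the cache expiry.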
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
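A sketch of how the automatic case might be detected (the
Features/Fill shape and the Overlay field name below are illustrative
assumptions, not rclone's actual code):

```go
package overlay

// Fs is a trimmed-down backend interface.
type Fs interface {
	Name() string
}

// UnWrapper is implemented by backends which wrap a single other backend.
type UnWrapper interface {
	UnWrap() Fs
}

// Features holds backend capability flags. The Overlay field name is a
// hypothetical label for the flag described above.
type Features struct {
	Overlay bool // set for backends which overlay other backends
}

// Fill sets Overlay automatically when the backend implements UnWrap.
// Backends like combine and union, which overlay several backends and
// so can't implement UnWrap, would set the flag by hand instead.
func (ft *Features) Fill(f Fs) *Features {
	if _, ok := f.(UnWrapper); ok {
		ft.Overlay = true
	}
	return ft
}
```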
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.
Now we check the validity of a media link using the `fid`
parameter in the link URL.
Fixes #6992
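Roughly what that check amounts to (a sketch; the fid parameter name
comes from the text above, while the helper name, URL shape and
everything else are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// mediaLinkIsOriginal reports whether a media download link refers to
// the file with the given id, by comparing the fid query parameter in
// the link URL.
func mediaLinkIsOriginal(link, fileID string) bool {
	u, err := url.Parse(link)
	if err != nil {
		return false
	}
	return u.Query().Get("fid") == fileID
}

func main() {
	link := "https://example.com/download?fid=abc123&type=media"
	fmt.Println(mediaLinkIsOriginal(link, "abc123")) // true: safe to use this link
	fmt.Println(mediaLinkIsOriginal(link, "xyz789")) // false: fall back to the original file
}
```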
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.
Now the machinery is in place to use the NoMultiThreading feature flag
instead.
Fixes #6915
Some servers return a 501 error when using MLST on a non-existing
directory. This patch allows it.
I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
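The tolerant handling presumably amounts to treating 501 like the
usual 550 "not found" reply. A sketch (not the backend's actual error
translation), assuming the FTP client surfaces server replies as
*textproto.Error, as github.com/jlaffaye/ftp does:

```go
package ftperrors

import (
	"errors"
	"net/textproto"
)

var errDirNotFound = errors.New("directory not found")

// translateMLSTError maps server replies from MLST on a missing
// directory to a not-found error. 550 is the usual reply, but some
// servers answer 501 instead, so both are accepted.
func translateMLSTError(err error) error {
	var reply *textproto.Error
	if errors.As(err, &reply) {
		switch reply.Code {
		case 550, 501:
			return errDirNotFound
		}
	}
	return err
}
```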
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.
This fixes the problem in the upstream fork.
Fixes #6879
When copying to a backend which has the PartialUploads feature flag
set and can Move files, the file is first copied to a temporary name.
Once the copy is complete, the file is renamed to the real
destination.
This prevents other processes from seeing partially transferred copies
of the file and prevents the old file from being overwritten until the
new one is complete.
This also adds the --inplace flag, which can be used to disable the
partial file copy/rename feature.
See #3770
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
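In outline the copy looks something like this (a local-filesystem
sketch, not rclone's operations code; the temporary suffix scheme is an
assumption):

```go
package partial

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// copyInto copies src to dst via a temporary name in the same
// directory and then renames it into place, so readers never see a
// half-written dst and the old dst survives until the copy is complete.
func copyInto(src io.Reader, dst string) error {
	var buf [4]byte
	if _, err := rand.Read(buf[:]); err != nil {
		return err
	}
	partialName := fmt.Sprintf("%s.%s.partial", dst, hex.EncodeToString(buf[:]))

	out, err := os.Create(partialName)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, src); err != nil {
		out.Close()
		os.Remove(partialName)
		return err
	}
	if err := out.Close(); err != nil {
		os.Remove(partialName)
		return err
	}
	// On POSIX filesystems the rename replaces dst in a single step.
	return os.Rename(partialName, dst)
}
```

With --inplace set, the temporary-name step is skipped and data is
written straight to the final name.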
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.
This is set for the following backends:
- local
- ftp
- sftp
See #3770
Before this change rclone used a normal SFTP rename if present to
implement Move.
However the normal SFTP rename won't overwrite existing files.
This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the existing destination
before renaming with the normal SFTP rename.
This isn't normally a problem as rclone always removes any existing
objects first; however, to implement non --inplace operations we do
need to overwrite an existing file.
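A sketch of the overwriting move using github.com/pkg/sftp
(simplified; detecting whether the server advertises the extension is
left out here, and the helper name is illustrative):

```go
package sftpmove

import (
	"github.com/pkg/sftp"
)

// moveOverwrite renames src to dst, overwriting dst if it exists.
// With the posix-rename@openssh.com extension the rename replaces the
// destination atomically; the plain SFTP rename refuses to overwrite,
// so without the extension the destination is deleted first.
func moveOverwrite(c *sftp.Client, src, dst string, havePosixRename bool) error {
	if havePosixRename {
		return c.PosixRename(src, dst)
	}
	if _, err := c.Stat(dst); err == nil {
		if err := c.Remove(dst); err != nil {
			return err
		}
	}
	return c.Rename(src, dst)
}
```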