Before this fix, if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption, rclone
would read too much data and the transfer would fail.
Backends affected:
- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho
This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.
This was caused by fs.FixRangeOption modifying the options slice in
place while the reopen code relied on it remaining unmodified.
This fix makes the reopen code take a copy of the options before
fs.FixRangeOption can modify them, as sketched below.
In future it might be best to change fs.FixRangeOption so it returns a
new options slice.
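As a rough, self-contained sketch of the technique (stand-in types and
names, not rclone's real fs.OpenOption API), the fix amounts to copying
the stored options on every attempt so the in-place rewrite never leaks
into the next retry:

```go
package main

import "fmt"

// Stand-ins for fs.OpenOption / fs.RangeOption - names are illustrative only.
type Option interface{ String() string }

type RangeOption struct{ Start, End int64 }

func (r *RangeOption) String() string { return fmt.Sprintf("Range: %d-%d", r.Start, r.End) }

// fixRangeOption mimics a helper that rewrites options in place, e.g. turning
// an open-ended range (End < 0) into a concrete one based on the file size.
func fixRangeOption(opts []Option, size int64) {
	for i, o := range opts {
		if r, ok := o.(*RangeOption); ok && r.End < 0 {
			opts[i] = &RangeOption{Start: r.Start, End: size - 1}
		}
	}
}

// openWithRetry shows the shape of the fix: copy the caller's stored options
// on every attempt so in-place rewrites never leak into the next retry.
func openWithRetry(stored []Option, size int64, attempts int) {
	for i := 0; i < attempts; i++ {
		opts := make([]Option, len(stored))
		copy(opts, stored) // defensive copy - the essence of the fix
		fixRangeOption(opts, size)
		fmt.Printf("attempt %d uses %v, stored stays %v\n", i+1, opts, stored)
	}
}

func main() {
	openWithRetry([]Option{&RangeOption{Start: 100, End: -1}}, 1000, 3)
}
```

With the copy in place, every retry starts from the caller's original
options rather than from whatever the previous attempt rewrote them to.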
Fixes #7759
Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.
This adds a new function, operations.Open, which should be called
instead of fs.Object.Open to open a reliable stream of data, and
changes all call sites to use it.
This means `rclone check --download` and `rclone cat` will re-open
files on failures.
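At a call site the change is mechanical. A hedged sketch, assuming
operations.Open keeps the (ctx, object, options...) shape the
description above implies:

```go
package sketch

import (
	"context"
	"io"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/operations"
)

// openReliably shows the call-site change: instead of o.Open(ctx, options...),
// which gives up on the first read error, go through operations.Open so the
// returned reader can re-open itself and resume from the right offset.
// (Exact signature assumed from the description - check the operations package.)
func openReliably(ctx context.Context, o fs.Object, options ...fs.OpenOption) (io.ReadCloser, error) {
	return operations.Open(ctx, o, options...)
}
```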
See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
This is possible now that we no longer support go1.12, and it brings
rclone into line with standard practice in the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
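A generic sketch of the stdlib pattern this moves towards (not rclone
code): create sentinel errors with errors.New and wrap them with
fmt.Errorf and %w so callers can still match the underlying error.

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel error from the stdlib errors package.
var errNotFound = errors.New("object not found")

// open wraps the sentinel with fmt.Errorf and %w instead of a lib/errors
// helper, preserving the error chain for callers.
func open(name string) error {
	return fmt.Errorf("open %q: %w", name, errNotFound)
}

func main() {
	err := open("file.txt")
	fmt.Println(err)                         // open "file.txt": object not found
	fmt.Println(errors.Is(err, errNotFound)) // true - the chain is preserved
}
```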
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top-level usage to propagate the context to lower-level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
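Sketched with stand-in types (not the real rclone/fs definitions), the
interface change looks like this:

```go
package backend

import (
	"context"
	"io"
)

// OpenOption is a stand-in for rclone's fs.OpenOption interface.
type OpenOption interface {
	Header() (key, value string)
}

// Object sketches the shape of the change: methods that block or do I/O now
// take a context.Context first, so cancelling a transfer stops work promptly
// and request-scoped values flow down to the backend calls.
type Object interface {
	// Before (illustrative): Open(options ...OpenOption) (io.ReadCloser, error)
	Open(ctx context.Context, options ...OpenOption) (io.ReadCloser, error)
	Remove(ctx context.Context) error
}
```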
This puts a shim on the reader opened by Copy so that if an error is
returned, the reader is re-opened at the correct seek point.
This should make downloading very large files more reliable.
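A simplified sketch of the idea (not rclone's actual implementation):
wrap the reader, count the bytes delivered, and on a read error re-open
the source at that offset before continuing.

```go
package reopen

import (
	"context"
	"io"
)

// opener re-opens the source starting at the given byte offset.
type opener func(ctx context.Context, offset int64) (io.ReadCloser, error)

// reOpenReader wraps a reader, tracks how many bytes have been delivered,
// and on a read error re-opens the source at that offset and carries on.
type reOpenReader struct {
	ctx    context.Context
	open   opener
	rc     io.ReadCloser
	offset int64
}

func newReOpenReader(ctx context.Context, open opener) (*reOpenReader, error) {
	rc, err := open(ctx, 0)
	if err != nil {
		return nil, err
	}
	return &reOpenReader{ctx: ctx, open: open, rc: rc}, nil
}

func (r *reOpenReader) Read(p []byte) (int, error) {
	n, err := r.rc.Read(p)
	r.offset += int64(n)
	if err != nil && err != io.EOF {
		// Re-open at the current seek point and retry the read once.
		_ = r.rc.Close()
		rc, openErr := r.open(r.ctx, r.offset)
		if openErr != nil {
			return n, err // give up: report the original read error
		}
		r.rc = rc
		m, err2 := r.rc.Read(p[n:])
		r.offset += int64(m)
		return n + m, err2
	}
	return n, err
}

func (r *reOpenReader) Close() error { return r.rc.Close() }
```

A real implementation would also cap the number of re-open attempts and
respect any range or seek options, which this sketch leaves out.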