During a copy/sync command, if an operation fails due to a network
issue and is retried, the underlying io.Reader is re-initialised,
but the stats for bytes already read are not reset, leading to incorrect
stats. This was fixed by resetting the bytes read when an Account is
re-initialised.
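A minimal sketch of the idea, with illustrative names rather than rclone's exact accounting internals: swapping in the new reader also zeroes the byte counter.

```
package accounting

import (
	"io"
	"sync"
)

// Account tracks how many bytes have been read from the wrapped reader
// so that transfer stats can be reported.
type Account struct {
	mu        sync.Mutex
	in        io.Reader
	bytesRead int64
}

// UpdateReader swaps in a freshly initialised reader after a retry and
// resets the byte counter so the stats do not double count the bytes
// read by the failed attempt.
func (acc *Account) UpdateReader(in io.Reader) {
	acc.mu.Lock()
	defer acc.mu.Unlock()
	acc.in = in
	acc.bytesRead = 0
}
```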
This includes a new directory listing template which was originally
from the Caddy project (used with permission and copyright attribution).
It is used whenever we serve directory listings, so it applies to
`rclone serve http`, `rclone serve webdav` and `rclone rcd --rc-serve`.
This also modifies the tests so they work with the original template,
which is easier to debug.
Bind rclone's standard input to the password command's standard input.
This allows the password to be provided from a pipe and collected using
cat.
The typical use case is when rclone is on a remote server with an
encrypted configuration. This solves the environment variable
issue (#3368) and avoids storing the password on the remote host.
Now the following chain is allowed:
echo 'secret' | ssh host.example.com \
sudo -u rclone \
rclone --config /path/to/rclone.conf \
--password-command 'cat' ls remote:
Signed-off-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>
Co-authored-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>
If running `rclone rcd --rc-user=admin --rc-pass=admin
--rc-allow-origin="*"`, lots of duplicate warnings appear in the log:
Warning: Allow origin set to *. This can cause serious security problems.
Warning: Allow origin set to *. This can cause serious security problems.
....
This is not conducive to analyzing debugging info.
Therefore, let's show it only once.
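A hedged sketch of limiting the warning to a single occurrence with Go's sync.Once; the names are illustrative, not rclone's actual code:

```
package main

import (
	"log"
	"sync"
)

var warnAllowOriginOnce sync.Once

// warnAllowOrigin logs the wide-open CORS warning only the first time
// it is called, however many requests trigger the check.
func warnAllowOrigin(origin string) {
	if origin != "*" {
		return
	}
	warnAllowOriginOnce.Do(func() {
		log.Printf("Warning: Allow origin set to %q. This can cause serious security problems.", origin)
	})
}

func main() {
	for i := 0; i < 3; i++ {
		warnAllowOrigin("*") // logged once despite repeated calls
	}
}
```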
Before this change we stored cached Fs under the config string the
user gave us. As the result of fs.ConfigString() can often be
different after the backend has canonicalised the paths, this meant
that we could not look up backends in the cache reliably.
After this change we store cached Fs under their config string as
returned from fs.ConfigString(f) after the Fs has been created. We
also store a map of user to canonical names (where they are different)
so that users can look up an Fs under the name they passed to rclone too.
This change along with Pin and Unpin is necessary so we can look up
the Fs in use reliably in the `backend/command` remote control
interface.
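A simplified sketch of the scheme with made-up names (the real cache API differs): entries are stored under the canonical config string and a side map translates the name the user typed.

```
package fscache

import "sync"

// Fs stands in for a created backend.
type Fs interface{}

type cache struct {
	mu        sync.Mutex
	entries   map[string]Fs     // canonical config string -> Fs
	canonical map[string]string // user supplied name -> canonical name
}

// Put stores f under its canonical name and records the alias from the
// name the user gave on the command line, if it is different.
func (c *cache) Put(userName, canonicalName string, f Fs) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[canonicalName] = f
	if userName != canonicalName {
		c.canonical[userName] = canonicalName
	}
}

// Get looks up an Fs by either the user supplied or the canonical name.
func (c *cache) Get(name string) (Fs, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if canon, ok := c.canonical[name]; ok {
		name = canon
	}
	f, ok := c.entries[name]
	return f, ok
}
```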
Before this change if you specified --hash MD5 in rclone lsf it would
calculate all the hashes and just return the MD5 hash which was very
slow on the local backend.
Likewise specifying --hash on rclone lsjson was equally slow.
This change introduces the --hash-type flag (and corresponding
internal parameter) so that the hashes required can be selected in
lsjson.
This is used internally in lsf when the --hash parameter is selected
to speed up the hashing by only hashing with the one hash specified.
Fixes #4181
These commands are for implementing backend specific
functionality. They have documentation which is placed automatically
into the backend doc.
There is a simple test for the feature in the backend tests.
Before this fix, FixRangeOption would substitute RangeOptions it
wanted to get rid of with an empty HTTPOption. This caused a problem
now that backends interpret HTTPOptions.
This fix substitutes those with NullOptions which aren't interpreted as
HTTPOptions. This patch also improves the unit tests.
This allows rclone to exit with a non-zero return code if no files are
transferred. This is useful when calling rclone as part of a workflow/script
pipeline as it allows the end user to stop processing if no files have been
transferred.
NB: Enabling this option will result in rclone exiting non-zero if there are no
transfers. Depending on how you're currently using rclone in your scripts,
this may break existing setups!
--files-from parses input files by ignoring comments starting with # and ;
and stripping whitespace from start and end of strings.
The --files-from-raw flag was added that reads every line from the file ignoring
comment characters and not stripping whitespace while maintaining
backwards compatibility.
Fixes #3762
Before this change if there were two files with the same name and the
same ID in the same directory, dedupe would delete one of them but
since these are actually the same file (with the same ID) then both
files would be deleted leading to data loss.
This should never actually happen, however it did happen as part of a
bug introduced in rclone which was fixed by
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
This change checks to see if any of the duplicates have the same ID
and if they do it refuses to delete them.
Before this change these tests attempted to measure transfers and
checks in lieu of having a rename statistic with a very complicated
heuristic.
The change switches over to using the rename statistic which should be
100% reliable.
Before this change the first pass of --delete-before would output
"There was nothing to transfer" and then proceed to transfer things.
This makes sure the message isn't printed in the delete phase.
See: https://forum.rclone.org/t/incorrect-debug-output/15267
This commit corrects the logic for --track-renames-strategy which
broke the integration tests.
It also improves the parsing of the argument and adds a test for that.
This commit adds the `--track-renames-strategy` flag which allows the
user to choose the strategy for tracking renames when using the
`--track-renames` flag.
This can be "hash" or "modtime" or both currently.
This, when used with `--track-renames-strategy modtime` enables
support for tracking renames in encrypted remotes.
Fixes #3696
Fixes #2721
Before this change backends which introduce overhead (eg crypt) were
failing to upload the first file.
This change increases the threshold to 2k to allow the first file to
go through even with some overhead but the next file to definitely
fail.
Before this change we checked the transfer was out of range only
before the Read call. This meant that we returned all the data to the
reader before declaring an error, so some backends wrote the file even
though an error was returned.
This fix checks the transfer after the Read as well, and chops the
excess characters off the read data if we are over the limit so that
we don't ever deliver all the data.
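A rough sketch of the post-Read check with the excess chopped off; the reader type and error here are illustrative, not the actual accounting code:

```
package maxtransfer

import (
	"errors"
	"io"
)

var errLimitReached = errors.New("max transfer limit reached")

// limitedReader stops a transfer at limit bytes. It checks the limit
// after the Read as well as before, and chops any excess off the data
// just read so the whole file is never delivered downstream.
type limitedReader struct {
	in    io.Reader
	read  int64
	limit int64
}

func (r *limitedReader) Read(p []byte) (int, error) {
	if r.read >= r.limit {
		return 0, errLimitReached
	}
	n, err := r.in.Read(p)
	r.read += int64(n)
	if r.read > r.limit {
		n -= int(r.read - r.limit) // drop the bytes over the limit
		return n, errLimitReached
	}
	return n, err
}
```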
This fixes the tests introduced as part of 6f1766dd9e and #2672
on backends other than local.
Before this change the exit code for transfer limit exceeded was
incorrect. This was because the `resolveExitCode` function unwraps the
error thus reading the underlying error which is not the same as the
error it was comparing to (`ErrorMaxTransferLimitReached`).
This change fixes it by splitting the error definition in two so that
when the Fatal error is unwrapped we match against
`ErrorMaxTransferLimitReached` however when we return the error we
return `ErrorMaxTransferLimitReachedFatal`.
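A small self-contained example of the two-error pattern described above, using a stand-in fatal wrapper rather than rclone's real one:

```
package main

import (
	"errors"
	"fmt"
)

// ErrorMaxTransferLimitReached is the sentinel matched after unwrapping.
var ErrorMaxTransferLimitReached = errors.New("max transfer limit reached")

// fatalError stands in for the fatal (non-retryable) wrapper. It still
// unwraps to the sentinel above.
type fatalError struct{ err error }

func (e fatalError) Error() string { return e.err.Error() }
func (e fatalError) Unwrap() error { return e.err }

// ErrorMaxTransferLimitReachedFatal is what gets returned to callers.
var ErrorMaxTransferLimitReachedFatal error = fatalError{ErrorMaxTransferLimitReached}

func main() {
	err := ErrorMaxTransferLimitReachedFatal
	// A resolveExitCode-style check: unwrapping finds the sentinel, so
	// the correct exit code can be chosen.
	fmt.Println(errors.Is(err, ErrorMaxTransferLimitReached)) // true
}
```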
In bde0334bd8 "operations: fix setting the timestamp on Windows
for multithread copy" the test for multithread copy failed to take
into account the modify window of the remote under test.
Before this fix we attempted to set the modification time on the file
when it was open. This works fine on Linux but not on Windows. The
test was also incorrect testing the source file rather than the
destination file.
This closes the file before setting the modification time and fixes
the tests.
Fixes #3994
Before this change, the elapsed time shown with the --progress flag
would not print ".0s", so the width of the elapsed time could vary.
This change will make it so that the line width is kept a bit more
consistent by always printing to a fixed-point.
This does change the displayed value when the elapsed time
is less than 1s, in which it used to be that the value would be shown
in ms or smaller units.
Signed-off-by: Gary Kim <gary@garykim.dev>
Before this change we unconditionally fetched the MimeType. On some
backends like s3 and swift this takes an extra transaction, which meant
that `lsf` on those backends was needlessly slow.
This adds an internal option so `lsf` can declare whether it wants
MimeTypes or not depending on whether the user asked for them and an
external flag `--no-mimetype` for `lsjson`.
See: https://forum.rclone.org/t/reliably-setup-incremental-updates/14006/8
This is a timing dependent test, and making it long enough to work with
the remotes would make it too long for local tests.
The code paths are identical for local vs non-local so just run on
local.
This fixes the integration tests.
This gives you more control over how long rclone will run for, making
it easier to script backups, e.g. via cron. Once the `--max-duration`
time limit is reached, no new transfers will be initiated, but those
already in-flight will be allowed to complete.
Fixes #985
Before this change the expect/continue timeout was set to
--contimeout, which was 60s by default and is too long to wait.
This was noticed when using s3 with a proxy which apparently didn't
support expect / continue properly.
Set --expect-continue-timeout 0 to disable expect/continue.
Statistics of transfers which were interrupted were not cleared before
the retry iteration, so these transfers showed over 100% completion.
This change clears transfer accounting before the next retry iteration
is done in order to keep the numbers on track.
Fixes #3861
Without the fix we can have a race, example:
```
Write at 0x00c000432039 by goroutine 187:
github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error()
fs/accounting/stats.go:495 +0x3f1
github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error-fm()
fs/accounting/stats.go:477 +0x55
github.com/rclone/rclone/fs/walk.listRwalk.func1()
fs/walk/walk.go:162 +0xd2
github.com/rclone/rclone/fs/walk.walk.func2()
fs/walk/walk.go:402 +0x30f
Previous read at 0x00c000432039 by goroutine 184:
github.com/rclone/rclone/fs/accounting.(*statsGroups).sum()
fs/accounting/stats_groups.go:351 +0xcae
github.com/rclone/rclone/fs/accounting.rcTransferredStats()
fs/accounting/stats_groups.go:132 +0x1f4
```
Fixes #3844
Before this change if an operation was retried on operations.Copy and
the operation was large enough to use an async buffer then an async
buffer was leaked on the retry. This leaked memory, a file handle and
a goroutine.
After this change if Account.WithBuffer is called and there is already
a buffer, then a new one won't be allocated.
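A simplified sketch of the guard, with illustrative field names: the async buffer is only allocated the first time WithBuffer is called.

```
package asyncbuf

// account is a cut-down stand-in for the accounting wrapper.
type account struct {
	buffer []byte
}

// WithBuffer attaches an async buffer. If one is already present (for
// example after a retry) it is reused rather than allocated again, so
// nothing is leaked.
func (acc *account) WithBuffer() *account {
	if acc.buffer != nil {
		return acc // already buffered - reuse it
	}
	acc.buffer = make([]byte, 1024*1024)
	return acc
}
```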
rclone library users might be interested in changing the default value,
or even disabling it. With the current version that is impossible, which
leads to races when the number of uploaded objects exceeds the default
limit.
Fixes #3732
For a few commands, rclone counted an error multiple times. This was fixed
by creating a new error type which keeps a flag to remember if the error has
already been counted or not. The CountError function now wraps the original
error with the above new error type and returns it.
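A sketch of the approach with illustrative names; rclone's real CountError and error type differ in detail:

```
package main

import (
	"errors"
	"fmt"
)

// countedError marks an error as already added to the error statistics
// so it is not counted twice.
type countedError struct{ err error }

func (e countedError) Error() string { return e.err.Error() }
func (e countedError) Unwrap() error { return e.err }

var errorCount int

// CountError bumps the error counter unless err has been counted
// already, then returns err wrapped so later calls become no-ops.
func CountError(err error) error {
	if err == nil {
		return nil
	}
	var ce countedError
	if errors.As(err, &ce) {
		return err // already counted
	}
	errorCount++
	return countedError{err}
}

func main() {
	err := CountError(errors.New("boom"))
	err = CountError(err)        // second call does not increment
	fmt.Println(errorCount, err) // 1 boom
}
```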
from rsync manual:
--compare-dest=DIR
This option instructs rsync to use DIR on the destination machine as an
additional hierarchy to compare destination files against doing transfers
(if the files are missing in the destination directory). If a file is found
in DIR that is identical to the sender's file, the file will NOT be
transferred to the destination directory. This is useful for creating
a sparse backup of just files that have changed from an earlier backup.
--copy-dest=DIR
This option behaves like --compare-dest, but rsync will also copy unchanged
files found in DIR to the destination directory using a local copy.
This is useful for doing transfers to a new destination while leaving
existing files intact, and then doing a flash-cutover when all files
have been successfully transferred.
On google fs (drive, google photos, and google cloud storage), if
headless is selected, do not open browser.
This also supplies a new option "auth-no-open-browser" for authorize
if the user does not want it.
This should fix #3323.
This was broken in e337cae0c5 when we deleted the transfers
immediately.
This is fixed by keeping a merged slice of time ranges of completed
transfers and adding those to the current transfers.
In a28239f005 we made --files-from obey --no-traverse. In the
process this caused --files-from without --no-traverse to do a
complete recursive scan unnecessarily.
This was only noticeable in users of fs/march, so sync/copy/move/etc,
not in ls/lsf/etc.
This fix makes sure that we use conventional directory listings in
fs/march unless `--files-from` and `--no-traverse` are set, or
`--fast-list` is active.
Fixes #3619
It was reported that v1.49.4 which was accidentally compiled with
go1.13 instead of go1.12 produced errors like this:
Failed to get StartPageToken: Get https://www.googleapis.com/drive/v3/changes/startPageToken?XXX: stream error: stream ID 1789; INTERNAL_ERROR
IO error: open file failed: Get https://www.googleapis.com/drive/v3/files/XXX?alt=media: stream error: stream ID 1781; INTERNAL_ERROR
These are errors from the http2 library. It appears that go1.13 when
communicating with google drive defaults to http2 whereas with go1.12
it doesn't.
It is unclear what is causing these errors, but retrying them since
they don't happen very often seems like a valid strategy.
This was fixed in v1.49.5 by compiling with go1.12 - this fix is
designed to work with go1.13
See: https://forum.rclone.org/t/1-49-4-plex-internal-errors-on-google-drive/12108/
Before this fix, attempting to set a non top level environment
variable would fail with "Couldn't find flag".
This fixes it by passing in the flags that the env var is being set
from.
Fixes #3615
Before this change --update would transfer any file which was newer
than the destination regardless of whether it had changed or not.
This is needlessly wasteful of bandwidth.
After this change --update will only transfer files if they are newer
**and** they are different (checked with checksum and size).
'Couldn't find file - Need to Transfer' was changed to 'Need to transfer -
File Not Found at Destination' because, while reading the debug logs, the
old message could be confused with a failure in the operation.
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks
- describe internal behavior in comments
- improve documentation
note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows the size to be capped at 200
bytes now.
In 53a1a0e3ef we introduced a problem where if there was an
error on the file being transferred then the file was re-opened and
the old one wasn't closed.
This was partially fixed in bfbddab46b however this didn't
address the case of the old file being closed.
This is now fixed by
- marking the file as open again in UpdateReader
- moving the stopping of the accounting machinery to a new method, Done
Before this change you could only configure the local backend flags
which don't have the local prefix (eg `--copy-links`) with
`RCLONE_LOCAL_COPY_LINKS`.
This change makes `RCLONE_COPY_LINKS` valid too which is much more
logical for the users.
Fixes #3534
Before this change it was possible to make a remote with an invalid
name in the config file, either manually or with `rclone config
create` (but not with `rclone config`).
When this remote was used, because it was invalid, rclone would
presume this remote name was a local directory, for a very surprising
user experience!
This change checks remote names more carefully and returns errors
- when the user tries to use an invalid remote name on the command line
- when an invalid remote name is used in `rclone config create/update/password`
- when the user tries to enter an invalid remote name in `rclone config`
This does not prevent the user entering a remote name with invalid
characters in the config manually, but such a remote will fail
immediately when it is used on the command line.
Before this change, using -P occasionally deadlocked between the
Transfer mutex (when Transfer.Done() was called with a non-nil error)
and the StatsInfo mutex, since they mutually call each other.
This was fixed by making sure that the Transfer mutex is always
released before calling any StatsInfo methods.
This improves on 6f87267b34.
Fixes #3505
Before this change if -u/--update was in effect we compared the size
of the files to see if the transfer should go ahead. This was
comparing -1 with an actual size so the transfer always proceeded.
After this change we use the existing `sizeDiffers` function which
does the correct comparison with -1 for files of unknown length.
See: https://forum.rclone.org/t/sync-with-google-photos-to-local-drive-will-result-in-recoping/11605
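A simplified sketch of the comparison; the real sizeDiffers works on objects and honours other flags, which are omitted here:

```
package operations

// sizeDiffers reports whether source and destination differ in size,
// treating an unknown length (-1) as "cannot tell" rather than as a
// difference, so --update does not force a transfer just because the
// size is unknown.
func sizeDiffers(srcSize, dstSize int64) bool {
	if srcSize < 0 || dstSize < 0 {
		return false // unknown length - do not treat as different
	}
	return srcSize != dstSize
}
```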
Without this patch the resulting error is first converted to string and then recreated.
This makes it impossible to use the defined error types to figure out the cause of the error,
and may result in invalid HTTP status codes.
This patch adds a test TestExecuteJobErrorPropagation to validate that the errors are
properly propagated.
Before this change we didn't calculate or check hashes of transferred
files if --size-only mode was explicitly set.
This problem was introduced in 20da3e6352 which was released with v1.37
After this change hashes are checked for all transfers unless
--ignore-checksum is set.
Before this change we calculated the checksums when using
--ignore-checksum but ignored them at the end.
Now we don't calculate the checksums at all which is more efficient.
Before this change for a post copy Hash check we would run the hashes sequentially.
Now we run the hashes in parallel for a useful speedup.
Note that this refactors the hash check in Copy to use the standard
hash checking routine.
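A small self-contained illustration of hashing both sides in parallel with a WaitGroup; this is the shape of the speedup, not rclone's actual hash check code:

```
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
	"sync"
)

// hashReader stands in for hashing one side of the transfer.
func hashReader(r io.Reader) (string, error) {
	h := md5.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	var wg sync.WaitGroup
	var srcHash, dstHash string
	var srcErr, dstErr error
	wg.Add(2)
	// Hash source and destination concurrently instead of sequentially.
	go func() { defer wg.Done(); srcHash, srcErr = hashReader(strings.NewReader("data")) }()
	go func() { defer wg.Done(); dstHash, dstErr = hashReader(strings.NewReader("data")) }()
	wg.Wait()
	if srcErr != nil || dstErr != nil {
		fmt.Println("hashing failed:", srcErr, dstErr)
		return
	}
	fmt.Println("hashes equal:", srcHash == dstHash)
}
```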
The error "tls: bad record MAC" is very likely to be caused by
hardware issues. It indicates that a packet got corrupted somewhere.
As a workaround, this change treats it as a retriable error, which
allows the chunk to get retried and the transfer to continue.
See: https://forum.rclone.org/t/rc-rc-job-expire-interval-bug/11188
rclone was ignoring the --rc-job-expire-duration and --rc-job-expire-interval
flags. This turned out to be an initialization order problem and was
fixed by moving those flags out of global config into rc config.
Before this change, using -P occasionally deadlocked on the transfer
mutex and the stats mutex since they call each other via the progress
printing.
This is fixed by shortening the locking windows and converting the
mutex to a RW mutex.
Prior to this fix, a request such as
rclone lsf -R --include "/dir/**" remote:
would use ListR, which is very inefficient as it lists the whole remote
for one directory.
This changes it to use recursive walking if the filters imply any
directory filtering. So `--include *.jpg` and `--exclude *.jpg` will
still use ListR whereas `--include "/dir/**"` will not.
Before this fix rclone calculated all the hashes on transfer. This
was particularly slow for the local backend.
After the fix we just calculate one hash which is enough for data
integrity.
This was factored from fstest as we were including the testing
environment in the main binary because of it.
This was causing opening the browser to fail because of 8243ff8bc8.
This adds experimental support for web gui integration so that rclone can fetch and run a web based GUI using the --rc-web-ui and related flags.
It downloads and caches a webui zip file which it then unpacks and opens in the browser.
core/stats can return two different schemas in the 'transferring' field.
One is an object with fields, the other is just a plain string.
This is confusing, unnecessary and makes defining the response schema
more difficult. It also returns `lastError` as a value which can be
rendered differently depending on the source of the error.
This change standardizes the 'transferring' field to always return an
object, but with reduced fields if they are not available.
A former string item is converted to a {name:remote_name} object.
'lastError' is forced to be a string as in some cases it can be encoded
as an object.
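As a rough sketch, the standardized entry might be modelled like the struct below; the exact field set and JSON keys are an assumption based on the description above, with only `name` guaranteed:

```
package rc

// transferItem sketches one entry in the 'transferring' array: always
// an object, with optional fields simply left out when not available.
type transferItem struct {
	Name       string  `json:"name"`
	Size       int64   `json:"size,omitempty"`
	Bytes      int64   `json:"bytes,omitempty"`
	Percentage int     `json:"percentage,omitempty"`
	Speed      float64 `json:"speed,omitempty"`
	ETA        int64   `json:"eta,omitempty"`
}
```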
Introduce stats groups that will isolate accounting for logically
different transfer operations. That way multiple accounting operations
can be done in parallel without interfering with each other's stats.
Using groups is optional. There is a dedicated global stats object that
will be used by default if no group is specified. This is the operating
mode for CLI usage, which is just a fire and forget operation.
When running rclone as an rc http server, each request will create its
own group. There is also an option to specify your own group.
This is done to make ownership over the accounting object clear and to
prepare for removing the global stats object.
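A simplified sketch of the grouping idea, with illustrative names rather than the actual statsGroups implementation:

```
package stats

import "sync"

// Stats holds the counters for one accounting group (details elided).
type Stats struct{}

// statsGroups keys isolated Stats objects by group name, falling back
// to a shared default group when none is given (the CLI case).
type statsGroups struct {
	mu     sync.Mutex
	groups map[string]*Stats
}

func (sg *statsGroups) get(group string) *Stats {
	if group == "" {
		group = "global" // fire and forget CLI usage uses the default group
	}
	sg.mu.Lock()
	defer sg.mu.Unlock()
	if sg.groups == nil {
		sg.groups = make(map[string]*Stats)
	}
	s, ok := sg.groups[group]
	if !ok {
		s = &Stats{}
		sg.groups[group] = s
	}
	return s
}
```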
Stats elapsed time calculation has been altered to account for the
actual transfer time instead of the stats creation time.