Previously we were using f, which calls f.String(), which calls f.Path().
This can cause a deadlock if used carelessly.
This patch explicitly calls f.Path() or f._path() to bring attention
to the fact that there is a call to a method.
As part of this we take a copy of the directory path, as calling
d.Path() would violate the total locking order.
See the comment at the top of file.go for details.
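A minimal sketch of the hazard, with simplified, hypothetical types (the
real File is more involved): Go's sync.Mutex is not re-entrant, so
formatting f while already holding its lock deadlocks.

    package vfs

    import "sync"

    // File is a simplified stand-in for the real type.
    type File struct {
        mu   sync.Mutex
        path string
    }

    // Path takes the lock to read the path.
    func (f *File) Path() string {
        f.mu.Lock()
        defer f.mu.Unlock()
        return f.path
    }

    // String satisfies fmt.Stringer, so formatting f with %v calls Path.
    func (f *File) String() string { return f.Path() }

    func (f *File) rename(newPath string) {
        f.mu.Lock()
        defer f.mu.Unlock()
        // log.Printf("renaming %v", f) // String -> Path -> Lock again: deadlock
        f.path = newPath
    }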
GOOS and GOARCH being set like this makes it impossible to compile on
other platforms.
For me the hard-coded GOARCH prevents compilation on my ARM machine.
For others the hard-coded GOOS will prevent compilation on Windows.
xref https://github.com/rclone/rclone/issues/4086
This allows rclone to exit with a non-zero return code if no files are
transferred. This is useful when calling rclone as part of a workflow/script
pipeline as it allows the end user to stop processing if no files have been
transferred.
NB: Enabling this option will result in rclone exiting non-zero if there are no
transfers. Depending on how you're currently using rclone in your scripts,
this may break existing setups!
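For example, a Go wrapper can now stop a pipeline when nothing was
transferred. This is only a sketch: the flag name --error-on-no-transfer
is assumed here, and note that other failures also exit non-zero.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // With the new option, "nothing transferred" also yields a
        // non-zero exit, so this aborts the rest of the workflow.
        cmd := exec.Command("rclone", "copy", "src:", "dst:", "--error-on-no-transfer")
        if err := cmd.Run(); err != nil {
            log.Fatalf("stopping pipeline: %v", err)
        }
        log.Println("files transferred, continuing")
    }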
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of a size other than ChunkSize can only occur when we have a huge file (over 10k * ChunkSize).
To benefit from the mapping we would need a bunch of identically sized huge files, and in such a case ChunkSize should most likely be increased instead.
The mapping and its lock are replaced with a single pool initialised for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
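A rough sketch of the shape of the change, with illustrative names and
Go's sync.Pool standing in for rclone's actual pool type; it is also
simplified in that odd sizes get a plain allocation here, where the real
change allocates a per-file pool and frees it when the file is done.

    package pool

    import "sync"

    const chunkSize = 1 << 23 // illustrative default ChunkSize

    // One pool, initialised once, covers the common case.
    var chunkPool = sync.Pool{
        New: func() interface{} { return make([]byte, chunkSize) },
    }

    // getChunk returns a buffer of the requested size. Only the default
    // ChunkSize is pooled; other sizes (huge files) are rare.
    func getChunk(size int) []byte {
        if size == chunkSize {
            return chunkPool.Get().([]byte)
        }
        return make([]byte, size)
    }

    func putChunk(buf []byte) {
        if len(buf) == chunkSize {
            chunkPool.Put(buf)
        }
    }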
--files-from parses input files by ignoring comments starting with # and ;
and stripping whitespace from the start and end of each line.
The new --files-from-raw flag reads every line from the file verbatim,
neither interpreting comment characters nor stripping whitespace, while
--files-from keeps its existing behaviour for backwards compatibility.
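As a sketch (a hypothetical helper, not rclone's actual reader), the
difference in line handling looks like this:

    package filter

    import "strings"

    // parseLine sketches how a single input line is treated with and
    // without --files-from-raw; the second result reports whether the
    // line names a file at all.
    func parseLine(line string, raw bool) (string, bool) {
        if raw {
            // --files-from-raw: every line is a file name, verbatim.
            return line, true
        }
        // --files-from: strip surrounding whitespace, skip blanks and comments.
        line = strings.TrimSpace(line)
        if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
            return "", false
        }
        return line, true
    }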
Fixes #3762
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away, to be replaced by drive shortcuts, we return an
error for now.
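For reference, the kind of PATCH that safe deletion would require,
sketched with the google.golang.org/api/drive/v3 client (the function
and variable names are placeholders):

    package operations

    import drive "google.golang.org/api/drive/v3"

    // removeParent detaches the file from just this parent rather than
    // deleting the file outright, which would remove it from all parents.
    func removeParent(srv *drive.Service, fileID, parentID string) error {
        _, err := srv.Files.Update(fileID, &drive.File{}).
            RemoveParents(parentID).
            Fields("id").
            Do()
        return err
    }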
See #4013
Before this change, if there were two files with the same name and the
same ID in the same directory, dedupe would delete one of them; but
since these are actually the same file (with the same ID), both files
would be deleted, leading to data loss.
This should never actually happen; however, it did happen as a result of
a bug introduced in rclone which was fixed by
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
This change checks whether any of the duplicates have the same ID
and, if they do, refuses to delete them.
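The shape of the new check, as a sketch built on rclone's fs.Object and
optional fs.IDer interfaces (the helper name is illustrative):

    package operations

    import "github.com/rclone/rclone/fs"

    // haveSameID reports whether any two of the duplicate objects share
    // a non-empty backend ID, in which case dedupe must not delete them.
    func haveSameID(objs []fs.Object) bool {
        seen := make(map[string]bool)
        for _, o := range objs {
            ider, ok := o.(fs.IDer)
            if !ok {
                continue
            }
            id := ider.ID()
            if id == "" {
                continue
            }
            if seen[id] {
                return true
            }
            seen[id] = true
        }
        return false
    }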
Before this change these tests attempted to measure transfers and
checks with a very complicated heuristic, in lieu of having a rename
statistic.
The change switches over to using the rename statistic, which should be
100% reliable.
Before this change the first pass of --delete-before would output
"There was nothing to transfer" and then proceed to transfer things.
This makes sure the message isn't printed in the delete phase.
See: https://forum.rclone.org/t/incorrect-debug-output/15267
Without an explanation of exactly how it is generated, it can be
confusing and worrying not to know how the password that encrypts your
data is stored.
This also gives the user peace of mind that, even though
the same password is obscured differently each time, all the data
needed to get back to the original password remains. Explaining how it
works is much better than making the reader of the documentation trust
a black-box, magical mechanism.
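A sketch of the mechanism being documented, as I understand it: AES-CTR
with a key fixed in the rclone binary and a fresh random IV prepended to
each output, which is why the obscured form differs every time yet stays
reversible.

    package obscure

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "encoding/base64"
        "io"
    )

    // obscure encrypts password with AES-CTR under a fixed key, prefixing
    // a random IV, so the same input yields different output each call.
    func obscure(password string, key []byte) (string, error) {
        block, err := aes.NewCipher(key) // key is baked into the binary
        if err != nil {
            return "", err
        }
        buf := make([]byte, aes.BlockSize+len(password))
        iv := buf[:aes.BlockSize]
        if _, err := io.ReadFull(rand.Reader, iv); err != nil {
            return "", err
        }
        cipher.NewCTR(block, iv).XORKeyStream(buf[aes.BlockSize:], []byte(password))
        return base64.RawURLEncoding.EncodeToString(buf), nil
    }

Decrypting simply reverses this: strip the IV, run the same CTR stream
over the rest, so nothing about the original password is lost.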