It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing case of a file on a case insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in-place to an equal-folding name.
(Not all case-insensitive backends have this limitation -- for example, Dropbox
does but macOS local does not.)
After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
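A minimal sketch of the two-part move idea, assuming the rclone APIs named below behave as described (the real factored-out function is operations.moveCaseInsensitive; this helper and its name are illustrative only):

```
package sketch

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/random"
)

// twoPartMove renames src to dstName on a case insensitive remote by going
// via a temporary name, because some backends (e.g. Dropbox) cannot rename a
// file in place to a name that differs only by case or folding.
func twoPartMove(ctx context.Context, f fs.Fs, src fs.Object, dstName string) (fs.Object, error) {
	doMove := f.Features().Move
	if doMove == nil {
		return nil, fs.ErrorCantMove
	}
	// First hop: move to a name that is guaranteed to differ.
	tmpName := dstName + "-rclone-move-" + random.String(8)
	tmpObj, err := doMove(ctx, src, tmpName)
	if err != nil {
		return nil, err
	}
	// Second hop: move from the temporary name to the desired casing.
	return doMove(ctx, tmpObj, dstName)
}
```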
Before this change it wasn't possible to see where transfers were
going from and to in core/stats and core/transferred.
When used with rclone mount in particular, this made interpreting the
stats very hard.
Before this change, bisync could only detect changes based on modtime, and
would refuse to run if either path lacked modtime support. This made bisync
unavailable for many of rclone's backends. Additionally, bisync did not account
for the Fs's precision when comparing modtimes, meaning that they could only be
reliably compared within the same side -- not against the opposite side. Size
and checksum (even when available) were ignored completely for deltas.
After this change, bisync now fully supports comparing based on any combination
of size, modtime, and checksum, lifting the prior restriction on backends
without modtime support. The comparison logic considers the backend's
precision, hash types, and other features as appropriate.
The comparison features optionally use a new --compare flag (which takes any
combination of size,modtime,checksum) and even supports some combinations not
otherwise supported in `sync` (like comparing all three at the same time.) By
default (without the --compare flag), bisync inherits the same comparison
options as `sync` (that is: size and modtime by default, unless modified with
flags such as --checksum or --size-only.) If the --compare flag is set, it will
override these defaults.
If --compare includes checksum and both remotes support checksums but have no
hash types in common with each other, checksums will be considered only for
comparisons within the same side (to determine what has changed since the prior
sync), but not for comparisons against the opposite side. If one side supports
checksums and the other does not, checksums will only be considered on the side
that supports them. When comparing with checksum and/or size without modtime,
bisync cannot determine whether a file is newer or older -- only whether it is
changed or unchanged. (If it is changed on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)
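A rough illustration of that fallback logic, written as plain Go rather than bisync's actual code (the hash types are just strings here for clarity):

```
package sketch

// checksumScope reports how checksums can be used for a pair of sides, given
// the hash types each side supports.
func checksumScope(hashesPath1, hashesPath2 []string) string {
	supported := make(map[string]bool, len(hashesPath1))
	for _, h := range hashesPath1 {
		supported[h] = true
	}
	for _, h := range hashesPath2 {
		if supported[h] {
			// A shared hash type: checksums can be compared across sides
			// as well as against the prior listing on each side.
			return "cross-side and same-side"
		}
	}
	switch {
	case len(hashesPath1) > 0 && len(hashesPath2) > 0:
		// Both sides hash, but with no type in common: only useful for
		// detecting what changed on each side since the prior sync.
		return "same-side only, on both sides"
	case len(hashesPath1) > 0 || len(hashesPath2) > 0:
		// Only one side supports checksums at all.
		return "same-side only, on the side that supports them"
	default:
		return "checksums unavailable"
	}
}
```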
Also included are some new flags to customize the checksum comparison behavior
on backends where hashes are slow or unavailable. --no-slow-hash and
--slow-hash-sync-only allow selectively ignoring checksums on backends such as
local where they are slow. --download-hash allows computing them by downloading
when (and only when) they're otherwise not available. Of course, this option
probably won't be practical with large files, but may be a good option for
syncing small-but-important files with maximum accuracy (for example, a source
code repo on a crypt remote.) An additional advantage over methods like
cryptcheck is that the original file is not required for comparison (for
example, --download-hash can be used to bisync two different crypt remotes with
different passwords.)
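The core of --download-hash is simply hashing the streamed contents; a cut-down sketch using MD5 as a stand-in (the real code goes through rclone's hash abstraction and its usual accounting):

```
package sketch

import (
	"context"
	"crypto/md5"
	"encoding/hex"
	"io"

	"github.com/rclone/rclone/fs"
)

// downloadHash computes a checksum for an object whose backend cannot supply
// one natively, by downloading and hashing the contents.
func downloadHash(ctx context.Context, o fs.Object) (string, error) {
	in, err := o.Open(ctx)
	if err != nil {
		return "", err
	}
	defer in.Close()
	h := md5.New()
	if _, err := io.Copy(h, in); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```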
Additionally, all of the above are now considered during the final --check-sync
for much-improved accuracy (before this change, it only compared filenames!)
Many other details are explained in the included docs.
Before this change, a file would sometimes be silently deleted instead of
renamed on macOS, due to its unique handling of unicode normalization. Rclone
already had a SameObject check in place for case insensitivity before deleting
the source (for example if "hello.txt" was renamed to "HELLO.txt"), but had no
such check for unicode normalization. After this change, the delete is skipped
on macOS if the src and dst filenames normalize to the same NFC string.
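The added check boils down to comparing the NFC forms of the two names with golang.org/x/text/unicode/norm, roughly:

```
package sketch

import "golang.org/x/text/unicode/norm"

// sameNormalized reports whether two filenames are equal once both are
// normalized to NFC, e.g. a precomposed "ö" versus "o" plus a combining
// diaeresis. If true, the delete of the source is skipped, because src and
// dst are really the same file.
func sameNormalized(srcName, dstName string) bool {
	return norm.NFC.String(srcName) == norm.NFC.String(dstName)
}
```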
Example of the previous behavior:
~ % rclone touch /Users/nielash/rename_test/ö
~ % rclone lsl /Users/nielash/rename_test/ö
0 2023-11-21 17:28:06.170486000 ö
~ % rclone moveto /Users/nielash/rename_test/ö /Users/nielash/rename_test/ö -vv
2023/11/21 17:28:51 DEBUG : rclone: Version "v1.64.0" starting with parameters ["rclone" "moveto" "/Users/nielash/rename_test/ö" "/Users/nielash/rename_test/ö" "-vv"]
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/ö"
2023/11/21 17:28:51 DEBUG : Using config file from "/Users/nielash/.config/rclone/rclone.conf"
2023/11/21 17:28:51 DEBUG : fs cache: adding new entry for parent of "/Users/nielash/rename_test/ö", "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/"
2023/11/21 17:28:51 DEBUG : fs cache: renaming cache item "/Users/nielash/rename_test/" to be canonical "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : ö: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/11/21 17:28:51 DEBUG : ö: Unchanged skipping
2023/11/21 17:28:51 INFO : ö: Deleted
2023/11/21 17:28:51 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Checks: 1 / 1, 100%
Deleted: 1 (files), 0 (dirs)
Elapsed time: 0.0s
2023/11/21 17:28:51 DEBUG : 5 go routines active
~ % rclone lsl /Users/nielash/rename_test/
~ %
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!
After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
Before this change, changing the case of a file on a case insensitive remote
would fatally panic when `--dry-run` was set, due to `moveOrCopyFile`
attempting to access the non-existent `tmpObj` it (would normally have)
created. After this change, the panic is avoided by skipping this step during
a `--dry-run` (with the usual "skipped as --dry-run is set" log message.)
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.
Note that it has a few limitations, and certain scenarios
are not currently supported:
- --max-duration / CutoffModeHard
- --compare-dest / --copy-dest (because equal() is called multiple times for the
same file)
- server-side moves of an entire dir at once (because we never get the individual
file objects in the dir)
- High-level retries, because there would be dupes
- Possibly some error scenarios that didn't come up on the tests
Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)
Only rclone sync is currently supported -- support for copy and move may be
added in the future.
Logger instruments the Sync routine with a status report for each file pair,
making it possible to output a list of the synced files, along with their
attributes and sigil categorization (match/differ/missing/etc.)
It is very customizable by passing in a custom LoggerFn, options, and
io.Writers to be written to. Possible uses include:
- allow sync to write path lists to a file, in the same format as rclone check
- allow sync to output a --dest-after file using the same format flags as lsf
- receive results as JSON when calling sync from an internal function
- predict the post-sync state of the destination
For usage examples, see bisync.WriteResults() or sync.SyncLoggerFn()
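A rough sketch of the shape of such a callback (the type name and signature here are illustrative, not the exact operations API): it receives one src/dst pair per file plus a sigil describing the comparison result, and writes it wherever the caller wants.

```
package sketch

import (
	"context"
	"encoding/json"
	"io"

	"github.com/rclone/rclone/fs"
)

// loggerFn is an illustrative per-file callback in the spirit of the Logger
// described above.
type loggerFn func(ctx context.Context, sigil rune, src, dst fs.DirEntry, err error)

// jsonLogger returns a loggerFn that writes one JSON object per file pair.
func jsonLogger(w io.Writer) loggerFn {
	enc := json.NewEncoder(w)
	return func(ctx context.Context, sigil rune, src, dst fs.DirEntry, err error) {
		rec := map[string]any{"sigil": string(sigil)}
		if src != nil {
			rec["src"] = src.Remote()
		}
		if dst != nil {
			rec["dst"] = dst.Remote()
		}
		if err != nil {
			rec["error"] = err.Error()
		}
		_ = enc.Encode(rec) // ignore write errors in this sketch
	}
}
```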
Before this change, --no-unicode-normalization and --ignore-case-sync
were respected for rclone check but not for rclone check --checkfile,
causing them to give different results.
This change adds support for --checkfile so that the behavior is consistent.
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).
Examples:
rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
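One way to picture what `max` does (illustrative, not lsf's exact code): trim the fractional-second digits of the reference layout to match the precision the remote reports.

```
package sketch

import "time"

// maxPrecisionFormat truncates the nanosecond reference layout to the
// precision of the remote, e.g. 1s precision gives "2006-01-02 15:04:05"
// and 1ms precision gives "2006-01-02 15:04:05.000".
func maxPrecisionFormat(precision time.Duration) string {
	const layout = "2006-01-02 15:04:05.000000000"
	if precision >= time.Second {
		return "2006-01-02 15:04:05"
	}
	// One fractional digit per decimal order of magnitude below one second.
	digits := 0
	for p := time.Second; p > precision && digits < 9; p /= 10 {
		digits++
	}
	return layout[:len("2006-01-02 15:04:05.")+digits]
}
```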
Before this change StatsInfo.ResetCounters() and stopAverageLoop()
(when called from time.AfterFunc) could race on StatsInfo.average.
This was because the deferred stopAverageLoop accessed
StatsInfo.average without locking.
For some reason this only ever happened on macOS. This caused the CI
to fail on macOS thus causing the macOS builds not to appear.
This commit fixes the problem with a bit of extra locking.
It also renames all StatsInfo methods that should be called without
the lock to start with an initial underscore as this is the convention
we use elsewhere.
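The convention looks roughly like this (a cut-down sketch, not the real StatsInfo):

```
package sketch

import "sync"

type statsInfo struct {
	mu sync.Mutex
	// counters and the rolling-average state live here
}

// ResetCounters takes the lock itself and then only calls helpers whose
// names start with an underscore, which assume the lock is already held.
func (s *statsInfo) ResetCounters() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s._stopAverageLoop()
	// reset the counters here, still under the lock
}

// _stopAverageLoop must only be called with s.mu held.
func (s *statsInfo) _stopAverageLoop() {
	// stop the rolling-average goroutine
}
```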
Fixes #7567
Before this change we were only counting moves as checks. This meant
that when using `rclone move` the `Transfers` stat did not count up
like it should do.
This change introduces a new primitive operations.MoveTransfers which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.
See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
Before this fix we were not counting transferred files nor transferred
bytes for server side moves/copies.
If the server side move/copy has been marked as a transfer and not a
checker then this accounts transferred files and transferred bytes.
The transferred bytes are not accounted to the network though so this
should not affect the network stats.
The following command will block for 60s (default) when the network is slow or unavailable:
```
rclone --contimeout 10s --low-level-retries 0 lsd dropbox:
```
This change will make it timeout after the expected 10s.
Signed-off-by: rkonfj <rkonfj@gmail.com>
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.
This caused the partial file to be written which would overwrite any
existing files.
This was fixed by making sure we Abort the transfer before Close-ing
it.
This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.
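The ordering the fix enforces looks like this (a sketch against a local interface mirroring the relevant part of fs.ChunkWriter):

```
package sketch

import (
	"context"
	"io"
)

// chunkWriter mirrors the part of the chunk writer interface that matters
// here: Abort must be called instead of Close when the upload fails.
type chunkWriter interface {
	WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (int64, error)
	Close(ctx context.Context) error
	Abort(ctx context.Context) error
}

// uploadChunks aborts on any error so no partial file is finalised, and only
// closes (finalises) the upload once every chunk has been written.
func uploadChunks(ctx context.Context, cw chunkWriter, chunks []io.ReadSeeker) (err error) {
	defer func() {
		if err != nil {
			_ = cw.Abort(ctx) // never finalise a failed upload
		}
	}()
	for n, chunk := range chunks {
		if _, err = cw.WriteChunk(ctx, n, chunk); err != nil {
			return err
		}
	}
	return cw.Close(ctx)
}
```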
This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).
Fixes #7071
For uploads which are coming from disk or going to disk or going to a
backend which doesn't need to seek except for retries this doesn't
buffer the input.
This dramatically reduces rclone's memory usage.
Fixes #7350
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.
The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.
This became obvious when using the sftp backend which used too many
connections.
Fixes #5824
After the copy refactor:
179f978f75 operations: refactor Copy into methods on an temporary object
There was some confusion in the code about server side copies - should
they or shouldn't they use partials?
This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.
This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output DumpMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CutoffMode, LogLevel, TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. They still accept integer inputs though so
the fallout from this should be minimal.
Before this change backend types were printed incorrectly as the name
of the type, not what was defined by the Type() method.
This was because the Type() method was not being called. However it
needed to be defined on a non-pointer type due to the way the options
are handled.
Before this change, the maximum number of connections was set to 10.
This meant that b2 could deadlock while uploading multipart uploads
due to a lock being held longer than it should have been.
Before this change the concurrency used for an upload was rather
inconsistent.
- if size below `--backend-upload-cutoff` (default 200M) do single part upload.
- if size below `--multi-thread-cutoff` (default 256M) or using streaming
uploads (eg `rclone rcat`) do multipart upload using
`--backend-upload-concurrency` to set the concurrency used by the uploader.
- otherwise do multipart upload using `--multi-thread-streams` to set the
concurrency.
This change makes the default for the concurrency used be the
`--backend-upload-concurrency`. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.
This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.
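In other words, the effective concurrency is just the larger of the two settings, with the backend's value as the floor; a small sketch of that rule:

```
package sketch

// uploadConcurrency picks the concurrency for a multipart/multi-thread
// transfer: the backend's upload concurrency by default, overridden by
// --multi-thread-streams only when that is larger.
func uploadConcurrency(backendUploadConcurrency, multiThreadStreams int) int {
	if multiThreadStreams > backendUploadConcurrency {
		return multiThreadStreams
	}
	return backendUploadConcurrency
}
```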
See: #7056
- fix docs and error messages for multithread
- use sync/errgroup built in concurrency limiting
- re-arrange multithread code
- don't continue multi-thread uploads if one part fails
Before this change, when using --cutoff-mode=soft and --max-duration
rclone deadlocked when the cutoff limit was reached.
This was because the sync objects Pipe became full and nothing was
emptying it because the cutoff was reached.
This changes the context for putting items into the pipe to be the one
that gets cancelled when the cutoff is reached.
See: https://forum.rclone.org/t/sync-command-hanging-using-cutoff-mode-soft-with-max-duration-time-flags/40866
Currently, the average transfer speed will stop calculating 1 minute
after the last queued transfer completes. This causes the average to
stop calculating when checking is slow and the transfer queue becomes
empty.
This change will require all checks to complete before stopping the
average speed calculation.
In this commit:
432d5d1e20 operations: fix overlapping check on case insensitive file systems
We introduced a test that makes no sense. This happens to pass without --fast-list and fail with it.
This removes the test.
Before this change we showed both server side moves and server side
copies as bytes transferred.
This made a nice, easy to use stats display, but also caused confusion
for users who saw unrealistic transfer times. It also caused a problem
with --max-transfer and chunker, which renames each chunk after
uploading, with those renames being counted as transferred bytes.
This patch instead accounts the server side move and copy statistics
as separate lines in the stats display which will only appear if
there are any server side moves / copies. This is also output in the
rc.
This gives users something to look at when transfers are running which
was the point of the original change but it now means that transfer
bytes represents data transfers through this rclone instance only.
Fixes #7183
This adds an additional parameter to the creation of each flag. This
specifies one or more flag groups. This **must** be set for global
flags and **must not** be set for local flags.
This causes flags.md to be built with sections to aid comprehension
and it causes the documentation pages for each command (and the
`--help`) to be built showing the flags groups as specified in the
`groups` annotation on the command.
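In cobra terms the per-command grouping ends up as an annotation on the command definition, along these lines (a sketch; the exact group names and the way rclone's cmd package wires this up may differ):

```
package sketch

import "github.com/spf13/cobra"

// A command declares which flag groups it uses via the "groups" annotation;
// the doc generator and --help use this to decide which global flag groups
// to show for the command.
var copyCommand = &cobra.Command{
	Use:   "copy source:path dest:path",
	Short: "Copy files from source to dest, skipping identical files.",
	Annotations: map[string]string{
		"groups": "Copy,Filter,Listing,Important",
	},
	RunE: func(cmd *cobra.Command, args []string) error {
		return nil // real command logic elided
	},
}
```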
See: https://forum.rclone.org/t/make-docs-for-mortals-not-only-rclone-gurus/39476/
Some changes to the test cases:
Because MiddlewareCORS returns early on an OPTIONS request, this
middleware should only be used once, in the NewServer function. Test
cases should pass the AllowOrigin config instead of adding this
middleware again.
A new test case was added to test a CORS preflight request with an
authenticator. A preflight request should always return 200 OK
regardless of authentication.
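A generic version of the behaviour those tests rely on, written against plain net/http rather than the actual lib/http middleware: the CORS middleware answers OPTIONS preflights itself, before any authenticator in the chain can reject them.

```
package sketch

import "net/http"

// corsMiddleware sets the CORS headers and short-circuits OPTIONS preflight
// requests with 200 OK, so authentication middleware further down the chain
// never sees them.
func corsMiddleware(allowOrigin string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if allowOrigin != "" {
			w.Header().Set("Access-Control-Allow-Origin", allowOrigin)
			w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
		}
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusOK) // preflight succeeds without auth
			return
		}
		next.ServeHTTP(w, r)
	})
}
```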
Co-authored-by: yuudi <yuudi@users.noreply.github.com>
Before this change, the overlapping check could erroneously give this
error on case insensitive file systems:
Failed to sync: destination and parameter to --backup-dir mustn't overlap
The code was fixed and re-worked to be simpler and more reliable.
See: https://forum.rclone.org/t/backup-dir-cannot-be-in-root-even-when-excluded/39844/
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.
This was because the partial isn't named .rclonelink so the local
backend saves it as a normal file and renaming it to .rclonelink
doesn't cause it to become a symlink.
This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.
This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.
This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.
Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
This introduces a new fs.Option flag, Sensitive and uses this along
with IsPassword to redact the info in the config file for support
purposes.
It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.
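For a backend's option definitions the change looks roughly like this (the option names below are invented for illustration):

```
package sketch

import "github.com/rclone/rclone/fs"

// Sensitive marks a value that should be redacted in config output produced
// for support purposes, while IsPassword additionally causes the value to be
// obscured when stored.
var exampleOptions = []fs.Option{{
	Name:      "access_token",
	Help:      "OAuth access token (example option, not a real backend's).",
	Sensitive: true,
}, {
	Name:       "pass",
	Help:       "Password (example option).",
	IsPassword: true,
}}
```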
Fixes #5209
The --progress flag overrides operations.SyncPrintf in order to do its
magic on stdout without interfering with other output.
Before this change the syncFprintf routine in operations (which is
used to print all output to stdout) was taking the
operations.StdoutMutex and the printProgress function in the
--progress routine was also attempting to take the same mutex causing
a deadlock.
This patch fixes the problem by moving the locking from the
syncFprintf function to SyncPrintf. It is then up to the function
overriding this to lock the StdoutMutex. This ensures the StdoutMutex
can never cause a deadlock.
Before this change, if using --fast-list on a directory with more than
a few thousand directories in it, DirTree.CheckParents became very slow,
taking up to 24 hours for a directory with 1,000,000 directories in it.
This is because it becomes an O(N²) operation as DirTree.Find has to
search each directory in a linear fashion as it is stored as a slice.
This patch fixes the problem by scanning the DirTree for directories
before starting the CheckParents process so it never has to call
DirTree.Find.
After the fix calling DirTree.CheckParents on a directory with
1,000,000 directories in it will take about 1 second.
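The shape of the fix (illustrative, not the actual DirTree code): build a set of the known directory paths up front, so checking each entry's parent becomes a map lookup instead of a linear DirTree.Find per entry.

```
package sketch

import "path"

// missingParents pre-computes the set of directory paths so that checking
// each entry's parent is O(1) rather than a linear search per entry.
func missingParents(dirs []string, entries []string) []string {
	dirSet := make(map[string]struct{}, len(dirs))
	for _, d := range dirs {
		dirSet[d] = struct{}{}
	}
	var missing []string
	for _, e := range entries {
		parent := path.Dir(e)
		if parent == "." {
			continue // the root needs no parent
		}
		if _, ok := dirSet[parent]; !ok {
			missing = append(missing, parent)
		}
	}
	return missing
}
```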
Anything which calls DirTree.Find can potentially have bad performance
so in the future we should redesign the DirTree to use a different
underlying datastructure or have an index.
https://forum.rclone.org/t/almost-24-hours-cpu-compute-time-during-sync-between-two-large-s3-buckets/39375/
When multi-thread downloading is enabled, rclone used
to send a write to disk after every read, resulting in a lot
of small writes to different locations of the file.
Depending on the underlying filesystem or device, it can be more
efficient to send bigger writes.
This commit
3567a47258 fs: make ConfigString properly reverse suffixed file systems
made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names with illegal characters in them, which couldn't be mounted by macOS.
This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.
Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
The SIGUSR2 signal handler for bandwidth limits currently only starts
if rclone is started at a time when a bandwidth limit applies. This
means that if rclone starts _outside_ such a time, i.e. with no
bandwidth limits, then enters a time where bandwidth limits do apply,
it will not be possible to use SIGUSR2 to toggle it.
This fixes that by always starting the signal handler, but only
toggling the limiter if there is a bandwidth limit configured.
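The always-on handler is essentially this pattern (a stripped-down sketch; SIGUSR2 only exists on Unix-like systems, so the real code lives behind a build constraint):

```
package sketch

import (
	"os"
	"os/signal"
	"syscall"
)

// startSignalHandler installs the SIGUSR2 handler unconditionally at startup.
// The toggle itself only does anything when a bandwidth limit is configured.
func startSignalHandler(haveBwLimit func() bool, toggleLimiter func()) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR2)
	go func() {
		for range ch {
			if haveBwLimit() {
				toggleLimiter()
			}
		}
	}()
}
```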
In 04aa6969a4 we updated the displayed speed to be a rolling
average in core/stats and the progress output but we didn't update the
Prometheus metrics.
This patch updates the Prometheus metrics too.
Fixes #7053
Before this change if doing a recursive directory listing with
`--files-from` if more than `--checkers` files errored (other than
file not found) then rclone would deadlock.
This fixes the problem by exiting on the first error.
Before this change partially uploaded files (when --inplace is not in
effect) would be left lying around in the file system if rclone was
killed in the middle of a transfer.
This adds an exit handler to remove the partial file, and unregisters
the handler once the file is complete.
Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.
This adds a new function operations.Open which should be called
instead of fs.Object.Open to open a reliable stream of data and
changes all call sites to use that.
This means `rclone check --download` and `rclone cat` will re-open
files on failures.
See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
Before this change we tested special errors for straight equality.
This works for all normal backends, but the union backend may return
wrapped errors which contain the special error types.
In particular if a pcloud backend was part of a union when attempting
to set modification times the fs.ErrorCantSetModTime return wasn't
understood because it was wrapped in a union.Error.
This fixes the problem by using errors.Is instead in all the
comparisons in operations.
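The change itself is the standard errors.Is pattern, shown here for the modification-time case:

```
package sketch

import (
	"errors"

	"github.com/rclone/rclone/fs"
)

// classifySetModTimeError recognises fs.ErrorCantSetModTime even when it is
// wrapped, for example inside a union.Error. The old code compared with ==
// and so missed the wrapped form.
func classifySetModTimeError(err error) string {
	switch {
	case err == nil:
		return "ok"
	case errors.Is(err, fs.ErrorCantSetModTime):
		return "backend cannot set modification times"
	default:
		return "some other error"
	}
}
```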
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
When copying to a backend which has the PartialUploads feature flag
set and can Move files the file is copied into a temporary name first.
Once the copy is complete, the file is renamed to the real
destination.
This prevents other processes from seeing partially downloaded copies
of files being downloaded and prevents overwriting the old file until
the new one is complete.
This also adds --inplace flag that can be used to disable the partial
file copy/rename feature.
See #3770
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.
This is set for the following backends
- local
- ftp
- sftp
See #3770
Before this patch, files or directories with unknown modtime would
appear as the current date.
When mounted some systems look at modification dates of directories to
see if they change and having them change whenever they drop out of
the directory cache is not optimal.
See #6986
Before this change we renamed file systems with overridden config with
{suffix}.
However this meant that ConfigString produced a value which wouldn't
re-create the file system.
This uses an internal hash to keep note of which config goes with
which {suffix} in order to remake the config properly.
When using `rclone cat` to print the contents of several files, the
user may want to inject some separator between the files, such as a
comma or a newline. This patch adds a `--separator` option to the `cat`
command to make that possible. The default value remains an empty
string, `""`, maintaining the prior behavior of `rclone cat`.
Closes #6968
Before this change we weren't outputting a debug log on the start of a
transfer for files which existed on the source but not in the
destination.
This was different to the single file copy routine.
If a file has two (or more) extensions and the second (or subsequent)
extension is recognised as a valid mime type, then the suffix will go
before that extension. So `file.tar.gz` would be backed up to
`file-2019-01-01.tar.gz` whereas `file.badextension.gz` would be
backed up to `file.badextension-2019-01-01.gz`
Fixes #6892
In this commit we accidentally removed the global --rc flags.
0df7466d2b cmd/rcd: Fix command docs to include command specific prefix (#6675)
This re-instates them.
Before this change using operations/stat with a remote pointing to a
dir with a trailing / would return a null output rather than the
correct info.
This was because the directory was not found with a trailing slash in
the directory listing.
Fixes #6817
Before this change if both --progress and --interactive were set then
the screen display could become muddled.
This change makes --progress and --interactive use the same lock so
while rclone is asking for interactive questions, the progress will be
paused.
Fixes #6755
This change addresses two issues with commands that re-used
flags from common packages:
1) cobra.Command definitions did not include the command specific
prefix in doc strings.
2) Command specific flag prefixes were added after generating
command doc strings.
The recent changes to remove race conditions from --max-delete have
made these tests fail on chunker with s3 because they do copy then
delete and the deletes are being counted in the --max-delete(-size)
counts.
If using rclone move and --check-first and --order-by then rclone uses
the transfer routine to delete files to ensure perfect ordering.
This will cause the transfer stats to have a larger than expected
number of items in it so we don't enable this by default.
Fixes #6033
There were some places (e.g. deleting files) where we were using
--transfers instead of --checkers to control the concurrency when
files weren't being transferred.
These have been updated to use --checkers.
Before this change, all types of checkers showed "checking" after the
file name despite the fact that not all of them were checking.
After this change, they can show
- checking
- deleting
- hashing
- importing
- listing
- merging
- moving
- renaming
See: https://forum.rclone.org/t/what-is-rclone-checking-during-a-purge/35931/
No need to report hours, minutes, and even seconds when the
ETA is several years, e.g. "292y24w3d23h47m16s". Now only
reports the 3 most significant units, sacrificing precision,
e.g. "292y24w3d", "24w3d23h", "3d23h47m", "23h47m16s".
Fixes #6381
Integer overflow would lead to ETA such as "-255y7w4h11m22s966ms",
as reported in #6381. Now the value will be clipped at the maximum
"292y24w3d23h47m16s", and it will be shown as infinity.
Previously it was limited to plain ASCII (0-9, A-Z, a-z).
Implemented by adding \p{L}\p{N} alongside the \w in the regex,
even though these overlap, it means we can be sure it is 100%
backwards compatible.
Fixes #6618
The BSD-style license that Go uses requires the license to be included
with the source distribution; so add it as LICENSE.wasmexec (to avoid
confusion with the other licenses in rclone) and note the location of
the license in wasm_exec.js itself.
* fs: add TerminalColorMode type
* fs: add new config(flags) for TerminalColorMode
* lib/terminal: use TerminalColorMode to determine how to handle colors
* Add documentation for '--terminal-color-mode'
* tree: remove obsolete --color replaced by global --color
This changes the default behaviour of tree. It now displays colors by
default instead of only displaying them when the flag -C/--color was
active. Old behaviour (no color) can be achieved by setting --color to
'never'.
Fixes: #6604
Before this change if we copied files of unknown size, then they lost
their metadata.
This was particularly noticeable using --s3-decompress.
This change adds metadata to Rcat and RcatSized and changes Copy to
pass the metadata in when it calls Rcat for an unknown sized input.
Fixes #6546
Solves link error while running rclone's wasm version. Go's `walltime1` function was renamed to `walltime`. This commit updates wasm_exec.js with the new name.
A very common mistake for new users of rclone is to use a remote name
without a colon. This can be on the command line or in the config when
setting up a crypt backend.
This change checks to see if the user uses a path which matches a
remote name and gives a NOTICE like this if they do
NOTICE: "remote" refers to a local folder, use "remote:" to refer to your remote or "./remote" to hide this warning
See: https://forum.rclone.org/t/sync-to-onedrive-personal-lands-file-in-localfilesystem-but-not-in-onedrive/32956
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We started using filters in the local backend so the user could short
circuit troublesome files/directories at a low level.
However this caused a number of integration tests to fail. This turned
out to be in backends wrapping the local backend. For example the
combine backend test failed because it changes the paths passed to the
local backend so they no longer match the paths in the current filter.
To fix this, a new feature flag `FilterAware` was added and the
UseFilter context flag is only passed to backends which support it. As
the wrapping backends don't support the flag, this fixes the problems
in the integration tests.
In future the wrapping backends could modify the active filters to
match the path modifications and then they could set the FilterAware
flag.
See #6376
Before this change we assumed that github.com/Unknwon/goconfig was
threadsafe as documented.
However it turns out it is not threadsafe and looking at the code it
appears that making it threadsafe might be quite hard.
So this change increases the lock coverage in configfile to cover the
goconfig uses also.
Fixes #6378
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size of the object to calculate the chunk
sizes.
This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.
This fix changes the calculator to take the size explicitly.
Before this fix, the parsing code gave an error like this
parsing "2022-08-02 07:00:00" as fs.Time failed: expected newline
This was due to the Scan call failing to read all the data.
This patch fixes that, and redoes the tests.
Before this change --compare-dest and --copy-dest would check to see
if the compare/copy object existed first, before seeing if the
destination object was present.
This is inefficient, because in most --copy-dest syncs the destination
will be present and the compare/copy object need never be tested.
--compare-dest syncs may also be speeded up if they are done to the
same directory repeatedly.
This fixes the problem by re-arranging the logic so if the transfer is
not needed then the compare/copy object is never tested.
See: https://forum.rclone.org/t/union-with-copy-dest-enabled-is-slower-than-expected/32172