Before this change options were read and set in their native format. This
meant, for example, nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.
This change enables these types to be set as strings with the same
syntax as used on the command line, so `"10s"` rather than
`10000000000`, or `"DEBUG"` rather than `8` for log level.
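As a sketch of the difference (illustrative code, not rclone's option
parser - the logLevels table here is hypothetical):

    package main

    import (
    	"fmt"
    	"time"
    )

    // logLevels is a hypothetical stand-in for rclone's enumerated
    // LogLevel type; the only value that matters here is DEBUG = 8.
    var logLevels = map[string]int{"INFO": 6, "DEBUG": 8}

    func main() {
    	// "10s" is far friendlier than the native value of 10000000000 ns.
    	d, err := time.ParseDuration("10s")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(int64(d))           // 10000000000
    	fmt.Println(logLevels["DEBUG"]) // 8, the native enumerated value
    }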
This change decreases the edge limiter burst size, which dramatically
increases the smoothness of the bandwidth limiting.
The core bandwidth limiter keeps its large burst so it isn't
affected by double rate limiting on the edge limiters.
See: #4395
See: https://forum.rclone.org/t/bwlimit-is-not-really-smooth/20947
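A minimal sketch of why the burst size matters, using
golang.org/x/time/rate as a stand-in for rclone's token bucket (all
constants illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	const rateBytes = 1 << 20 // 1 MiB/s limit

    	// A large burst releases up to 8 MiB instantly before pacing
    	// starts, which looks lumpy; a small burst forces evenly
    	// spaced chunks.
    	core := rate.NewLimiter(rate.Limit(rateBytes), 8<<20)  // big burst
    	edge := rate.NewLimiter(rate.Limit(rateBytes), 32<<10) // small burst: smooth

    	ctx := context.Background()
    	start := time.Now()
    	for i := 0; i < 32; i++ {
    		_ = core.WaitN(ctx, 32<<10) // rarely blocks: the burst absorbs it
    		_ = edge.WaitN(ctx, 32<<10) // paces each 32 KiB chunk evenly
    	}
    	// 32 * 32 KiB = 1 MiB at 1 MiB/s: about a second, spread smoothly.
    	fmt.Println(time.Since(start))
    }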
This change uses the bwlimit code to apply limits to the receive and
transmit data functions in the HTTP Transport.
This means that all HTTP transactions will have limiting applied -
this includes listings, for example.
For HTTP based transports this makes the limiting in Accounting
redundant and possibly counterproductive.
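A sketch of the mechanism, again using golang.org/x/time/rate in place
of the bwlimit internals (types and names here are illustrative):

    package bwhttp

    import (
    	"context"
    	"io"
    	"net/http"

    	"golang.org/x/time/rate"
    )

    // rateReader waits on a shared token bucket for every chunk read
    // from a response body, so the limit applies to the data stream.
    type rateReader struct {
    	rc      io.ReadCloser
    	limiter *rate.Limiter
    }

    func (r *rateReader) Read(p []byte) (int, error) {
    	n, err := r.rc.Read(p)
    	if n > 0 {
    		// Wait until n tokens are available (burst must be >= n).
    		_ = r.limiter.WaitN(context.Background(), n)
    	}
    	return n, err
    }

    func (r *rateReader) Close() error { return r.rc.Close() }

    // limitedTransport wraps every response body, so listings and other
    // metadata transactions are limited too, not just file transfers.
    // The transmit side would wrap req.Body in the same way.
    type limitedTransport struct {
    	base    http.RoundTripper
    	limiter *rate.Limiter
    }

    func (t *limitedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	resp, err := t.base.RoundTrip(req)
    	if err == nil && resp.Body != nil {
    		resp.Body = &rateReader{rc: resp.Body, limiter: t.limiter}
    	}
    	return resp, err
    }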
TestParseDuration relied on an elapsed time calculation which varied
with the system's local time. Fix the test by not relying on the
system time location, and make it more deterministic by injecting the
time in tests rather than using the system time.
Fixes #4529
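A sketch of the injection pattern, with a hypothetical parser standing
in for ParseDuration:

    package fs

    import (
    	"testing"
    	"time"
    )

    // parseDurationFromNow is a hypothetical variant of the parser that
    // takes the reference time as a parameter instead of calling
    // time.Now itself.
    func parseDurationFromNow(age string, now func() time.Time) (time.Duration, error) {
    	t, err := time.Parse("2006-01-02", age)
    	if err != nil {
    		return 0, err
    	}
    	return now().Sub(t), nil
    }

    func TestParseDurationFromNow(t *testing.T) {
    	// Pin both the clock and the location so the result is the
    	// same on any machine in any time zone.
    	fixed := time.Date(2020, 9, 5, 0, 0, 0, 0, time.UTC)
    	d, err := parseDurationFromNow("2020-09-01", func() time.Time { return fixed })
    	if err != nil {
    		t.Fatal(err)
    	}
    	if want := 4 * 24 * time.Hour; d != want {
    		t.Fatalf("got %v, want %v", d, want)
    	}
    }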
Before this change attempting to return an error from core/command
failed with a 500 error and a message about unmarshalable types.
This is because it was attempting to marshal the input parameters,
which had _response added to them - a value which contains an
unmarshalable field.
This was fixed by using the original parameters in the error response
rather than the one modified during the error handling.
This also adds end to end tests for the streaming facilities as used
in core/command.
Before this change calling core/command gave the error:
error: response object is required expecting *http.ResponseWriter value for key "_response" (was *http.response)
This was because the http.ResponseWriter is an interface not an object.
Removing the `*` fixes the problem.
This also indicates that this bit of code wasn't properly tested.
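The gist, as a hypothetical helper (not the actual rc code): the value
stored under "_response" is a concrete type satisfying the interface,
so the assertion must name the interface itself.

    package rc

    import (
    	"errors"
    	"fmt"
    	"net/http"
    )

    // getResponseWriter is an illustrative version of the check.
    // Asserting to *http.ResponseWriter (a pointer to an interface)
    // never matches the concrete *http.response stored in the map;
    // asserting to the interface type does.
    func getResponseWriter(in map[string]interface{}) (http.ResponseWriter, error) {
    	v, ok := in["_response"]
    	if !ok {
    		return nil, errors.New(`response object is required for key "_response"`)
    	}
    	w, ok := v.(http.ResponseWriter) // not v.(*http.ResponseWriter)
    	if !ok {
    		return nil, fmt.Errorf(`expecting http.ResponseWriter value for key "_response" (was %T)`, v)
    	}
    	return w, nil
    }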
The message now includes the flag name to help the user work out what
is happening, for example:
Invalid value for environment variable "RCLONE_VERSION" when setting default
for --version: strconv.ParseBool: parsing "yes": invalid syntax
This commit modifies the operations.hashSum function by adding an alternate code path, triggered by passing downloadFlag = true. When activated, rclone downloads files from the remote and hashes them locally. downloadFlag = false preserves the existing behavior of using the remote to retrieve the hash.
This commit modifies HashLister to support the new hashSum method and consolidates the roles of HashLister, HashListerBase64, Md5sum, and Sha1sum. The printing of hashes from the function defined in HashLister has been revised to work with --progress. There are light changes to operations.syncFprintf and cmd.startProgress for this.
The unit test operations_test.TestHashSums is modified to support this change and to test the download functionality.
The command functions hashsum, md5sum, sha1sum, and dbhashsum are modified to support this change. Download and output-file flags have been added. The output-file flag writes hashes to a file instead of stdout, avoiding the need to redirect stdout.
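A minimal sketch of the two code paths, with the signature simplified
and md5 standing in for the requested hash type:

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"strings"
    )

    // hashSum sketches the two code paths (names illustrative, signature
    // simplified from operations.hashSum): with download=false the remote
    // supplies the hash; with download=true the data is hashed locally.
    func hashSum(download bool, remoteHash string, open func() (io.ReadCloser, error)) (string, error) {
    	if !download {
    		return remoteHash, nil // existing behavior: use the remote's hash
    	}
    	in, err := open() // download from the remote
    	if err != nil {
    		return "", err
    	}
    	defer in.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, in); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	open := func() (io.ReadCloser, error) {
    		return io.NopCloser(strings.NewReader("hello")), nil
    	}
    	sum, _ := hashSum(true, "", open)
    	fmt.Println(sum) // 5d41402abc4b2a76b9719d911017c592
    }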
Before this fix setting an alias of `s3:bucket` then using `alias:..`
would use the current working directory!
This fix corrects the path parsing. This parsing is also used in
wrapping backends like crypt, chunker, union, etc.
It does not allow looking above the root of the alias, so `alias:..`
now lists `s3:bucket` as you might expect if you did `cd /` then
`ls ..`.
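A sketch of the clamping idea (simplified - the real parsing also has
to handle the wrapped remote's own path syntax):

    package alias

    import (
    	"path"
    	"strings"
    )

    // join resolves userPath against the alias root, clamping at the
    // root so ".." cannot climb above it - like running "cd /" and
    // then "ls ..".
    func join(root, userPath string) string {
    	p := path.Join("/", userPath) // "/.." cleans to "/": cannot escape
    	return root + strings.TrimSuffix(p, "/")
    }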
Before this change rclone would upload the whole of a multipart file
before receiving a message from dropbox that the path was too long.
This change hard-codes the 255 rune limit and checks it before
uploading any files.
Fixes #4805
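A sketch of the pre-upload check (the real one lives in the dropbox
backend):

    package dropbox

    import (
    	"fmt"
    	"strings"
    	"unicode/utf8"
    )

    // maxFileNameLength is the hard-coded Dropbox limit from this change.
    const maxFileNameLength = 255

    // checkPathLength verifies every path component fits in 255 runes,
    // counted as runes rather than bytes since the limit is on characters.
    func checkPathLength(name string) error {
    	for _, part := range strings.Split(name, "/") {
    		if utf8.RuneCountInString(part) > maxFileNameLength {
    			return fmt.Errorf("%q: file name too long (%d runes, limit %d)",
    				part, utf8.RuneCountInString(part), maxFileNameLength)
    		}
    	}
    	return nil
    }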
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
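In outline, the pattern looks something like this (a sketch of the
shape, not the exact rclone code):

    package fs

    import "context"

    // ConfigInfo stands in for rclone's global options struct.
    type ConfigInfo struct {
    	Transfers int
    	// ... many more options
    }

    var globalConfig = &ConfigInfo{Transfers: 4}

    type configKey struct{}

    // GetConfig returns the config stored in the context, falling back
    // to the global default when the context carries none.
    func GetConfig(ctx context.Context) *ConfigInfo {
    	if ctx != nil {
    		if ci, ok := ctx.Value(configKey{}).(*ConfigInfo); ok {
    			return ci
    		}
    	}
    	return globalConfig
    }

    // AddConfig copies the current config into a new context so the
    // caller can mutate it without affecting anyone else.
    func AddConfig(ctx context.Context) (context.Context, *ConfigInfo) {
    	ci := *GetConfig(ctx)
    	return context.WithValue(ctx, configKey{}, &ci), &ci
    }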
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
Fix the copy and move operations that broke in 127f0fc when copying directly
to a remote without a specific destination.
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
Before this change we counted the final summary error as an error,
producing confusing log messages like:
Failed to check with 54 errors: last error was: 53 differences found
This change marks the summary error as already being counted, so the
error message becomes:
Failed to check with 53 errors: last error was: 53 differences found
This change also returns a listing failure in preference to a summary error.
See: https://forum.rclone.org/t/slow-checksum-validation/19763/22
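A sketch of the marking trick with hypothetical names (not rclone's
actual error counting API):

    package accounting

    import "errors"

    // alreadyCounted marks an error that has been counted once (or, for
    // a summary error, should not be counted at all).
    type alreadyCounted struct{ error }

    func (e alreadyCounted) Unwrap() error { return e.error }

    var errorCount int

    // countError bumps the counter unless the error is already marked.
    func countError(err error) error {
    	var ac alreadyCounted
    	if errors.As(err, &ac) {
    		return err // summary error: don't count it again
    	}
    	errorCount++
    	return alreadyCounted{err}
    }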
Before this change
RCLONE_DRIVE_CHUNK_SIZE=111M rclone help flags | grep drive-chunk-size
would show the default value, not the setting of RCLONE_DRIVE_CHUNK_SIZE,
as the non-backend flags do.
This change makes it work as expected by setting the default of the
option to the environment variable.
Fixes#4659
Adds a flag, --progress-terminal-title, that, when used with
--progress, prints the string `ETA: %s` to the terminal title.
This also adds WriteTerminalTitle to lib/terminal
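The mechanism is the standard xterm OSC escape sequence; a minimal
sketch of what such a helper does:

    package terminal

    import "fmt"

    // WriteTerminalTitle sets the terminal window title using the xterm
    // escape sequence ESC ] 0 ; <title> BEL. Sketch - a real helper
    // also has to cope with non-ANSI terminals such as the Windows
    // console.
    func WriteTerminalTitle(title string) {
    	fmt.Printf("\033]0;%s\007", title)
    }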
Before this change rclone would emit the message
--checksum is in use but the source and destination have no hashes in common; falling back to --size-only
when the source or destination hash was missing, as well as when the
source and destination had no hashes in common.
The first case is very confusing for users when the source and
destination do have a hash in common.
This change fixes that and makes sure the error message is not emitted
for missing hashes, even when there is a hash in common.
See: https://forum.rclone.org/t/source-and-destination-have-no-hashes-in-common-for-unencrypted-drive-to-local-sync/19531
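A sketch of the corrected condition, with hash sets simplified to
string slices:

    package operations

    // shouldWarnNoHashInCommon returns true only when both sides report
    // hashes yet share none; if either side simply has no hashes the
    // fallback to --size-only happens silently. Sketch - the real code
    // works on rclone's hash sets.
    func shouldWarnNoHashInCommon(src, dst []string) bool {
    	if len(src) == 0 || len(dst) == 0 {
    		return false // a missing hash is not "no hashes in common"
    	}
    	for _, s := range src {
    		for _, d := range dst {
    			if s == d {
    				return false // common hash exists: no warning needed
    			}
    		}
    	}
    	return true
    }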
Before this change we sorted transfers in the stats list solely on
time started. However if --check-first was in use then lots of
transfers could be started in the same millisecond. Because Windows
time resolution is only 1 ms this caused the entries to sort equal and
bounce around in the list.
This change fixes the sort so that if the time is equal it uses the
name which should stabilize the order.
Fixes #4599
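The fix amounts to a secondary sort key; a sketch:

    package accounting

    import (
    	"sort"
    	"time"
    )

    type transfer struct {
    	name      string
    	startedAt time.Time
    }

    // sortTransfers orders by start time, falling back to name when two
    // transfers started in the same clock tick (1 ms resolution on
    // Windows), so the displayed list no longer bounces around.
    func sortTransfers(ts []*transfer) {
    	sort.Slice(ts, func(i, j int) bool {
    		if ts[i].startedAt.Equal(ts[j].startedAt) {
    			return ts[i].name < ts[j].name
    		}
    		return ts[i].startedAt.Before(ts[j].startedAt)
    	})
    }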
Before this change the code which summed up the existing transfers
over all the stats groups forgot to add the old transfer time and old
transfers in.
This meant that the speed and elapsedTime got increasingly inaccurate
over time due to the transfers being culled from the list but their
time not being accounted for.
This change adds the old transfers into the sum which fixes the
problem.
This was only a problem over the rc.
Fixes #4569
Before this fix --cutoff-mode soft and cautious would emit a Fatal
error which stopped the sync immediately.
This fix introduces a new error which is checked in the sync error
processing which stops the sync gracefully.
Fixes #4576
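A sketch of the pattern with a hypothetical sentinel error:

    package sync

    import "errors"

    // errMaxTransferGraceful is a hypothetical sentinel standing in for
    // the new error introduced by this fix.
    var errMaxTransferGraceful = errors.New("max transfer limit reached, stopping gracefully")

    // processError sketches the sync error handling: a graceful cutoff
    // stops scheduling new transfers instead of aborting the process
    // the way a Fatal error does.
    func processError(err error, stopScheduling func()) error {
    	if errors.Is(err, errMaxTransferGraceful) {
    		stopScheduling() // let in-flight transfers finish
    		return nil       // the sync winds down gracefully
    	}
    	return err
    }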
This patch adds support for synchronous cache space recovery, allowing
read threads to recover from ENOSPC errors when cache space can be
recovered from cache items that are not in use or are safe to be
reset/emptied.
The patch complements the existing cache cleaning process in two ways.
Firstly, the existing cache cleaning process is time-driven and runs
periodically, so the cache space can run out while the cache cleaner
thread is still waiting for its next scheduled run. In this case the
IO threads encountering ENOSPC return an internal error to the
applications, even when cache space could be recovered to avoid the
error. This patch addresses the problem by having the read threads
kick the cache cleaner thread in this condition, recovering cache
space and preventing unnecessary ENOSPC errors from being seen by the
applications.
Secondly, this patch enhances the cache cleaner to support cache
item reset. Currently the cache purge process removes cache
items that are not in use. This may not be sufficient when the
total size of the working set exceeds the cache directory's
capacity. As in the current code, this patch starts the purge
process by removing cache files that are not in use. Cache items
whose access times are older than vfs-cache-max-age are removed first.
After that, other not-in-use items are removed in LRU order until
vfs-cache-max-size is reached. If the cache is still over
vfs-cache-max-size (the quota) at this point, this patch adds a cache
reset step to reset/empty cache files that are still in use but not
dirty. This enables application processes to continue without
seeing an error even when the working set depletes the cache space,
as long as there is not a large write working set hoarding the
entire cache space.
By design this patch does not add ENOSPC error recovery for write
IOs. Rclone does not empty a write cache item until the file data
is written back to the backend on close. Allowing more cache
space to be consumed by dirty cache items when the cache space is
already running low would increase the risk of exhausting the cache
space to the point that the vfs mount becomes unreadable.
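In outline, the kick mechanism might look like this (illustrative
names, purge steps stubbed; the real patch also lets the read thread
wait for the clean to complete so the recovery is synchronous):

    package vfscache

    import "time"

    type cache struct {
    	kick chan struct{} // buffered: at most one pending kick
    }

    func newCache() *cache {
    	c := &cache{kick: make(chan struct{}, 1)}
    	go c.cleaner()
    	return c
    }

    // KickCleaner is called by a read thread that hit ENOSPC, waking
    // the cleaner immediately instead of waiting for the next timer.
    func (c *cache) KickCleaner() {
    	select {
    	case c.kick <- struct{}{}:
    	default: // a kick is already queued
    	}
    }

    func (c *cache) cleaner() {
    	ticker := time.NewTicker(time.Minute) // the existing periodic clean
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ticker.C:
    		case <-c.kick:
    		}
    		c.purgeOverAge()    // 1. drop not-in-use items older than --vfs-cache-max-age
    		c.purgeOverQuota()  // 2. drop not-in-use items in LRU order to --vfs-cache-max-size
    		c.resetCleanItems() // 3. still over quota? reset in-use but not dirty items
    	}
    }

    // Stubs for the three cleaning phases described above.
    func (c *cache) purgeOverAge()    {}
    func (c *cache) purgeOverQuota()  {}
    func (c *cache) resetCleanItems() {}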
The deadlock was caused in transfermap.go by calling mu.RLock() in one
function and then calling it again in a sub function. Normally this is
fine; however, it leaves a window where mu.Lock() can be called. When
mu.Lock() is called it doesn't allow the second mu.RLock() and
deadlocks:
Thread 1                     Thread 2
String():mu.RLock()
                             del():mu.Lock()
sortedSlice():mu.RLock() - DEADLOCK
Lesson learnt: don't try using locks recursively ever!
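A minimal reproduction of the pattern described above:

    package transfermap

    import "sync"

    var mu sync.RWMutex

    // String takes the read lock then calls a helper which takes it again.
    func String() string {
    	mu.RLock()
    	defer mu.RUnlock()
    	// Window: if another goroutine calls mu.Lock() here, it queues
    	// behind our RLock - and sync.RWMutex then blocks any new RLock
    	// to avoid starving the writer...
    	return sortedSlice()
    }

    func sortedSlice() string {
    	mu.RLock() // ...so this recursive RLock waits on the writer: deadlock
    	defer mu.RUnlock()
    	return ""
    }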
This patch fixes the problem by removing the second mu.RLock(). This
was done by factoring the code that was calling it into the
transfermap.go file so all the locking can be seen at once. That
distance was ultimately the cause of the problem - the code which used
the locks was too far away from the rest of the code using the lock.
This problem was introduced in:
bfa5715017 fs/accounting: sort transfers by start time
which hasn't been released in a stable version yet.
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
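In sketch form, the extended optional interface:

    package fs

    import "context"

    // Purger is the optional interface with the new directory
    // parameter, so a backend can delete dir and all its contents in
    // one operation.
    type Purger interface {
    	// Purge all files in the directory specified
    	Purge(ctx context.Context, dir string) error
    }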
This is preparation for getting the Accounting to check the context,
but first we need to get it in place. Since this is one of those
changes that makes lots of noise, it is in a separate commit.
go1.15 introduced a stricter policy for what you can convert with
`string()` and now `go vet` warns if you try to do `string(int)`.
See: https://github.com/golang/go/issues/32479
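For example:

    package main

    import (
    	"fmt"
    	"strconv"
    )

    func main() {
    	i := 65
    	// fmt.Println(string(i)) // go vet (go1.15+): conversion from int to string yields a rune
    	fmt.Println(string(rune(i))) // "A" - convert a code point explicitly
    	fmt.Println(strconv.Itoa(i)) // "65" - what is usually intended
    }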
If the parameter being passed is an object then it can be passed as a
JSON string rather than using the `--json` flag which simplifies the
command line.
rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
Rather than
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
Before this change, if the cache was given a source `remote:file` it
stored `remote:` with the error `fs.ErrorIsFile` attached. This meant
that if `remote:` was subsequently looked up it would return the
`fs.ErrorIsFile` error.
This broke `moveto remote:file remote:file2` as moveto would lookup
`remote:` from the second argument and erroneously get the
`fs.ErrorIsFile` error.
This likely broke other commands too.
This was broken in
4c9836035 fs/cache: Add Pin and Unpin and canonicalised lookup
which was released in v1.52.0.
The fix is to make a new cache entry for `remote:` with no error
attached in the case that the original call returned `fs.ErrorIsFile`.
This adds the missing WSAECONNREFUSED error to the list of errors we
can retry under Windows.
> Connection refused. No connection could be made because the target
> computer actively refused it.
It also adds any relevant errors I could see in the error code list.
See: https://forum.rclone.org/t/failing-to-upload-large-file-to-b2/17085
This was caused by using the stats group from the context passed in by the rcd
rather than the global stats group.
Signed-off-by: Gary Kim <gary@garykim.dev>
Before this change `--track-renames-strategy` was broken. The hashing
method it used could declare times that were very close together to be
different.
The time hash was discarded and instead we check the modification time
window on every hash match.
Provided that the user doesn't use `--track-renames-strategy` on a
huge number of identically sized files this will perform just fine.
See: https://forum.rclone.org/t/track-renames-strategy-modtime-doesnt-work/16992/5
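A sketch of the check now done on every hash match (window handling
simplified):

    package sync

    import "time"

    // sameModTimeWithinWindow reports whether two modification times
    // are equal to within the modify window - the coarsest timestamp
    // precision of the two remotes involved - instead of hashing the
    // times, which misclassified near-identical values as different.
    func sameModTimeWithinWindow(a, b time.Time, window time.Duration) bool {
    	d := a.Sub(b)
    	if d < 0 {
    		d = -d
    	}
    	return d < window
    }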