Backends for which additional config is detected (in the config string,
on the command line, or as environment variables) will gain a suffix
`{XXXX}` where `XXXX` is a base64 encoded MD5 hash of the config
string.
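A minimal sketch of how such a suffix can be derived (the exact base64
variant and truncation length rclone uses are not shown here):

    package main

    import (
        "crypto/md5"
        "encoding/base64"
        "fmt"
    )

    // configSuffix illustrates base64(md5(configString)). The
    // truncation length is illustrative, not rclone's exact choice.
    func configSuffix(config string) string {
        sum := md5.Sum([]byte(config))
        return "{" + base64.RawStdEncoding.EncodeToString(sum[:])[:4] + "}"
    }

    func main() {
        fmt.Println("drive" + configSuffix("shared_with_me=true"))
    }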
This fixes backend caching with config string remotes.
This much-requested feature now works properly:

    rclone copy -vv drive,shared_with_me:file.txt drive:
This adds AddOverrideGetter and GetOverride methods to configmap and
uses them in fs.ConfigMap.
This enables us to tell which values have been set and which are just
read from the config file or at their defaults.
This also deletes the unused AddGetters method in configmap.
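A hypothetical sketch of the idea (type and field names invented for
illustration): explicitly set values live in their own layer which is
consulted before the config file, so the two can be told apart:

    package configmap

    // layeredMap separates explicitly set values from config file values.
    type layeredMap struct {
        overrides map[string]string // set via flags, env vars, etc.
        file      map[string]string // read from the config file
    }

    // GetOverride only returns values which were explicitly set.
    func (m *layeredMap) GetOverride(key string) (string, bool) {
        v, ok := m.overrides[key]
        return v, ok
    }

    // Get falls back to the config file when there is no override.
    func (m *layeredMap) Get(key string) (string, bool) {
        if v, ok := m.overrides[key]; ok {
            return v, true
        }
        v, ok := m.file[key]
        return v, ok
    }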
This is implemented as a state machine parser so it can emit sensible
error messages.
Connection strings are not yet used elsewhere in rclone - see
subsequent commits.
An optional fuzzer is implemented for the Parse function.
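A much-simplified sketch of such a parser, assuming the
`backend,key=value,key2=value2:path` form (the real grammar also
handles quoting and escaping):

    package connstring

    import "fmt"

    type state int

    const (
        stateName  state = iota // reading the backend/remote name
        stateKey                // reading a parameter name
        stateValue              // reading a parameter value
    )

    // Parse splits "backend,key=value,key2=value2:path" into its parts.
    func Parse(s string) (name string, params map[string]string, path string, err error) {
        params = map[string]string{}
        st := stateName
        var key, cur string
        for i, r := range s {
            switch st {
            case stateName:
                switch r {
                case ',':
                    name, cur = cur, ""
                    st = stateKey
                case ':':
                    return cur, params, s[i+1:], nil
                default:
                    cur += string(r)
                }
            case stateKey:
                switch r {
                case '=':
                    key, cur = cur, ""
                    st = stateValue
                case ',', ':':
                    return "", nil, "", fmt.Errorf("expected '=' after parameter %q at position %d", cur, i)
                default:
                    cur += string(r)
                }
            case stateValue:
                switch r {
                case ',':
                    params[key] = cur
                    cur = ""
                    st = stateKey
                case ':':
                    params[key] = cur
                    return name, params, s[i+1:], nil
                default:
                    cur += string(r)
                }
            }
        }
        return "", nil, "", fmt.Errorf("connection string %q is missing ':'", s)
    }

Tracking the state and position makes it easy to report exactly which
character was unexpected, which is where the sensible error messages
come from.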
Before this change, if a config was altered via the rc, then when a new
backend was created from that config, any backend already running from
the old config in the cache would be used instead of creating a new
backend with the new config, leading to confusion.
This change flushes any backends based on a config from the fs cache
when that config is changed over the rc.
Before this change the config file needed to be explicitly reloaded.
This coupled the config file implementation with the backends
needlessly.
This change stats the config file to see if it needs to be reloaded on
every config file operation.
This allows us to remove calls to
- config.SaveConfig
- config.GetFresh
which makes the only interface needed to the config file the one
provided by configmap.Map when rclone is not being configured.
This also adds tests for configfile.
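The reload check might look something like this (names are
hypothetical, simplified from the real implementation):

    package configfile

    import (
        "os"
        "time"
    )

    // Storage is a stand-in for the config file backend.
    type Storage struct {
        path    string
        modTime time.Time // mod time at the last load
    }

    func (s *Storage) load() error {
        // read and parse s.path (omitted)
        return nil
    }

    // checkReload stats the config file and reloads it only if its
    // modification time has changed since the last load.
    func (s *Storage) checkReload() error {
        fi, err := os.Stat(s.path)
        if err != nil {
            return err
        }
        if fi.ModTime().Equal(s.modTime) {
            return nil // unchanged since last load
        }
        s.modTime = fi.ModTime()
        return s.load()
    }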
This change checks the context whenever rclone might retry, and
doesn't retry if the current context has an error.
This fixes the pathological behaviour of `--max-duration` refusing to
exit because all the context deadline exceeded errors were being
retried.
This unfortunately meant changing the shouldRetry logic in every
backend and doing a lot of context propagation.
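The pattern applied across the backends looks roughly like this (a
sketch, not the exact rclone code):

    package backend

    import "context"

    // isRetriableError stands in for each backend's own error check.
    func isRetriableError(err error) bool { return err != nil }

    // shouldRetry consults the context first, so a cancelled context
    // or an exceeded --max-duration deadline stops the retry loop.
    func shouldRetry(ctx context.Context, err error) (bool, error) {
        if ctxErr := ctx.Err(); ctxErr != nil {
            return false, ctxErr // don't retry: context is done
        }
        return isRetriableError(err), err
    }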
See: https://forum.rclone.org/t/add-flag-to-exit-immediately-when-max-duration-reached/22723
This change makes dedupe recursively count elements in same-named directories
and make the largest one primary. This minimizes the amount of data
moved (or at least the number of API calls) when dedupe merges them.
It also adds a new fs.Object interface `ParentIDer` with function `ParentID` and
implements it for the drive and opendrive backends. This function returns
the parent directory ID for objects on filesystems that allow same-named
directories.
We use it to correctly count sizes of same-named directories.
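For reference, the new optional interface looks like this (a sketch
based on the description above; the return type is assumed):

    package fs

    // ParentIDer is an optional interface for Objects on backends
    // that allow duplicate directory names.
    type ParentIDer interface {
        // ParentID returns the ID of the parent directory, if known.
        ParentID() string
    }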
Fixes #2568
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
This splits config.go into ui.go for the user interface functions and
authorize.go for the implementation of `rclone authorize`.
It also moves the tests into the correct places (including one from
obscure which was in the wrong place).
If you are using rclone as a library you can decide to use the rclone
config file system or not by calling
    configfile.LoadConfig(ctx)
If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.
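A minimal library setup using the config file system might look like:

    package main

    import (
        "context"

        "github.com/rclone/rclone/fs/config/configfile"
    )

    func main() {
        ctx := context.Background()
        // Use rclone's config file system; otherwise set config.Data
        // to your own implementation of config.Storage instead.
        configfile.LoadConfig(ctx)
    }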
Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with a helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
Before this change the core bandwidth limit was limited to the upload
or download value if the other value was off.
This fix only applies a core bandwidth limit when both values are set.
Reapply missing bwlimiting which was inserted in
0a932dc1f2 Add --bwlimit for upload and download #1873
but accidentally removed when merging
edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services
Rclone uses directory exclusions to cut down the listing it has to do,
so before this fix `--exclude dir/` would make sure nothing in `dir/`
was scanned, **except** if `--fast-list` was used, in which case only
the directory was excluded and everything within it was included.
This is rather unexpected, so this patch makes `--exclude dir/` be
equivalent to `--exclude dir/**`, meaning that excluding a directory
excludes it and its contents.
We can't do the same for --include without changing the semantics of
filtering slightly.
Fixes #3375
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.
This change enables these types to be set with a string with the
syntax as used in the command line instead, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
This change decreases the edge limiter burst size which dramatically
increases the smoothness of the bandwidth limiting.
The core bandwidth limiter remains with a large burst so it isn't
affected by double rate limiting on the edge limiters.
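Illustrated with golang.org/x/time/rate (rclone has its own token
bucket code; the numbers here are only examples):

    package fshttp

    import "golang.org/x/time/rate"

    const bytesPerSec = 1 << 20 // 1 MiB/s

    // The core limiter keeps a large burst so it never double-limits;
    // the per-connection edge limiters get a small burst for smoothness.
    var (
        core = rate.NewLimiter(rate.Limit(bytesPerSec), bytesPerSec)
        edge = rate.NewLimiter(rate.Limit(bytesPerSec), 32*1024)
    )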
See: #4395
See: https://forum.rclone.org/t/bwlimit-is-not-really-smooth/20947
This change uses the bwlimit code to apply limits to the receive and
transmit data functions in the HTTP Transport.
This means that all HTTP transactions will have limiting applied -
this includes listings for example.
For HTTP based transports this makes the limiting in Accounting
redundant and possibly counterproductive.
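One way to do this is to wrap the connection's reader (and,
analogously, its writer) so each Read waits on the token bucket; a
sketch using golang.org/x/time/rate, assuming reads never exceed the
bucket's burst:

    package fshttp

    import (
        "context"
        "io"

        "golang.org/x/time/rate"
    )

    // limitedReader throttles the bytes read from an HTTP connection.
    type limitedReader struct {
        r   io.Reader
        lim *rate.Limiter
    }

    func (l *limitedReader) Read(p []byte) (int, error) {
        n, err := l.r.Read(p)
        if n > 0 {
            // Block until the token bucket allows n more bytes.
            if werr := l.lim.WaitN(context.Background(), n); werr != nil {
                return n, werr
            }
        }
        return n, err
    }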
TestParseDuration relied on an elapsed time calculation which
would vary based on the system local time. Fix the test by not relying
on the system time location. Also make the test more deterministic
by injecting time in tests rather than using system time.
Fixes #4529.
Before this change attempting to return an error from core/command
failed with a 500 error and a message about unmarshalable types.
This is because it was attempting to marshal the input parameters,
which have _response added to them, and that contains an unmarshalable
field.
This was fixed by using the original parameters in the error response
rather than the one modified during the error handling.
This also adds end to end tests for the streaming facilities as used
in core/command.
Before this change calling core/command gave the error:

    error: response object is required expecting *http.ResponseWriter value for key "_response" (was *http.response)
This was because the http.ResponseWriter is an interface not an object.
Removing the `*` fixes the problem.
This also indicates that this bit of code wasn't properly tested.
The message now includes the flag name to help the user work out what
is happening:

    Invalid value for environment variable "RCLONE_VERSION" when setting default
    for --version: strconv.ParseBool: parsing "yes": invalid syntax
This commit modifies the operations.hashSum function by adding an
alternate code path which is triggered by passing downloadFlag = true.
When activated, rclone downloads files from the remote and hashes them
locally. downloadFlag = false preserves the existing behavior of using
the remote to retrieve the hash.
This commit modifies HashLister to support the new hashSum method and
consolidates the roles of HashLister, HashListerBase64, Md5sum, and
Sha1sum. The printing of hashes from the function defined in HashLister
has been revised to work with --progress, with light changes to
operations.syncFprintf and cmd.startProgress.
The unit test operations_test.TestHashSums is modified to support this
change and to test the download functionality.
The command functions hashsum, md5sum, sha1sum, and dbhashsum are
modified to support this change. A download flag and an output-file
flag have been added; the output-file flag writes hashes to a file
instead of stdout, avoiding the need to redirect stdout.
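The two code paths, much simplified (MD5 stands in for whichever hash
type was requested; the real operations.hashSum handles hash options
and errors differently):

    package operations

    import (
        "context"
        "crypto/md5"
        "encoding/hex"
        "io"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/hash"
    )

    // hashSumSketch asks the remote for its stored hash, or, with the
    // download flag, fetches the file and hashes it locally.
    func hashSumSketch(ctx context.Context, download bool, o fs.Object) (string, error) {
        if !download {
            return o.Hash(ctx, hash.MD5) // hash as reported by the remote
        }
        in, err := o.Open(ctx)
        if err != nil {
            return "", err
        }
        defer in.Close()
        h := md5.New()
        if _, err := io.Copy(h, in); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }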
Before this fix setting an alias of `s3:bucket` then using `alias:..`
would use the current working directory!
This fix corrects the path parsing. This parsing is also used in
wrapping backends like crypt, chunker, union etc.
It does not allow looking above the root of the alias, so `alias:..`
now lists `s3:bucket` as you might expect if you did `cd /` then
`ls ..`.
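A sketch of the idea (the join helper here is hypothetical, not the
actual rclone code): clean the path as if it were absolute so `..`
cannot climb above the alias root:

    package alias

    import (
        "path"
        "strings"
    )

    // join resolves p against the alias root. Cleaning the path as
    // if rooted at "/" means ".." at the top stays at the root.
    func join(root, p string) string {
        cleaned := path.Clean("/" + p) // "/.." cleans to "/"
        return root + strings.TrimSuffix(cleaned, "/")
    }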
Before this change rclone would upload the whole of a multipart file
before receiving the message from dropbox that the path was too long.
This change hard codes the 255 rune limit and checks it before
uploading any files.
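The check is conceptually something like (simplified sketch):

    package dropbox

    import (
        "fmt"
        "strings"
        "unicode/utf8"
    )

    // maxFileNameLength is the hard coded dropbox limit.
    const maxFileNameLength = 255

    // checkPathLength rejects any path segment longer than 255 runes
    // before an upload starts.
    func checkPathLength(in string) error {
        for _, part := range strings.Split(in, "/") {
            if utf8.RuneCountInString(part) > maxFileNameLength {
                return fmt.Errorf("%q exceeds the %d rune limit", part, maxFileNameLength)
            }
        }
        return nil
    }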
Fixes #4805
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
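Usage looks like this (a sketch of the API described above):

    package main

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    func main() {
        ctx := context.Background()

        // Read-only view of the config attached to the context.
        ci := fs.GetConfig(ctx)
        _ = ci

        // Fork a mutable copy attached to a new context.
        newCtx, ci2 := fs.AddConfig(ctx)
        ci2.LogLevel = fs.LogLevelDebug
        _ = newCtx
    }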
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
Fix the copy and move operations that broke in 127f0fc when copying directly
to a remote without a specific destination.
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
Before this change we counted the final summary error as an error,
producing confusing log messages like:

    Failed to check with 54 errors: last error was: 53 differences found

This change marks the summary error as already being counted, so the
error message becomes:

    Failed to check with 53 errors: last error was: 53 differences found
This change also returns a listing failure in preference to a summary error.
See: https://forum.rclone.org/t/slow-checksum-validation/19763/22