This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones:
--s3-memory-pool-flush-time
--s3-memory-pool-use-mmap
Using the buffered reader fixes excessive memory use when uploading
large files, as it shares memory pages between all readers.
Fixes #7141
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-use the same TLS connection as the control
connection. This is a good thing for security.
The message "TLS session of data connection not resumed" means that it
was not done.
The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections so the resumed TLS data
connection could come from any of the control connections.
This patch makes each TLS connection have its own session cache which
should fix the problem.
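As a rough sketch (illustrative names, not the ftp backend's actual
code), giving each connection its own cache looks like:

    package example

    import "crypto/tls"

    // perConnTLSConfig clones the shared TLS config but gives this control
    // connection its own session cache, so its data connection can only
    // resume this connection's TLS session.
    func perConnTLSConfig(base *tls.Config) *tls.Config {
        cfg := base.Clone()
        cfg.ClientSessionCache = tls.NewLRUClientSessionCache(32)
        return cfg
    }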
This also reverts the ftp library to the upstream version which now
contains all of our patches.
Fixes #7234
storj.io/uplink v1.11.0 comes with an improved logic for uploading large
files where file segments are uploaded concurrently instead of serially.
This makes it possible to fully utilize the network connection during
the entire upload process.
This change enables the new upload logic.
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.
Swift returns zero length for dynamic large object files when they're
included in a files lookup, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.
When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.
NB: It is worth noting that this will cause rclone to incorrectly
report zero length for any dynamic large objects encountered with the
--swift-no-large-objects flag set.
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath which effectively made it
unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path).
After this change, the old behavior remains the default, but dynamic
paths are now also supported, if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).
In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.
See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025
This allows using an external ssh binary instead of the built-in ssh
library for making SFTP connections.
This makes another integration test target, TestSFTPRcloneSSH.
Fixes #7012
Before this change we released the ssh connection back to the pool
before the upload was finished.
This meant that uploads were re-using the same ssh connection which
reduced throughput.
This releases the ssh connection back to the pool only after the
upload has finished, or if an error occurs.
See: https://forum.rclone.org/t/sftp-backend-opens-less-connection-than-expected/40245
smb2.File implements the WriterAtCloser interface defined in
fs/types.go. Expose it via an OpenWriterAt method on
the fs struct to support multi-threaded writes.
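The multi-threaded write pattern this enables looks roughly like the
following self-contained sketch, using os.File (which also implements
io.WriterAt) in place of the smb2.File the backend hands out:

    package main

    import (
        "io"
        "log"
        "os"
        "sync"
    )

    // writeChunks writes each chunk at its own offset concurrently - roughly
    // the pattern rclone's multi-thread copy uses with a WriterAtCloser.
    func writeChunks(w io.WriterAt, chunkSize int64, chunks ...[]byte) error {
        var wg sync.WaitGroup
        errs := make(chan error, len(chunks))
        for i, c := range chunks {
            wg.Add(1)
            go func(off int64, buf []byte) {
                defer wg.Done()
                if _, err := w.WriteAt(buf, off); err != nil {
                    errs <- err
                }
            }(int64(i)*chunkSize, c)
        }
        wg.Wait()
        close(errs)
        return <-errs // nil unless a goroutine reported an error
    }

    func main() {
        f, err := os.Create("out.bin")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := writeChunks(f, 6, []byte("hello "), []byte("world")); err != nil {
            log.Fatal(err)
        }
    }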
The error is:
Error: failed to configure token with jwt authentication: jwtutil: failed making auth request: 400 Bad Request
With the following additional debug information:
jwtutil: Response Body: {"error":"invalid_grant","error_description":"Please check the 'aud' claim. Should be a string"}
The problem is that in jwt-go the RegisteredClaims type has an Audience field (aud claim) that
is a list, while box apparently expects it to be a singular string. In jwt-go v4, which we
currently use, there is an alternative type StandardClaims which matches what box wants.
Unfortunately StandardClaims is marked as deprecated, and is removed in the
newer v5 version, so this is a short-term fix only.
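For illustration (placeholder values, not the box backend's actual
claims):

    package example

    import "github.com/golang-jwt/jwt/v4"

    func claims() {
        // RegisteredClaims marshals the "aud" claim as a JSON list, which box
        // rejects with "Please check the 'aud' claim. Should be a string".
        _ = jwt.RegisteredClaims{Audience: jwt.ClaimStrings{"https://example.com/token"}}

        // StandardClaims (deprecated, removed in jwt v5) marshals "aud" as a
        // plain string, which is what box expects.
        _ = jwt.StandardClaims{Audience: "https://example.com/token"}
    }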
Fixes #7114
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.
This was because the partial isn't named .rclonelink so the local
backend saves it as a normal file and renaming it to .rclonelink
doesn't cause it to become a symlink.
This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.
This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.
This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.
Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
Before this change if a directory entry could be listed but not
lstat-ed then rclone would give an error and abort the directory
listing with the error
failed to read directory entry: failed to read directory "XXX": lstat XXX
This change makes sure that the directory listing carries on even
after this kind of error.
The sync will still be marked as failed, but it will carry on.
This problem was caused by a programming error setting the err
variable in an outer scope when it should have been using a local err
variable.
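The class of bug looks roughly like this (illustrative, not the actual
listing code):

    package example

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func listDir(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            // The bug would be "_, err = os.Lstat(...)" - assigning to the outer
            // err aborts the whole listing; a local variable lets it carry on.
            if _, statErr := os.Lstat(filepath.Join(dir, e.Name())); statErr != nil {
                fmt.Fprintf(os.Stderr, "skipping %q: %v\n", e.Name(), statErr)
                continue
            }
        }
        return nil
    }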
See: https://forum.rclone.org/t/sync-aborts-if-even-one-single-unreadable-folder-is-encountered/39653
Before this change, if you mounted the root of the smb then it would
give an error on `rclone about` and periodically in the mount logs:
Statfs failed: bucket or container name is needed in remote
This fix makes the smb backend return empty usage in this case which
will stop the errors and show the default 1P of free space.
See: https://forum.rclone.org/t/error-statfs-failed-bucket-or-container-name-is-needed-in-remote/39631
This introduces a new fs.Option flag, Sensitive and uses this along
with IsPassword to redact the info in the config file for support
purposes.
It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.
Fixes #5209
Fix https://github.com/rclone/rclone/issues/7103
Before this change the RegExp validating the endpoint URL was a bit
too strict allowing only /dav/files/USER due to chunking limitations.
This patch adds back support for /dav/files/USER/dir/subdir etc.
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This reverts commit 9065e921c1.
It turns out the problem for the failing fs/sync tests was the
policies being different for search and create which meant that the
file was being created in one union branch but a different one was
found in another branch.
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory so is useless for seeing
if the directory is empty.
This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.
This problem was detected by the integration tests.
For some unknown reason the API sometimes returns the name already
exists on a server side copy.
    {
        "error_id": null,
        "error_message": "Name already exist",
        "error_type": "NAME_ALREADY_EXIST",
        "error_uri": "http://api.put.io/v2/docs",
        "extra": {},
        "status": "ERROR",
        "status_code": 400
    }
This patch uploads to a temporary name then renames it which works
around the problem.
This was spotted by the integration tests.
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.
This patch explicitly sets the modtime after the server side move or
copy.
In this commit we enabled PartialUploads for the union backend.
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:
upload failed: file size does not match (-2)
This was discovered by the integration tests.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name, and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the putio backend and this patch fixes the
problem.
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.
This was discovered in the integration tests.
This patch fixes the problem by deleting the duplicate object before
we look for the new object.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name, and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the Storj backend and this patch fixes the
problem.
Storj has a rate limit of 1 per second when uploading to the same
file.
This was being tripped by the integration tests.
This patch fixes it by detecting the error and sleeping for 1 second
before retrying.
See: https://github.com/storj/uplink/issues/149
Before this change a server side copy did not preserve the modtime.
This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.
This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` wasn't accepted.
This problem was discovered in the integration tests.
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
Fixes #7038
This commit
3567a47258 fs: make ConfigString properly reverse suffixed file systems
made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names containing illegal characters which couldn't be mounted by macOS.
This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.
Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
Zoho has started returning the results from Range: requests with a 200
response code rather than the technically correct 206 Partial Content.
Before this change this triggered workaround code to deal with Zoho
not obeying Range: requests properly.
This fix checks the response for a Content-Range: header and if
it exists assumes it is a valid reply to the Range: request despite
the status being 200.
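A sketch of the check (names illustrative):

    package example

    import "net/http"

    // rangeWasHonoured reports whether the server honoured our Range: request,
    // treating a 200 with a Content-Range: header the same as a proper 206.
    func rangeWasHonoured(resp *http.Response) bool {
        if resp.StatusCode == http.StatusPartialContent {
            return true
        }
        return resp.StatusCode == http.StatusOK && resp.Header.Get("Content-Range") != ""
    }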
This problem was spotted by the integration tests.
Before this fix, if the upload failed for some reason the yandex
backend would attempt to retry it itself, which would fail immediately
with 400 Bad Request.
Normally we retry uploads at a higher level so they can be done with
new data and this patch does that.
See #7044
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.
This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.
It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.
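A minimal sketch of the shape of the fix (not the union backend's
actual Errors type):

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    // Errors is a multi-error as collected from the union's upstreams.
    type Errors []error

    func (e Errors) Error() string { return fmt.Sprintf("%d errors", len(e)) }

    // Unwrap exposes the wrapped errors so errors.Is/As work (go1.20+).
    func (e Errors) Unwrap() []error { return e }

    func main() {
        err := error(Errors{io.EOF, io.ErrUnexpectedEOF})
        fmt.Println(errors.Is(err, io.EOF)) // true
    }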
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.
Now we check the validity of a media link using the `fid`
parameter in the link URL.
Fixes #6992
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.
Now the machinery is in place to use the NoMultiThreading feature flag
instead.
Fixes #6915
Some servers return a 501 error when using MLST on a non-existing
directory. This patch allows it.
I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.
This fixes the problem in the upstream fork.
Fixes #6879
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.
This is set for the following backends:
- local
- ftp
- sftp
See #3770
Before this change rclone used a normal SFTP rename if present to
implement Move.
However the normal SFTP rename won't overwrite existing files.
This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the destination first before
renaming using the normal SFTP rename.
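With the github.com/pkg/sftp client the logic is roughly (a sketch,
error handling trimmed):

    package example

    import "github.com/pkg/sftp"

    // moveFile overwrites dst: try the POSIX rename extension, which
    // overwrites atomically; if the server lacks it, remove dst and fall
    // back to a plain SFTP rename.
    func moveFile(c *sftp.Client, src, dst string) error {
        if err := c.PosixRename(src, dst); err == nil {
            return nil
        }
        _ = c.Remove(dst) // ignore "not found" - we just need dst out of the way
        return c.Rename(src, dst)
    }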
This isn't normally a problem as rclone always removes any existing
objects first, however to implement non --inplace operations we do
require overwriting an existing file.
This also produces a warning when rclone detects files have been
blocked because of virus content
server reports this file is infected with a virus - use --onedrive-av-override to download anyway
Fixes #557
Before this change, when Object.Update was called in the drive
backend, it overwrote the remote with that of the object info.
This is incorrect - the remote doesn't change on Update and this patch
fixes that and introduces a new test to make sure it is correct for
all backends.
This was noticed when doing Update of objects in a nested combine
backend.
See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.
This made it impossible to filter on tier using the metadata filters.
This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.
See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
Before this change, drive would mistakenly identify a folder with a
trailing slash as a file when passed to NewObject.
This was picked up by the integration tests.
In an earlier patch
d5afcf9e34 crypt: try not to return "unexpected EOF" error
That introduced a bug for 0 length files, fixed here, which only
manifests if the io.Reader returns data and EOF together, which not
all readers do.
This was failing in the integration tests.
Before this change using /path/to/file.rclonelink would not find the
file when using -l/--links.
This fixes the problem by doing another stat call if the file wasn't
found without the suffix if -l/--links is in use.
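The extra lookup is roughly (illustrative, using os.Lstat directly):

    package example

    import (
        "errors"
        "io/fs"
        "os"
    )

    const linkSuffix = ".rclonelink"

    // lstatWithLinks stats path, and if -l/--links is in use and the plain
    // path is missing, tries again with the .rclonelink suffix added.
    func lstatWithLinks(path string, translateLinks bool) (os.FileInfo, error) {
        fi, err := os.Lstat(path)
        if translateLinks && errors.Is(err, fs.ErrNotExist) {
            fi, err = os.Lstat(path + linkSuffix)
        }
        return fi, err
    }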
It will also give an error if you refer to a symlink without its
suffix, which will not work because the single file filter will be
using the file name without the .rclonelink suffix:
need ".rclonelink" suffix to refer to symlink when using -l/--links
Before this change it would use the symlink as a directory which then
would fail when listed.
See: #6855
golang.org/x/oauth2/jws is deprecated: this package is not intended for public use and
might be removed in the future. It exists for internal use only. Please switch to another
JWS package or copy this package into your own source tree.
github.com/golang-jwt/jwt/v4 seems to be a good alternative, and was already
an implicit dependency.
This changes crypt's use of sync.Pool: instead of storing slices
it now stores pointers to fixed-size arrays.
This issue was reported by staticcheck:
SA6002 - Storing non-pointer values in sync.Pool allocates memory
A sync.Pool is used to avoid unnecessary allocations and reduce
the amount of work the garbage collector has to do.
When passing a value that is not a pointer to a function that accepts
an interface, the value needs to be placed on the heap, which means
an additional allocation. Slices are a common thing to put in sync.Pools,
and they're structs with 3 fields (length, capacity, and a pointer to
an array). In order to avoid the extra allocation, one should store
a pointer to the slice instead.
See: https://staticcheck.io/docs/checks#SA6002
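The remediation pattern looks like this (illustrative block size, not
crypt's exact code):

    package example

    import "sync"

    const blockSize = 64 * 1024

    // Storing *[blockSize]byte instead of []byte avoids the extra allocation
    // SA6002 warns about: a pointer goes into the interface without boxing.
    var blockPool = sync.Pool{
        New: func() any { return new([blockSize]byte) },
    }

    func withBlock(fn func(b []byte)) {
        buf := blockPool.Get().(*[blockSize]byte)
        defer blockPool.Put(buf)
        fn(buf[:]) // slice the array whenever a []byte is needed
    }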
This change provides the ability to pass `env_auth` as a parameter to
the drive provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
Before this change, change notify would pick up files which were
shared with us as well as files within the drive.
When using an encrypted mount this caused errors like:
ChangeNotify was unable to decrypt "Plain file name": illegal base32 data at input byte 5
The fix tells drive to restrict changes to the drive in use.
Fixes #6771
Before this change the code wasn't properly taking into account the
error io.ErrUnexpectedEOF that io.ReadFull can return. Sometimes
that error was being returned instead of a more specific and useful
error.
To fix this, io.ReadFull was replaced with the simpler
readers.ReadFill which is much easier to use correctly.
Apparently the abort multipart upload call doesn't return while
multipart uploads are in progress on iDrive e2.
This means that if we CTRL-C a multipart upload rclone hangs until
all the parts being uploaded have completed. However since rclone is uploading
multiple parts at once this doesn't happen until after the entire file
is uploaded.
This was fixed by cancelling the upload context which causes all the
uploads to stop instantly.
When using ssh-agent to hold multiple keys, it is common practice to configure
openssh to use a specific key by setting the corresponding public key as
the `IdentityFile`. This change makes a similar behavior possible in rclone
by having it parse the `key_file` config as the public key when
`key_use_agent` is `true`.
rclone already attempted this behavior before this change, but it assumed that
`key_file` is the private key and that the public key is specified in
`${key_file}.pub`. So for parity with the openssh behavior, this change makes
rclone first attempt to read the public key from `${key_file}.pub` as before
(for the sake of backward compatibility), then fall back to reading it from
`key_file`.
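A sketch of picking the matching agent key from a public key, using
golang.org/x/crypto/ssh (names illustrative):

    package example

    import (
        "bytes"
        "errors"

        "golang.org/x/crypto/ssh"
        "golang.org/x/crypto/ssh/agent"
    )

    // signerForPubKey returns the agent signer whose public key matches the
    // authorized_keys-format key read from key_file (or ${key_file}.pub).
    func signerForPubKey(ag agent.Agent, pubKeyBytes []byte) (ssh.Signer, error) {
        pub, _, _, _, err := ssh.ParseAuthorizedKey(pubKeyBytes)
        if err != nil {
            return nil, err
        }
        signers, err := ag.Signers()
        if err != nil {
            return nil, err
        }
        for _, s := range signers {
            if bytes.Equal(s.PublicKey().Marshal(), pub.Marshal()) {
                return s, nil
            }
        }
        return nil, errors.New("key not found in ssh-agent")
    }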
Fixes #6791
Before this change, we would upload files as single part uploads even
if the source MD5SUM was not available.
AWS won't let you upload a file to a locked bucket without some sort
of hash protection of the upload, which we don't have with no MD5SUM.
So we switch to multipart upload when the source does not have an
MD5SUM.
This means that if --s3-disable-checksum is set or we are copying from
a source with no MD5SUMs we will copy with multipart uploads.
This patch changes all uploads, not just those to locked buckets
because having no MD5SUM protection on uploads is undesirable.
Fixes #6846
This provider:
- supports the `X-OC-Mtime` header to set the mtime
- calculates SHA1 checksum server side and returns it as a `ME:sha1hex` prop
To differentiate the new hasMESHA1 quirk, the existing hasMD5 and hasSHA1
quirks for Owncloud have been renamed to hasOCMD5 and hasOCSHA1.
Fixes #6837
Sometimes vsftpd returns a 426 error when closing the stream even when
all the data has been transferred successfully. This is some TLS
protocol mismatch.
Rclone has code to deal with this already, but the error returned from
Close was wrapped in a multierror so the detection didn't work.
This properly extracts `textproto.Error` from the errors returned by
`github.com/jlaffaye/ftp` in all cases.
See: https://forum.rclone.org/t/vsftpd-vs-rclone-part-2/36774
This fixes the azureblob backend so it builds again after the SDK
changes.
This doesn't update bazil.org/fuse because it doesn't build on FreeBSD
https://github.com/bazil/fuse/issues/295
Before this fix, a dangling symlink was causing the sync to error. It was
writing an ERROR log and causing rclone to exit with an error. The
List method wasn't returning an error though.
This fix makes sure that we don't log or report a global error on a
file/directory that has been excluded.
This feature was first implemented in:
a61d219bc local: fix -L/--copy-links with filters missing directories
Then fixed in:
8d1fff9a8 local: obey file filters in listing to fix errors on excluded files
This commit also adds test cases for the failure modes of those commits.
See #6376
Before this change, if "--drive-stop-on-upload-limit" was set,
rclone would not stop the upload if a "storageQuotaExceeded" error occurred.
This fix now checks for the "storageQuotaExceeded" error
and "--drive-stop-on-upload-limit", and fails fast.
This change provides the ability to pass `env_auth` as a parameter to
the google cloud storage provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
Before this change the hash used for Onedrive Personal was SHA1. From
July 2023 Microsoft is phasing out SHA1 hashes in favour of
QuickXorHash in Onedrive Personal. Onedrive Business and Sharepoint
remain using QuickXorHash as before.
This choice can be changed using the --onedrive-hash-type flag (and
config option) so that SHA1 can be selected while it is still
available in the transition period.
See: https://forum.rclone.org/t/microsoft-is-switching-onedrive-personal-to-quickxorhash-from-sha1/36296/
Before this change if an --s3-profile was set which used AWS STS (eg
to assume a role) and --s3-endpoint was set then rclone would use the
value from --s3-endpoint to contact the STS server which did not work.
This fix implements an endpoint resolver which only overrides the "s3"
service if --s3-endpoint is set. It sends the "sts" service (and any
other service) to the default resolver.
Fixes #6443
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
Before this change, all types of checkers showed "checking" after the
file name despite the fact that not all of them were checking.
After this change, they can show
- checking
- deleting
- hashing
- importing
- listing
- merging
- moving
- renaming
See: https://forum.rclone.org/t/what-is-rclone-checking-during-a-purge/35931/
Before this change if --s3-no-head was in use rclone didn't check the
multipart upload ETag at all. However the ETag is returned in the
final POST request when completing the object.
This change uses that ETag from the final POST if --s3-no-head is in
use, otherwise it uses the ETag from a fresh HEAD request.
See: https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095/
This commit ports a fast C implementation from https://github.com/namazso/QuickXorHash
It uses new crypto/subtle code from go1.20 to avoid the use of unsafe.
Typical speedups are about 25x when using go1.20
goos: linux
goarch: amd64
cpu: Intel(R) Celeron(R) N5105 @ 2.00GHz
QuickXorHash-Before 2.49ms 422MB/s ±11% 100.00%
QuickXorHash-Subtle 87.9µs 11932MB/s ± 5% +2730.83% + 42.17%
Co-Author: @namazso
Uploading 100 files of 1 MB each took 20 seconds before. With the above
fix it now takes around 2 seconds.
This 10x improvement is in line with the pacer's sleep reduction from
100ms to 10ms.
Before this change when uploading files bigger than 1TiB, the chunk
calculator would work out that the chunk size needed to be bigger than
the default 100 MiB to fit within the 10,000 parts limit.
However the uploader was still using the memory pool for the old chunk
size and this caused errors like
panic: runtime error: slice bounds out of range [:122683392] with capacity 100663296
The fix for this is to make a temporary pool with the larger chunk
size and use it during the upload of the large file.
See: https://forum.rclone.org/t/rclone-cannot-complete-upload-to-b2-restarts-upload-frequently/35617/
Before this change we were sending webdav requests to the go http
FileServer. In go1.20 these (rightly) started returning errors which
caused the tests to fail.
The test has been changed to properly mock up an About query and
response so an end to end test of adding headers is possible.
Passwords for encrypted libraries are kept in memory in the server
and flushed after an hour.
This fixes an issue where the library password expires after 1 hour.
rclone sync erroneously deleted folders renamed to a different case on
crypts where directory name encryption was disabled and the underlying
remote was case insensitive.
Example: Renaming the folder Test to tEST before a sync to a crypt having
remote=OneDrive:crypt and directory_name_encryption=false could result in
the folder and all its content being deleted. The following sync would
correctly create the tEST folder and upload all of the content.
Additional tests have revealed other potential issues when using
filename_encryption=off or directory_name_encryption=false on case
insensitive remotes. The documentation has been updated to warn about
potential problems when using these combinations.
This error was caused by rclone supplying an empty
`x-ms-blob-public-access:` header when creating a container for
private access, rather than omitting it completely.
This is a valid way of specifying containers should be private, but if
the storage account has the flag "Blob public access" unset then it
gives "409 Public access is not permitted on this storage account".
This patch fixes the problem by only supplying the header if the
access is set.
Fixes #6645
Storj switched to a single global s3 endpoint backed by BGP routing.
We want to stop advertising the former regional endpoints and have the
global one as the only option.
Before this change, when a new object was created s3 returned its
versionID (on a versioned bucket) and rclone recorded it in the
object.
This means that when rclone came to delete the object it would delete
it with the versionID.
However it is common to forbid actions with versionIDs on buckets so
as to preserve the historical record and these operations would fail
whereas they succeeded in pre-v1.60.0 versions.
This patch fixes the problem by not recording versions of objects
supplied by the S3 API on upload unless `--s3-versions` or
`--s3-version-at` is used. This makes rclone behave as it did before
v1.60.0 when version support was introduced.
See: https://forum.rclone.org/t/s3-and-intermittent-403-errors-with-file-renames-and-drag-and-drop-operations-in-windows-explorer/34773
This patch implements --use-server-modtime for the Azureblob backend.
It does this by not reading the time from the metadata if the global
flag is set.
When the SDK was upgraded it started delivering metadata where the
keys were not in lower case as per the old SDK.
Rclone normalises the case of the keys for storage in the Object, but
the directory marker check was being done with the unnormalised keys
as it needs to be done before the Object is created.
This fixes the directory marker check to do a case insensitive compare
of the metadata keys.
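The check is now case insensitive, along these lines (illustrative
helper name):

    package example

    import "strings"

    // hasMetadataKey reports whether the metadata contains key, comparing
    // case insensitively since the new SDK no longer lower-cases keys for us.
    func hasMetadataKey(metadata map[string]*string, key string) bool {
        for k := range metadata {
            if strings.EqualFold(k, key) {
                return true
            }
        }
        return false
    }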
Before this change, we were taking the version ID straight from the
XML blob returned by the SDK and thus pinning the XML into memory
which bulked up the average memory per object from about 400 bytes to
4k.
Copying the string fixes the excess memory usage.
This reverts commit 4f386a1ccd.
It turns out that Alibaba OSS does support list v2 and the detection
code was wrong.
This means that users of the gov version of Alibaba will have to add
`list_version 1` to their config files.
See #6600
In this commit
ab849b3613 s3: fix listing loop when using v2 listing on v1 server
The ContinuationToken was tested for existence, but it is the
NextContinuationToken that we are interested in.
See: #6600
Previously it was limited to plain ASCII (0-9, A-Z, a-z).
Implemented by adding \p{L}\p{N} alongside the \w in the regex,
even though these overlap it means we can be sure it is 100%
backwards compatible.
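A generic illustration of the change (not the exact rclone pattern):

    package main

    import (
        "fmt"
        "regexp"
    )

    // before: ASCII word characters only
    var asciiOnly = regexp.MustCompile(`^[\w. -]+$`)

    // after: additionally allow any Unicode letter or digit
    var unicodeOK = regexp.MustCompile(`^[\w\p{L}\p{N}. -]+$`)

    func main() {
        fmt.Println(asciiOnly.MatchString("résumé")) // false
        fmt.Println(unicodeOK.MatchString("résumé")) // true
    }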
Fixes #6618
This was caused by
a9bd0c8de6 s3: reduce memory consumption for s3 objects
which assumed that the StorageClass would always be set, but it isn't
set for Versions.
This updates the authentication to include
- Auth from the environment
1. Environment Variables
2. Managed Service Identity Credentials
3. Azure CLI credentials (as used by the az tool)
- Account and Shared Key
- SAS URL
- Service principal with client secret
- Service principal with certificate
- User with username and password
- Managed Service Identity Credentials
And rationalises the auth order.
Normally rclone will check the container exists before uploading if it
hasn't listed the container yet.
Often rclone will be running with a limited set of permissions which
means rclone can't create the container anyway, so this stops the
check.
This will save a transaction.
This commit switches from using the old Azure go modules
github.com/Azure/azure-pipeline-go/pipeline
github.com/Azure/azure-storage-blob-go/azblob
github.com/Azure/go-autorest/autorest/adal
To the new SDK
github.com/Azure/azure-sdk-for-go/
This stops rclone using deprecated code and enables the full range of
authentication with Azure.
See #6132 and #5284
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.
This change detects the problem and gives the user a helpful message.
Fixes #6600
Copying the storageClass string instead of using a pointer to the original string.
This prevents the Go garbage collector from keeping large amounts of
XMLNode structs and references in memory, created by xmlutil.XMLToStruct()
from the aws-sdk-go.
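The fix amounts to storing the dereferenced value rather than the
SDK's pointer, e.g. (illustrative struct):

    package example

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    type object struct {
        storageClass string // a value, not a *string into the SDK response
    }

    // Storing the value instead of the *string means the decoded listing
    // response (and the XMLNode tree behind it) can be garbage collected.
    func setStorageClass(o *object, src *s3.Object) {
        o.storageClass = aws.StringValue(src.StorageClass)
    }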
This commit uses the MLST command (where available) to get the status
for single files rather than listing the parent directory and looking
for the file. This makes actions such as using `--files-from` much quicker.
* use getEntry to lookup remote files when supported
* findItem now expects the full path directly
It makes the expected argument similar to the getInfo method, the
difference now is that one is returning a FileInfo whereas
the other is returning an ftp Entry.
Fixes #6225
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Before this fix a chain compress -> crypt -> s3 was giving errors
BadDigest: The Content-MD5 you specified did not match what we received.
This was because the crypt backend was encrypting the underlying local
object to calculate the hash rather than the contents of the metadata
stream.
It did this because the crypt backend incorrectly identified the
object as a local object.
This fixes the problem by making sure the crypt backend does not
unwrap anything but fs.OverrideRemote objects.
See: https://forum.rclone.org/t/not-encrypting-or-compressing-before-upload/32261/10
Before this change we were putting connections into the connection
pool which had a local context in them.
This meant that when the operation had finished the context was
cancelled and the connection became unusable.
See: https://forum.rclone.org/t/failed-to-sync-context-canceled/34017/
Before this change, when using --server-side-across-configs rclone
would direct Move/Copy/DirMove to the destination server.
However this should be directed to the source server. This is a little
unclear in the RFC, but the name of the parameter "Destination:" seems
clear and this is how dCache and Rucio have implemented it.
See: https://forum.rclone.org/t/webdav-copy-request-implemented-incorrectly/34072/
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We introduced the concept of local backend filters.
Unfortunately the filters were being applied before we had resolved
the symlink to point to a directory. This meant that symlinks pointing
to directories were filtered out when they shouldn't have been.
This was fixed by moving the filter check until after the symlink had
been resolved.
See: https://forum.rclone.org/t/copy-links-not-following-symlinks-on-1-60-0/34073/7
Fish is different from POSIX-based Unix shells such as bash,
and bracketed variable references like the one we use for the
auto-detection echo command are not supported. The command
will return with zero exit code but produce no output on
stdout. There is a message on stderr, but we don't log it
due to the zero exit code:
fish: Variables cannot be bracketed. In fish, please use {$ShellId}.
Fixes #6552
Before this change if we copied files of unknown size, then they lost
their metadata.
This was particularly noticeable using --s3-decompress.
This change adds metadata to Rcat and RcatSized and changes Copy to
pass the metadata in when it calls Rcat for an unknown sized input.
Fixes #6546
Before this change, some files were giving this error when downloaded
from Cloudflare and other providers.
ERROR corrupted on transfer: sizes differ NNN vs MMM
This is because these providers automatically gzip the object when
rclone isn't expecting it. (AWS does not gzip objects unless they were
uploaded gzipped).
This patch adds a quirk to fix the problem and a flag to control
it. The quirk `might_gzip` is set to `true` for all providers except
AWS.
See: https://forum.rclone.org/t/s3-error-corrupted-on-transfer-sizes-differ-nnn-vs-mmm/33694/
Fixes #6533
Before this fix it was impossible to stop rclone generating an
X-Amz-Acl: header which is incompatible with GCS with uniform access
control and is generally deprecated at AWS.
The API endpoint GetBucketLocation requires
top level permission.
If we do an authenticated head request to a bucket, the bucket location will be returned in the HTTP headers.
Fixes #5066
Before this change if --use-server-modtime was in use the ModTime
could change for an object as we receive it accurate to the nearest ms
in listings, but only accurate to the nearest second in HEAD and GET
requests.
Normally AWS returns the milliseconds as .000 in listings, but if
versions are in use it may not. Storj S3 also seems to return
milliseconds.
This patch tries to keep the maximum precision in the last modified
time, so it doesn't update a last modified time with a truncated
version if the times were the same to the nearest second.
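The comparison keeps the more precise time when both agree to the
second, e.g. (illustrative):

    package example

    import "time"

    // bestModTime keeps the already-stored time if the fresh one only differs
    // by sub-second truncation, so a second-precision HEAD result does not
    // clobber a millisecond-precision time from a listing.
    func bestModTime(stored, fresh time.Time) time.Time {
        if fresh.Truncate(time.Second).Equal(stored.Truncate(time.Second)) {
            return stored
        }
        return fresh
    }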
See: https://forum.rclone.org/t/cache-fingerprint-miss-behavior-leading-to-false-positive-stalen-cache/33404/
Before this change rclone used statx() to read the metadata for files
from the local filesystem when `-M` was in use.
Unfortunately statx() was only introduced in kernel 4.11 which was
released in April 2017 so there are current systems (eg Centos 7)
still on kernel versions which don't support statx().
This patch checks to see if statx() is available and if it isn't, it
falls back to using fstatat() which was introduced in Linux 2.6.16
which is guaranteed for all Go versions.
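The shape of the fallback, using golang.org/x/sys/unix (Linux only,
illustrative):

    //go:build linux

    package example

    import (
        "errors"

        "golang.org/x/sys/unix"
    )

    // statTimes reads the file's metadata with statx(), falling back to
    // fstatat() on kernels older than 4.11 which return ENOSYS.
    func statTimes(path string) error {
        var stx unix.Statx_t
        err := unix.Statx(unix.AT_FDCWD, path, 0, unix.STATX_ALL, &stx)
        if errors.Is(err, unix.ENOSYS) {
            var st unix.Stat_t
            err = unix.Fstatat(unix.AT_FDCWD, path, &st, 0)
        }
        return err
    }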
See: https://forum.rclone.org/t/metadata-from-linux-local-s3-failed-to-copy-failed-to-read-metadata-from-source-object-function-not-implemented/33233/
In https://github.com/jlaffaye/ftp/commit/212daf295f the upstream FTP
library changed the way adding your own dialer works which meant that
connections when using explicit FTP were failing.
This patch reworks our connection code to bring it into the
expectations of the library.
Before this fix, if an error occurred reading the metadata, it could be
set as nil and then used, causing a crash.
This fix changes the readMetadata function so it returns an error, and
the error is always set if the metadata returned is nil.
Before this change, if mkdir failed it would have returned an
error.
After this change, if the error indicated that the directory
already exists then the error is not returned to the user.
This fixes a race condition when two rclone threads are trying to
create the same directory.
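The pattern is the usual tolerant mkdir (illustrative, shown with
os.Mkdir):

    package example

    import (
        "errors"
        "os"
    )

    // mkdirIgnoreExists creates dir but treats "already exists" as success,
    // so two threads creating the same directory no longer race into an error.
    func mkdirIgnoreExists(dir string) error {
        err := os.Mkdir(dir, 0777)
        if errors.Is(err, os.ErrExist) {
            return nil
        }
        return err
    }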
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We started using filters in the local backend so the user could short
circuit troublesome files/directories at a low level.
However this caused a number of integration tests to fail. This turned
out to be in backends wrapping the local backend. For example the
combine backend test failed because it changes the paths passed to the
local backend so they no longer match the paths in the current filter.
To fix this, a new feature flag `FilterAware` was added and the
UseFilter context flag is only passed to backends which support it. As
the wrapping backends don't support the flag, this fixes the problems
in the integration tests.
In future the wrapping backends could modify the active filters to
match the path modifications and then they could set the FilterAware
flag.
See #6376
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size of the object to calculate the chunk
sizes.
This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.
This fix changes the calculator to take the size explicitly.
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.
This patch makes rclone detect the not supported errors and disable
xattrs from then on. It prints one ERROR level message about this.
See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.
If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --s3-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.
See #2658
Before this fix, the dropbox backend wasn't decoding the file names
received in changenotify events into rclone standard format.
This meant that changenotify events for filenames which had encoded
characters were failing to be decrypted properly if wrapped in crypt.
See: https://forum.rclone.org/t/rclone-vfs-cache-says-file-name-too-long/31535
Before this patch backends could be shutdown when they fell out of the
cache when they were in use with combine. This was particularly
noticeable with the dropbox backend which gave this error when
uploading files after the backend was Shutdown.
Failed to copy: upload failed: batcher is shutting down
This patch gets the combine remote to pin them until it is finished.
See: https://forum.rclone.org/t/rclone-combine-upload-failed-batcher-is-shutting-down/32168