As an extra security feature some FTP servers (e.g. FileZilla) require
that the data connection re-use the TLS session of the control
connection. This is a good thing for security.
The message "TLS session of data connection not resumed" means that it
was not done.
The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections, so the resumed TLS data
connection could have come from any of the control connections.
This patch makes each TLS connection have its own session cache which
should fix the problem.
This also reverts the ftp library to the upstream version which now
contains all of our patches.
Fixes #7234
storj.io/uplink v1.11.0 comes with improved logic for uploading large
files where file segments are uploaded concurrently instead of serially.
This makes it possible to fully utilize the network connection during
the entire upload process.
This change enables the new upload logic.
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.
Swift returns zero length for dynamic large object files when they're
included in a files lookup, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.
When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.
NB: this will cause rclone to incorrectly report zero length for any
dynamic large objects encountered with the --swift-no-large-objects
flag set.
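A minimal sketch (the names are hypothetical, not the backend's code) of
the rule now applied during list-type operations:

    // sizeNeedsHEAD reports whether a zero-length listing entry needs a
    // HEAD request to discover its real size. With --swift-no-large-objects
    // set the listed size is trusted, so dynamic large objects are reported
    // as zero length instead of being HEADed.
    func sizeNeedsHEAD(listedSize int64, noLargeObjects bool) bool {
        return listedSize == 0 && !noLargeObjects
    }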
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath, which effectively made it
unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path).
After this change, the old behavior remains the default, but dynamic
paths are now also supported if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).
In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.
See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025
This allows using an external ssh binary instead of the built-in ssh
library for making SFTP connections.
This makes another integration test target TestSFTPRcloneSSH:
Fixes #7012
Before this change we released the ssh connection back to the pool
before the upload was finished.
This meant that uploads were re-using the same ssh connection, which
reduced throughput.
This releases the ssh connection back to the pool only after the
upload has finished, or when an error occurs.
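A simplified sketch of the idea (the types and names are illustrative,
not the backend's exact code): the pool release is tied to the end of the
transfer rather than to the function that started it:

    import "io"

    // pooledFile wraps the remote file being written and a callback that
    // returns the ssh connection to the pool.
    type pooledFile struct {
        io.WriteCloser
        release func()
    }

    // Close finishes the upload first and only then hands the connection
    // back, so concurrent uploads get their own connections instead of
    // piling onto a single one.
    func (p *pooledFile) Close() error {
        err := p.WriteCloser.Close()
        p.release()
        return err
    }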
See: https://forum.rclone.org/t/sftp-backend-opens-less-connection-than-expected/40245
smb2.File implements the WriterAtCloser interface defined in
fs/types.go. Expose it via an OpenWriterAt method on
the Fs struct to support multi-threaded writes.
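A rough sketch of the new method (connection handling is elided and the
helper is hypothetical); fs.WriterAtCloser is just io.WriterAt plus
io.Closer, which *smb2.File already provides:

    func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
        share, leaf, err := f.shareAndLeaf(ctx, remote) // hypothetical helper
        if err != nil {
            return nil, err
        }
        file, err := share.Create(leaf)
        if err != nil {
            return nil, err
        }
        return file, nil // *smb2.File has WriteAt and Close
    }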
The error is:
Error: failed to configure token with jwt authentication: jwtutil: failed making auth request: 400 Bad Request
With the following additional debug information:
jwtutil: Response Body: {"error":"invalid_grant","error_description":"Please check the 'aud' claim. Should be a string"}
The problem is that in jwt-go the RegisteredClaims type has an Audience
field (the aud claim) that is a list, while Box apparently expects it to
be a single string. jwt-go v4, which we currently use, has an alternative
type StandardClaims which matches what Box wants.
Unfortunately StandardClaims is marked as deprecated and is removed in
the newer v5 version, so this is a short-term fix only.
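To illustrate the difference (the audience value is only an example):

    import "github.com/golang-jwt/jwt/v4"

    // RegisteredClaims.Audience is a jwt.ClaimStrings ([]string), so with
    // the default settings the "aud" claim is serialised as a JSON array,
    // which Box rejects with "Should be a string".
    var rejected = jwt.RegisteredClaims{
        Audience: jwt.ClaimStrings{"https://api.box.com/oauth2/token"},
    }

    // The deprecated StandardClaims.Audience is a plain string, so "aud"
    // is serialised as a single string, which Box accepts.
    var accepted = jwt.StandardClaims{
        Audience: "https://api.box.com/oauth2/token",
    }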
Fixes #7114
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.
This was because the partial file isn't named .rclonelink, so the local
backend saves it as a normal file, and renaming it to .rclonelink
doesn't cause it to become a symlink.
This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.
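An illustrative version of the check (the real one lives in the copy
machinery and the name here is hypothetical):

    import "strings"

    // usePartials reports whether a file may go through the .partial
    // mechanism. Symlink placeholders carry the .rclonelink suffix, so they
    // are written in place instead, as with --inplace.
    func usePartials(remote string, inplace bool) bool {
        if inplace || strings.HasSuffix(remote, ".rclonelink") {
            return false
        }
        return true
    }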
This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.
This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.
Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
Before this change if a directory entry could be listed but not
lstat-ed then rclone would give an error and abort the directory
listing with the error
failed to read directory entry: failed to read directory "XXX": lstat XXX
This change makes sure that the directory listing carries on even
after this kind of error.
The sync will still be reported as failed, but it will carry on.
This problem was caused by a programming error setting the err
variable in an outer scope when it should have been using a local err
variable.
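An illustrative example (not the actual rclone code) of this class of bug:

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func listDir(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, entry := range entries {
            // Buggy version: writing `_, err = os.Lstat(...)` assigns to the
            // outer err, so one unreadable entry makes the whole listing come
            // back as failed. A local variable lets the walk log the entry
            // and carry on.
            if _, lerr := os.Lstat(filepath.Join(dir, entry.Name())); lerr != nil {
                fmt.Fprintf(os.Stderr, "failed to read directory entry: %v\n", lerr)
                continue
            }
            fmt.Println(entry.Name())
        }
        return nil
    }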
See: https://forum.rclone.org/t/sync-aborts-if-even-one-single-unreadable-folder-is-encountered/39653
Before this change, if you mounted the root of the smb remote then
`rclone about` would give an error, and the same error would appear
periodically in the mount logs:
Statfs failed: bucket or container name is needed in remote
This fix makes the smb backend return empty usage in this case, which
stops the errors and shows the default 1P of free space.
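A simplified sketch of the fix (the helper name is hypothetical):

    func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
        if f.root == "" {
            // Mounted at the root: there is no single share to ask, so
            // return empty usage and let the mount show its default size.
            return &fs.Usage{}, nil
        }
        return f.shareUsage(ctx) // hypothetical: query the named share as before
    }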
See: https://forum.rclone.org/t/error-statfs-failed-bucket-or-container-name-is-needed-in-remote/39631
This introduces a new fs.Option flag, Sensitive, and uses this along
with IsPassword to redact the info in the config file for support
purposes.
It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.
Fixes #5209
Fix https://github.com/rclone/rclone/issues/7103
Before this change the RegExp validating the endpoint URL was a bit
too strict allowing only /dav/files/USER due to chunking limitations.
This patch adds back support for /dav/files/USER/dir/subdir etc.
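For illustration, a pattern of the required shape (not necessarily the
exact expression used in the backend):

    import "regexp"

    // Accepts .../dav/files/USER as well as .../dav/files/USER/dir/subdir.
    var nextcloudFilesPath = regexp.MustCompile(`(?i)/dav/files/[^/]+(/.*)?$`)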
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This reverts commit 9065e921c1.
It turns out the problem for the failing fs/sync tests was the
policies being different for search and create, which meant that the
file was being created in one union branch but a different one was
found in another branch.
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory, so it is useless for
seeing if the directory is empty.
This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.
This problem was detected by the integration tests.
For some unknown reason the API sometimes returns a `Name already
exist` error on a server side copy.
    {
        "error_id": null,
        "error_message": "Name already exist",
        "error_type": "NAME_ALREADY_EXIST",
        "error_uri": "http://api.put.io/v2/docs",
        "extra": {},
        "status": "ERROR",
        "status_code": 400
    }
This patch uploads to a temporary name then renames it which works
around the problem.
This was spotted by the integration tests.
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.
This patch explicitly sets the modtime after the server side move or
copy.
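A minimal sketch of the pattern, written against the public fs interfaces
rather than the backend's internals (error handling trimmed):

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    // copyWithModTime performs a server side copy and then explicitly sets
    // the modification time on the new object, since the API no longer
    // preserves it. The same applies after a server side move.
    func copyWithModTime(ctx context.Context, f fs.Fs, src fs.Object, remote string) (fs.Object, error) {
        doCopy := f.Features().Copy // assumes the backend advertises server side copy
        dst, err := doCopy(ctx, src, remote)
        if err != nil {
            return nil, err
        }
        return dst, dst.SetModTime(ctx, src.ModTime(ctx))
    }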
In this commit we enabled PartialUploads for the union backend.
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:
upload failed: file size does not match (-2)
This was discovered by the integration tests.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the putio backend and this patch fixes the
problem.
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.
This was discovered in the integration tests.
This patch fixes the problem by deleting the duplicate object before
we look for the new object.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the Storj backend and this patch fixes the
problem.
Storj has a rate limit of 1 per second when uploading to the same
file.
This was being tripped by the integration tests.
This patch fixes it by detecting the error and sleeping for 1 second
before retrying.
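A simplified sketch of the retry (the error match below is a stand-in,
not the exact condition used):

    import (
        "strings"
        "time"
    )

    // shouldRetryRateLimit waits out the 1 request/second window and asks
    // for a retry when the uplink reports the per-object rate limit.
    func shouldRetryRateLimit(err error) bool {
        if err != nil && strings.Contains(err.Error(), "rate") { // stand-in match
            time.Sleep(time.Second)
            return true
        }
        return false
    }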
See: https://github.com/storj/uplink/issues/149
Before this change a server side copy did not preserve the modtime.
This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.
This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` header wasn't accepted.
This problem was discovered in the integration tests.
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038