This was caused by
a9bd0c8de6 s3: reduce memory consumption for s3 objects
which assumed that the StorageClass would always be set, but it isn't
set for Versions.
The BSD-style license that Go uses requires the license to be included
with the source distribution, so add it as LICENSE.wasmexec (to avoid
confusion with the other licenses in rclone) and note the location of
the license in wasm_exec.js itself.
This updates the authentication to include
- Auth from the environment
  1. Environment Variables
  2. Managed Service Identity Credentials
  3. Azure CLI credentials (as used by the az tool)
- Account and Shared Key
- SAS URL
- Service principal with client secret
- Service principal with certificate
- User with username and password
- Managed Service Identity Credentials
And rationalises the auth order.
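For illustration, the environment-auth order above maps directly onto
the new SDK's azidentity credential types. A minimal sketch (this is
not rclone's actual wiring):

    package azureauth

    import (
        "github.com/Azure/azure-sdk-for-go/sdk/azcore"
        "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    )

    // newEnvCredential tries, in order: environment variables, managed
    // service identity, then the Azure CLI - the same order as above.
    func newEnvCredential() (azcore.TokenCredential, error) {
        env, err := azidentity.NewEnvironmentCredential(nil)
        if err != nil {
            return nil, err
        }
        msi, err := azidentity.NewManagedIdentityCredential(nil)
        if err != nil {
            return nil, err
        }
        cli, err := azidentity.NewAzureCLICredential(nil)
        if err != nil {
            return nil, err
        }
        // Each credential is tried until one succeeds in returning a token.
        return azidentity.NewChainedTokenCredential(
            []azcore.TokenCredential{env, msi, cli}, nil)
    }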
Normally rclone will check that the container exists before uploading
if it hasn't listed the container yet.
Often rclone will be running with a limited set of permissions under
which it can't create the container anyway, so this change skips the
check.
This will save a transaction.
This commit switches from using the old Azure Go modules
github.com/Azure/azure-pipeline-go/pipeline
github.com/Azure/azure-storage-blob-go/azblob
github.com/Azure/go-autorest/autorest/adal
to the new SDK
github.com/Azure/azure-sdk-for-go/
This stops rclone using deprecated code and enables the full range of
authentication with Azure.
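For illustration only, creating and using a client with the new SDK
looks roughly like this (account URL and container name are
placeholders, not rclone code):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
        "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
    )

    func main() {
        // DefaultAzureCredential chains environment variables, managed
        // identity and the Azure CLI, much like the order rclone uses.
        cred, err := azidentity.NewDefaultAzureCredential(nil)
        if err != nil {
            log.Fatal(err)
        }
        client, err := azblob.NewClient("https://myaccount.blob.core.windows.net/", cred, nil)
        if err != nil {
            log.Fatal(err)
        }
        // List a container's blobs to exercise the credential.
        pager := client.NewListBlobsFlatPager("mycontainer", nil)
        for pager.More() {
            page, err := pager.NextPage(context.Background())
            if err != nil {
                log.Fatal(err)
            }
            for _, blob := range page.Segment.BlobItems {
                fmt.Println(*blob.Name)
            }
        }
    }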
See #6132 and #5284
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.
This change detects the problem and gives the user a helpful message.
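The check can be as simple as spotting an impossible v2 response. A
sketch in terms of aws-sdk-go types (the real check and error text may
differ):

    package s3check

    import (
        "errors"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    // checkListV2 detects a v1-only server: such a server ignores the
    // continuation-token parameter, so a truncated v2 response comes
    // back with no NextContinuationToken and re-issuing the request
    // would return the same 1000 items forever.
    func checkListV2(resp *s3.ListObjectsV2Output) error {
        if aws.BoolValue(resp.IsTruncated) && aws.StringValue(resp.NextContinuationToken) == "" {
            return errors.New("s3 protocol error: truncated v2 listing with no continuation token - try `--s3-list-version 1`")
        }
        return nil
    }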
Fixes #6600
* fs: add TerminalColorMode type
* fs: add new config flag for TerminalColorMode
* lib/terminal: use TerminalColorMode to determine how to handle colors
* Add documentation for '--terminal-color-mode'
* tree: remove obsolete --color flag, replaced by the global --color
This changes the default behaviour of tree. It now displays colors by
default instead of only displaying them when the flag -C/--color was
active. Old behaviour (no color) can be achieved by setting --color to
'never'.
Fixes: #6604
Copy the storageClass string instead of keeping a pointer to the
original string. This prevents the Go garbage collector from keeping
large numbers of XMLNode structs and references in memory, created by
xmlutil.XMLToStruct() in the aws-sdk-go.
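The fix is roughly this shape (helper name assumed; strings.Clone
needs Go 1.18+):

    package s3meta

    import "strings"

    // cloneStringPointer copies the pointed-to string into fresh memory.
    // Strings handed back by the XML decoder can share backing buffers
    // with the rest of the decoded response, so storing the SDK's
    // *string directly keeps the whole XMLNode tree reachable.
    func cloneStringPointer(sp *string) *string {
        if sp == nil {
            return nil
        }
        s := strings.Clone(*sp)
        return &s
    }

So instead of storing the SDK's StorageClass pointer directly, the
backend stores a cloned copy of it.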
This commit uses the MLST command (where available) to get the status
for single files rather than listing the parent directory and looking
for the file. This makes actions such as using `--files-from` much quicker.
* use getEntry to look up remote files when supported
* findItem now expects the full path directly
This makes findItem's expected argument similar to the getInfo
method's; the difference is that one returns a FileInfo whereas the
other returns an ftp Entry.
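With github.com/jlaffaye/ftp the fast path looks roughly like this (a
sketch; findInParent stands in for the old list-the-parent fallback):

    package ftpstat

    import (
        "errors"

        "github.com/jlaffaye/ftp"
    )

    // getEntry stats a single file with MLST where the server supports
    // it, avoiding a listing of the whole parent directory.
    func getEntry(c *ftp.ServerConn, path string) (*ftp.Entry, error) {
        entry, err := c.GetEntry(path) // issues MLST <path>
        if err == nil {
            return entry, nil
        }
        // Fall back to the old behaviour: list the parent directory
        // and search for the file there.
        return findInParent(c, path)
    }

    // findInParent is the old slow path, stubbed out in this sketch.
    func findInParent(c *ftp.ServerConn, path string) (*ftp.Entry, error) {
        return nil, errors.New("sketch: list the parent with c.List and match the name")
    }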
Fixes #6225
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The previous version used values after the maximum Unicode code-point
to encode a key. This could lead to an overflow since a key is an
int16, a rune is an int32, and the maximum Unicode code-point is
larger than an int16 can hold.
A better solution is to simply use negative runes for keys.
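A sketch of the scheme (the exact encoding is assumed; key codes are
taken to be non-negative int16 values):

    package keyenc

    // keyToRune encodes a key as a rune that can never collide with a
    // real character: Unicode code points are >= 0, so negative runes
    // are free. Non-negative int16 keys map into [-32768, -1], well
    // within an int32 rune, so no overflow is possible.
    func keyToRune(key int16) rune {
        return -rune(key) - 1
    }

    // runeToKey reverses the encoding; ok is false for ordinary
    // characters.
    func runeToKey(r rune) (key int16, ok bool) {
        if r >= 0 {
            return 0, false
        }
        return int16(-(r + 1)), true
    }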
Before this fix, opening a file with `O_CREATE|O_RDONLY` caused an IO error to
be returned when using `--vfs-cache-mode off` or `--vfs-cache-mode writes`.
This was because the file was opened with read intent, but the `O_CREATE`
implies write intent to create the file even though the file is opened
`O_RDONLY`.
This fix sets write intent for the file if `O_CREATE` is passed in which fixes
the problem for all the VFS cache modes.
It also extends the exhaustive open flags testing to `--vfs-cache-mode writes`
as well as `--vfs-cache-mode full` which would have caught this problem.
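The heart of the fix is deriving write intent from the open flags;
simplified (names assumed):

    package vfssketch

    import "os"

    // hasWriteIntent reports whether an open with the given flags may
    // write. O_CREATE implies write intent even when the access mode is
    // O_RDONLY, because creating the file modifies the backing store.
    func hasWriteIntent(flags int) bool {
        switch flags & (os.O_RDONLY | os.O_WRONLY | os.O_RDWR) {
        case os.O_WRONLY, os.O_RDWR:
            return true
        }
        return flags&os.O_CREATE != 0
    }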
See: https://forum.rclone.org/t/i-o-error-trashing-file-on-sftp-mount/34317/
Before this fix, a chain compress -> crypt -> s3 was giving errors
BadDigest: The Content-MD5 you specified did not match what we received.
This was because the crypt backend was encrypting the underlying local
object to calculate the hash rather than the contents of the metadata
stream.
It did this because the crypt backend incorrectly identified the
object as a local object.
This fixes the problem by making sure the crypt backend does not
unwrap anything but fs.OverrideRemote objects.
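The idea, sketched (the UnWrap method name is an assumption;
fs.OverrideRemote is the wrapper named above):

    // Only unwrap the wrapper the crypt backend knows was added to
    // override the remote name. Unwrapping unconditionally could reach
    // the underlying local object and hash its raw contents instead of
    // the metadata stream that a wrapping backend such as compress
    // actually uploads.
    if override, ok := src.(*fs.OverrideRemote); ok {
        src = override.UnWrap()
    }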
See: https://forum.rclone.org/t/not-encrypting-or-compressing-before-upload/32261/10
Before this change we were putting connections which had a local
context in them into the connection pool.
This meant that when the operation had finished the context was
cancelled and the connection became unusable.
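In outline the bug looked like this (all names hypothetical):

    package connpool

    import (
        "context"
        "time"
    )

    type conn struct{ ctx context.Context } // connection keeps its context

    type pool struct{ conns []*conn }

    func (p *pool) put(c *conn) { p.conns = append(p.conns, c) }

    // doOperation shows the bug: the connection captures the
    // operation's short-lived context, so once cancel() runs on return,
    // every later use of the pooled connection fails with "context
    // canceled". The fix is to build pooled connections on a long-lived
    // context rather than the per-operation one.
    func doOperation(ctx context.Context, p *pool) error {
        ctx, cancel := context.WithTimeout(ctx, time.Minute)
        defer cancel()

        c := &conn{ctx: ctx} // BUG: should not keep the per-operation ctx
        defer p.put(c)
        // ... use c for this operation ...
        return nil
    }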
See: https://forum.rclone.org/t/failed-to-sync-context-canceled/34017/
Before this patch a deadlock could occur if the cache cleaner was
running when an object upload finished.
This fixes the problem by delaying marking the object as clean until
we have notified the VFS layer. This means that the cache cleaner
won't consider the object until **after** the VFS layer has been
notified, thus avoiding the deadlock.
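The change is essentially a reordering (names hypothetical):

    // Before: marking clean first let the cache cleaner start on the
    // item while the VFS notification was still in progress, taking
    // locks in the opposite order and deadlocking.
    //
    //     item.markClean()
    //     vfs.itemUploaded(item)
    //
    // After: notify first, mark clean second, so the cleaner leaves
    // the item alone until the notification has finished.
    vfs.itemUploaded(item)
    item.markClean()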
See: https://forum.rclone.org/t/rclone-mount-deadlock-when-dir-cache-time-strikes/33486/
Before this change, when using --server-side-across-configs rclone
would direct Move/Copy/DirMove to the destination server.
However this should be directed to the source server. This is a little
unclear in the RFC, but the name of the "Destination:" header seems
clear, and this is how dCache and Rucio have implemented it.
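Concretely, a server-side copy is a COPY request sent to the source
URL, with the target carried in the Destination header (RFC 4918
section 9.8). A sketch:

    package webdavcopy

    import (
        "context"
        "fmt"
        "net/http"
    )

    // serverSideCopy asks the *source* server to copy srcURL to dstURL.
    func serverSideCopy(ctx context.Context, client *http.Client, srcURL, dstURL string) error {
        req, err := http.NewRequestWithContext(ctx, "COPY", srcURL, nil)
        if err != nil {
            return err
        }
        req.Header.Set("Destination", dstURL)
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode > 299 {
            return fmt.Errorf("COPY failed: %s", resp.Status)
        }
        return nil
    }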
See: https://forum.rclone.org/t/webdav-copy-request-implemented-incorrectly/34072/
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
we introduced the concept of local backend filters.
Unfortunately the filters were being applied before we had resolved
the symlink to point to a directory. This meant that symlinks pointing
to directories were filtered out when they shouldn't have been.
This was fixed by delaying the filter check until after the symlink
had been resolved.
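The corrected order, sketched (the filter callback is a simplified
stand-in for rclone's real filter API):

    package localsketch

    import "os"

    type filterFunc func(path string, isDir bool) bool

    // includeEntry resolves the symlink before consulting the filters,
    // so a link pointing at a directory is filtered as a directory
    // rather than as a file.
    func includeEntry(path string, include filterFunc) (bool, error) {
        fi, err := os.Stat(path) // follows symlinks; os.Lstat would not
        if err != nil {
            return false, err
        }
        return include(path, fi.IsDir()), nil
    }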
See: https://forum.rclone.org/t/copy-links-not-following-symlinks-on-1-60-0/34073/7
Before this change we truncated files in the backing store regardless
of whether we needed to or not.
After, we check to see if the file is the right size and don't
truncate if it is.
Apparently Windows Defender likes to check executables each time they
are modified, and truncating a file to its existing size is enough to
trigger the Windows Defender scan. This was causing a big slowdown for
operations which opened and closed the file a lot, such as looking at
properties on an executable.
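A sketch of the new check:

    package vfscache

    import "os"

    // truncateIfNeeded skips the truncate when the file already has the
    // right size. Even a size-preserving truncate counts as a
    // modification to Windows Defender, which then rescans the file.
    func truncateIfNeeded(f *os.File, size int64) error {
        fi, err := f.Stat()
        if err == nil && fi.Size() == size {
            return nil
        }
        return f.Truncate(size)
    }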
See: https://forum.rclone.org/t/for-mount-sftp-why-right-click-on-exe-file-is-so-slow-until-it-freezes/33830