This uses the refactored goftp library which doesn't include the minio
driver. This reduces the binary size by 1.5MB
See: https://gitea.com/goftp/server/pulls/120
Before this fix we took the directory lock to read the ModTime of the
directory. This caused lock contention on directories which were being
re-read from the backend.
This commit gives the modtime its own lock so it can be read even when
the directory is being updated.
See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
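As a rough illustration of the idea (type and field names here are made up, not rclone's actual vfs.Dir), giving the modtime its own mutex lets it be read while the main directory lock is held for a slow re-read:

```go
package vfssketch

import (
	"sync"
	"time"
)

// dirSketch stands in for a VFS directory node.
type dirSketch struct {
	mu        sync.Mutex // guards the entries; held while re-reading from the backend
	modTimeMu sync.Mutex // guards modTime only
	modTime   time.Time
}

// ModTime no longer needs mu, so it doesn't block behind a directory re-read.
func (d *dirSketch) ModTime() time.Time {
	d.modTimeMu.Lock()
	defer d.modTimeMu.Unlock()
	return d.modTime
}

func (d *dirSketch) setModTime(t time.Time) {
	d.modTimeMu.Lock()
	d.modTime = t
	d.modTimeMu.Unlock()
}
```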
Before this change, files that were in the cache and renamed with
--vfs-cache-mode minimal weren't renamed at all.
This fixes the problem and adds tests for all the different
combinations of cache modes, with files both in and out of the cache.
In this commit (released in v1.52.0)
6ca7198f mount: fix disappearing cwd problem
SetSys was introduced to cache node lookups.
Unfortunately taking the vfs.(*Dir) lock in SetSys causes any FUSE
operations on a directory to pile up behind slow directory listings.
In some situations this leads to very high load.
This commit fixes it by using atomic operations to read and write the
Sys value, making it independent of the lock.
See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
See: #4104
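A minimal sketch of the approach, assuming sync/atomic's Value type (the names below are illustrative rather than rclone's exact code):

```go
package vfssketch

import "sync/atomic"

// node stands in for a VFS directory or file node.
type node struct {
	sys atomic.Value // cached lookup data; always store the same concrete type
}

// Sys reads the cached value without taking the directory lock, so FUSE
// operations don't queue up behind a slow listing.
func (n *node) Sys() interface{} {
	return n.sys.Load()
}

// SetSys writes the cached value atomically.
func (n *node) SetSys(v interface{}) {
	n.sys.Store(v)
}
```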
The documentation contains an embedded datetime which does not read
SOURCE_DATE_EPOCH. This makes the documentation unreproducible when a
distribution tries to recreate a previous build of the package.
This patch ensures we attempt to read the environment variable before
defaulting to the current build time.
Signed-off-by: Morten Linderud <morten@linderud.pw>
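The reproducible-builds convention the patch follows looks roughly like this (a sketch, not the exact implementation):

```go
package docs

import (
	"os"
	"strconv"
	"time"
)

// buildDate prefers SOURCE_DATE_EPOCH (seconds since the Unix epoch, as
// used by reproducible-builds tooling) and falls back to the wall clock.
func buildDate() time.Time {
	if s := os.Getenv("SOURCE_DATE_EPOCH"); s != "" {
		if secs, err := strconv.ParseInt(s, 10, 64); err == nil {
			return time.Unix(secs, 0).UTC()
		}
	}
	return time.Now()
}
```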
Removed comma from the end of the Azure AD Applications List Blade URL since it was not resolving and customers were opening up support tickets with the Microsoft Azure AD team.
Previous to this fix if Region was not set and Endpoint was not set
then we set the endpoint to "https://s3.amazonaws.com/".
This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.
Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".
This fix removes that check.
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.
This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.
This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
It is a source of confusion for users that `rclone mkdir` on a remote
which can't have empty directories, such as s3/b2, does nothing.
This emits a warning at NOTICE level if the user tries to mkdir a
directory not at the root for a remote which can't have empty
directories.
See: https://forum.rclone.org/t/mkdir-on-b2-consider-adding-some-output/17689
Previous to the fix, if an item was being uploaded and it was renamed,
the upload would fail with missing checksum errors.
This change cancels any uploads in progress if the file is renamed.
This is a small patch to remove a defer statement found in a for loop.
It instead closes the file after it is done copying the bytes from the
tar file reader.
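In outline the fixed pattern looks like this (the helper name is illustrative): instead of deferring the Close inside the loop, which keeps every file open until the whole function returns, the file is closed as soon as the copy finishes.

```go
package untar

import (
	"archive/tar"
	"io"
	"os"
)

// writeEntry copies one tar entry to disk and closes the file straight
// away, rather than leaving a defer to pile up in the caller's loop.
func writeEntry(tr *tar.Reader, path string) error {
	out, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, tr); err != nil {
		out.Close()
		return err
	}
	return out.Close()
}
```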
Pcloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
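Conceptually the callback handling amounts to something like this sketch (the parameter handling is an assumption based on the URLs above, not documented pCloud behaviour):

```go
package oauthutil

import "net/url"

// apiHostname pulls the "hostname" parameter out of the oauth callback
// query string, e.g. "api.pcloud.com" or "eapi.pcloud.com"; it returns
// "" if the parameter is absent.
func apiHostname(rawQuery string) string {
	values, err := url.ParseQuery(rawQuery)
	if err != nil {
		return ""
	}
	return values.Get("hostname")
}

// apiHostname("code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX")
// would return "eapi.pcloud.com".
```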
Previous to this a dangling shortcut would error the directory
listing.
This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
Before this change, if the cache was given a source `remote:file` it
stored `remote:` with the error `fs.ErrorIsFile` attached. This meant
that if `remote:` was subsequently looked up it would return the
`fs.ErrorIsFile` error.
This broke `moveto remote:file remote:file2` as moveto would lookup
`remote:` from the second argument and erroneously get the
`fs.ErrorIsFile` error.
This likely broke other commands too.
This was broken in
4c9836035 fs/cache: Add Pin and Unpin and canonicalised lookup
Which was released in v1.52.0
The fix is to make a new cache entry for `remote:` with no error
attached in the case that the original call returned `fs.ErrorIsFile`.
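The shape of the fix, in a simplified sketch (the map and function here only illustrate the idea; rclone's fs/cache has its own API):

```go
package cachesketch

import "errors"

// errorIsFile stands in for fs.ErrorIsFile.
var errorIsFile = errors.New("is a file not a directory")

type entry struct {
	f   interface{} // the Fs for the parent, e.g. for `remote:`
	err error
}

var cache = map[string]entry{}

// put records the result of creating an Fs. When `remote:file` resolves
// to the parent `remote:` with errorIsFile, a second entry is added for
// `remote:` with no error attached, so a later lookup of `remote:`
// (as moveto does for its destination) doesn't inherit the error.
func put(requested, parent string, f interface{}, err error) {
	cache[requested] = entry{f, err}
	if errors.Is(err, errorIsFile) && parent != requested {
		cache[parent] = entry{f, nil}
	}
}
```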
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing drive IDs before comparing them.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access, and trying to write will lead to errors like:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. The anonymous access option is disabled by default, so
the GCS Application Default Credentials are still used as before and
an error is given if they can't be found.
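For illustration, an unauthenticated client can be built along these lines with the cloud.google.com/go/storage package (rclone's backend wires the option up through its own config, so treat this as a sketch):

```go
package gcsanon

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

// newAnonymousClient returns a client with no credentials attached.
// Reading public objects works; writes fail with a 401 as quoted above.
func newAnonymousClient(ctx context.Context) (*storage.Client, error) {
	return storage.NewClient(ctx, option.WithoutAuthentication())
}
```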
Before this change rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this, and the openssh sftp tool
also uses absolute paths, which is a good reason to switch.
This change reads the current working directory at startup and
resolves all file requests relative to it.
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
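A sketch of the idea using the github.com/pkg/sftp client (the struct and helper are illustrative, not the backend's actual code):

```go
package sftpabs

import (
	"path"

	"github.com/pkg/sftp"
)

type conn struct {
	client *sftp.Client
	cwd    string // working directory read once at startup
}

func newConn(client *sftp.Client) (*conn, error) {
	cwd, err := client.Getwd()
	if err != nil {
		return nil, err
	}
	return &conn{client: client, cwd: cwd}, nil
}

// absPath resolves a request path against the startup working directory
// so every path sent to the server is absolute.
func (c *conn) absPath(p string) string {
	if path.IsAbs(p) {
		return p
	}
	return path.Join(c.cwd, p)
}
```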
This was caused by the signal to stop buffering being ignored when
there was no buffer!
This is fixed by explicitly checking for no buffering and stopping.
Before this change, if we restarted an upload after a restart then the
file would get uploaded but never added to the directory listings.
This change makes sure we add virtual items to the directory cache
when reloading the cache so that they show up properly.