Before this fix, setting an alias of `s3:bucket` and then using `alias:..`
would use the current working directory!
This fix corrects the path parsing. The same parsing is also used in
wrapping backends like crypt, chunker, union, etc.
It does not allow looking above the root of the alias, so `alias:..`
now lists `s3:bucket` as you might expect if you did `cd /` then
`ls ..`.
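The essential property is that `..` is resolved before the path is
joined to the alias root, so it can never climb above it. A minimal
sketch of that clamping in Go (illustrative only, not rclone's actual
parser):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// join resolves p relative to root without letting it climb above root.
// Anchoring p at "/" first means any leading ".." is absorbed by Clean.
func join(root, p string) string {
	cleaned := path.Join("/", p)
	return path.Join(root, strings.TrimPrefix(cleaned, "/"))
}

func main() {
	fmt.Println(join("s3:bucket", ".."))      // s3:bucket - can't go above root
	fmt.Println(join("s3:bucket", "dir/.."))  // s3:bucket
	fmt.Println(join("s3:bucket", "dir/sub")) // s3:bucket/dir/sub
}
```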
If --cache-dir is passed in as a relative path, then rclone will not
be able to turn it into a UNC path under Windows, which means that
file names longer than 260 chars will fail when stored in the cache.
This patch makes the --cache-dir path absolute before using it.
See: https://forum.rclone.org/t/handling-of-long-paths-on-windows-260-characters/20913
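The fix amounts to resolving the value of --cache-dir with filepath.Abs
before it is used, so that the \\?\ long-path prefix can later be
applied on Windows. A sketch (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"log"
	"path/filepath"
)

// makeCacheDirAbsolute turns a possibly-relative cache dir into an
// absolute path, which is a prerequisite for UNC conversion on Windows.
func makeCacheDirAbsolute(cacheDir string) string {
	abs, err := filepath.Abs(cacheDir)
	if err != nil {
		log.Fatalf("cache dir: %v", err)
	}
	return abs
}

func main() {
	fmt.Println(makeCacheDirAbsolute("cache")) // e.g. C:\work\cache or /home/user/cache
}
```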
Before this change, when NewObject was called the b2 backend would list
the directory containing the object in order to find it.
Unfortunately list calls are Class C transactions and cost more.
This patch switches to using HEAD requests instead. These are Class B
transactions. It is then necessary to parse the headers from the
response back into the data that we get from the listing. Fortunately
B2 returns exactly the same data, just in a different form.
Rclone will use the old directory listing method when looking for
files with versions as these can't be found via a HEAD request.
This change will particularly benefit --files-from and rclone serve
restic, but most operations will see some benefit.
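A sketch of the HEAD approach, assuming a public bucket (authorization
headers omitted) and using the X-Bz-* header names from the B2
documentation:

```go
package b2

import (
	"fmt"
	"net/http"
)

// headObject fetches object metadata with a single HEAD request
// (Class B) instead of listing the directory (Class C).
func headObject(downloadURL, bucket, name string) (http.Header, error) {
	resp, err := http.Head(downloadURL + "/file/" + bucket + "/" + name)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("HEAD: %s", resp.Status)
	}
	// The response carries the same fields a listing would have given:
	//   X-Bz-File-Id, X-Bz-Content-Sha1, X-Bz-Upload-Timestamp,
	//   Content-Length, Content-Type and X-Bz-Info-* metadata.
	return resp.Header, nil
}
```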
Uplink v1.4.1 provides two important improvements for rclone:
* Fix for a connection handling issue where an open project could
potentially become unusable because the underlying connection had
failed.
* Fix for a concurrent use issue in drpc.
Starting September 30th, 2021, the Dropbox OAuth flow will no longer
return long-lived access tokens. It will instead return short-lived
access tokens, and optionally return refresh tokens.
This patch adds the token_access_type=offline parameter, which causes
Dropbox to start returning short-lived access tokens (and refresh
tokens) now.
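For illustration, here is how the extra parameter can be added to a
standard golang.org/x/oauth2 flow; the app key and secret are
placeholders:

```go
package main

import (
	"fmt"

	"golang.org/x/oauth2"
)

func main() {
	conf := &oauth2.Config{
		ClientID:     "YOUR_APP_KEY",
		ClientSecret: "YOUR_APP_SECRET",
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://www.dropbox.com/oauth2/authorize",
			TokenURL: "https://api.dropboxapi.com/oauth2/token",
		},
		RedirectURL: "http://localhost:53682/",
	}
	// token_access_type=offline asks Dropbox for a short-lived access
	// token together with a refresh token.
	url := conf.AuthCodeURL("state",
		oauth2.SetAuthURLParam("token_access_type", "offline"))
	fmt.Println("Visit:", url)
}
```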
When using `--baseurl` before this patch, if a request was made to the
base URL without a trailing / then rclone would return a 404 error.
Unfortunately GVFS / Nautilus makes the request without the /
regardless of what the user put in.
This patch redirects the request to the base URL with a /. So if the
user was using `--baseurl rclone` then a request to
http://localhost/rclone would be redirected with a 308 response to
http://localhost/rclone/
Fixes #4814
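A minimal sketch of such a redirect in Go's net/http (not rclone's
actual server code), with `/rclone` standing in for the configured
base URL:

```go
package main

import "net/http"

// redirectBase sends a 308 from the bare base URL to the
// trailing-slash form, so e.g. /rclone -> /rclone/.
func redirectBase(baseURL string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == baseURL {
			http.Redirect(w, r, baseURL+"/", http.StatusPermanentRedirect)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/rclone/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	http.ListenAndServe("localhost:8080", redirectBase("/rclone", mux))
}
```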
Before this change rclone would upload the whole of a multipart file
before receiving a message from Dropbox that the path was too long.
This change hard-codes the 255-rune limit and checks it before
uploading any files.
Fixes #4805
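A sketch of the pre-upload check, assuming the 255-rune limit applies
to each path component (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

const maxNameLength = 255 // runes, per the Dropbox limit cited above

// checkPathLength rejects a path before upload if any component is
// longer than Dropbox allows.
func checkPathLength(p string) error {
	for _, part := range strings.Split(p, "/") {
		if utf8.RuneCountInString(part) > maxNameLength {
			return fmt.Errorf("%q: name longer than %d runes", part, maxNameLength)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkPathLength("dir/" + strings.Repeat("x", 300))) // error
	fmt.Println(checkPathLength("dir/ok.txt"))                      // <nil>
}
```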
Before this change, rclone would retry files with filenames that were
too long again and again.
This change recognises the malformed_path error that is returned and
marks it not to be retried, which stops unnecessary retrying of the file.
See #4805
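rclone's fserrors package provides NoRetryError for marking an error
as permanent; a sketch of the idea (shouldRetry is illustrative and
differs in detail from the backend's real retry predicate):

```go
package dropbox

import (
	"strings"

	"github.com/rclone/rclone/fs/fserrors"
)

// shouldRetry wraps malformed_path errors so the retry machinery
// gives up immediately instead of retrying a hopeless upload.
func shouldRetry(err error) (bool, error) {
	if err == nil {
		return false, nil
	}
	if strings.Contains(err.Error(), "malformed_path") {
		// The name is too long (or otherwise invalid): retrying can't help.
		return false, fserrors.NoRetryError(err)
	}
	return true, err
}
```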
Before this change rclone was using the copy endpoint to copy large objects.
This can fail for large objects with this error:
    Error 413: Copy spanning locations and/or storage classes could
    not complete within 30 seconds. Please use the Rewrite method
This change makes Copy use the Rewrite method as suggested by the
error message, which should work for copies of any size.
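The Rewrite API copies in chunks, handing back a token to resume with
until it reports Done. A sketch against google.golang.org/api/storage/v1,
with metadata handling trimmed:

```go
package gcs

import (
	"context"

	storage "google.golang.org/api/storage/v1"
)

// rewriteCopy copies src to dst via Objects.Rewrite, looping on the
// returned token until the server reports the copy is done.
func rewriteCopy(ctx context.Context, svc *storage.Service,
	srcBucket, srcObject, dstBucket, dstObject string) (*storage.Object, error) {
	token := ""
	for {
		call := svc.Objects.Rewrite(srcBucket, srcObject, dstBucket, dstObject,
			&storage.Object{}).Context(ctx)
		if token != "" {
			call = call.RewriteToken(token)
		}
		resp, err := call.Do()
		if err != nil {
			return nil, err
		}
		if resp.Done {
			return resp.Resource, nil
		}
		token = resp.RewriteToken // resume from where the last call stopped
	}
}
```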
Before this change cgofuse and libatexit would race to see which could
unmount the file system, with unpredictable results. On Linux it could
report an error or not, depending on which won the race.
This change checks to see if umount is being called from a signal and
if so leaves the unmounting to cgofuse/libfuse.
See #4804
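The idea can be sketched as an atomic flag set by the signal handler
and checked before unmounting; the names below are illustrative, not
rclone's actual atexit API:

```go
package atexit

import (
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
)

var signalled int32

func init() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-ch
		// Record that shutdown was started by a signal; the real code
		// would then run its exit handlers and exit.
		atomic.StoreInt32(&signalled, 1)
	}()
}

// Signalled reports whether shutdown was triggered by a signal.
func Signalled() bool {
	return atomic.LoadInt32(&signalled) == 1
}

func unmount() {
	if Signalled() {
		return // let cgofuse/libfuse handle the unmount
	}
	// ... perform the unmount ourselves ...
}
```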
Yandex appears to ignore mime types set as part of the PUT request or
as part of a PATCH request.
The docs make no mention of being able to set a mime type, so set
WriteMimeType=false, indicating the backend can't set mime types on
uploaded files.
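For reference, a fragment showing the relevant field on rclone's
fs.Features struct (the variable name is hypothetical and the real
backend constructor sets many more fields):

```go
package yandex

import "github.com/rclone/rclone/fs"

// Only the mime-type flag is shown; WriteMimeType=false tells the rest
// of rclone not to try to set a mime type on upload.
var mimeFeatures = &fs.Features{
	WriteMimeType: false,
}
```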