Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (macOS metadata does not use the "user." prefix, nor is it
limited to xattrs). Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.
After this change, the local backend (on macOS only) supports "server-side copy"
similarly to other remotes, and achieves this by using (when possible) macOS's
native APFS "cloning", which is the same underlying mechanism used when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:
- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. two identical files do not consume more storage than just one).
The concept is similar to a "hard link", except that subsequent modifications
will not affect the original file.
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, such as Finder
and Spotlight params.
When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage). Server-side copy is only used when both remotes are local (and not
wrapped by other remotes, such as crypt). The behavior of local on non-macOS
systems is unchanged.
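For illustration, here is a minimal sketch of the clone-then-fallback idea using
golang.org/x/sys/unix. This is not rclone's actual implementation; names and
error handling are simplified, and it only builds on macOS.

package clonesketch

import (
	"errors"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an instant, deduplicated APFS clone first and falls back
// to an ordinary byte-for-byte copy when cloning is not possible.
func cloneOrCopy(src, dst string) error {
	err := unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW)
	if err == nil {
		return nil
	}
	// ENOTSUP: not an APFS volume; EXDEV: src and dst are on different volumes.
	if !errors.Is(err, unix.ENOTSUP) && !errors.Is(err, unix.EXDEV) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}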
This adds an additional flag, --unix-socket, which if supplied connects
to the given Unix socket:
rclone rcd --rc-addr unix:///tmp/my.socket
rclone rc --unix-socket /tmp/my.socket core/stats
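To illustrate what the flag does under the hood, an HTTP client can dial the
socket while still using a normal URL for the request. This is a hypothetical
standalone sketch, not rclone's code; the socket path and request body are
illustrative.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"strings"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the address derived from the URL and dial the socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/tmp/my.socket")
			},
		},
	}
	// The URL only supplies the Host header and the path (core/stats here).
	resp, err := client.Post("http://localhost/core/stats", "application/json", strings.NewReader("{}"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}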
This adds a new optional parameter to the backend, to specify a path
to a Unix domain socket to connect to, instead of the specified URL.
The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.
This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix domain socket:
rclone serve webdav --addr unix:///tmp/my.socket remote:path
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
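Following rclone's usual flag-to-config mapping, the same setting would be
expected to appear in a config file roughly as below; the exact option name is
an assumption here, not confirmed.

[mywebdav]
type = webdav
url = http://localhost
unix_socket = /tmp/my.socket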
This converts the ChunkedReader into an interface and provides two
implementations: one sequential and one parallel.
This can be used to improve the performance of the VFS on high
bandwidth or high latency links.
Fixes #4760
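A rough sketch of the shape of the design follows. The names are illustrative
only and do not match the actual fs/chunkedreader API; error handling is
simplified for brevity.

package chunked

import (
	"bytes"
	"io"
	"sync"
)

// ChunkedReader is the common interface consumed by the VFS.
type ChunkedReader interface {
	io.ReadCloser
}

// sequential fetches one chunk at a time from the underlying source.
type sequential struct {
	src       io.ReaderAt
	chunkSize int64
	offset    int64
}

func (s *sequential) Read(p []byte) (int, error) {
	if int64(len(p)) > s.chunkSize {
		p = p[:s.chunkSize]
	}
	n, err := s.src.ReadAt(p, s.offset)
	s.offset += int64(n)
	return n, err
}

func (s *sequential) Close() error { return nil }

// parallel fetches the next few chunks concurrently and serves them in order,
// which helps on high-bandwidth or high-latency links.
type parallel struct {
	src       io.ReaderAt
	chunkSize int64
	streams   int
	offset    int64
	buf       bytes.Buffer
	eof       bool
}

func (p *parallel) fill() {
	chunks := make([][]byte, p.streams)
	var wg sync.WaitGroup
	for i := 0; i < p.streams; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			b := make([]byte, p.chunkSize)
			n, _ := p.src.ReadAt(b, p.offset+int64(i)*p.chunkSize)
			chunks[i] = b[:n]
		}(i)
	}
	wg.Wait()
	for _, c := range chunks {
		p.buf.Write(c)
		p.offset += int64(len(c))
		if int64(len(c)) < p.chunkSize {
			p.eof = true
			break
		}
	}
}

func (p *parallel) Read(q []byte) (int, error) {
	if p.buf.Len() == 0 {
		if p.eof {
			return 0, io.EOF
		}
		p.fill()
	}
	return p.buf.Read(q)
}

func (p *parallel) Close() error { return nil }

// New picks an implementation based on the number of download streams requested.
func New(src io.ReaderAt, chunkSize int64, streams int) ChunkedReader {
	if streams > 1 {
		return &parallel{src: src, chunkSize: chunkSize, streams: streams}
	}
	return &sequential{src: src, chunkSize: chunkSize}
}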
There were a lot of instances of this lint error:
printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
These were fixed by re-arranging the arguments and adding "%s".
Quite a few genuine bugs were found in the process too.
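For illustration, this is the shape of the fix as a minimal standalone example
(fs.Logf is rclone's logging helper):

package main

import (
	"errors"

	"github.com/rclone/rclone/fs"
)

func main() {
	err := errors.New("upload failed: 100% of retries used")

	// Before: govet flags "non-constant format string", and the message would
	// be misprinted because it contains a % character.
	// fs.Logf(nil, err.Error())

	// After: the variable text is passed as an argument behind a "%s" format.
	fs.Logf(nil, "%s", err.Error())
}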
There were a lot of instances of this lint error:
printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
Most of these could not easily be fixed, so nolint lines were added.
This should probably be done in a neater way, perhaps by making
LogColorf/ErrorColorf functions.
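For illustration, a hypothetical example of the nolint pattern; the colour
helper here is a stand-in, not rclone's actual code.

package main

import "github.com/rclone/rclone/fs"

// colorise is an illustrative stand-in for the colour-wrapping helpers that
// make these format strings non-constant.
func colorise(code, format string) string {
	return code + format + "\x1b[0m"
}

func main() {
	const red = "\x1b[31m"
	// The format string is computed at runtime, so govet cannot check it;
	// the warning is suppressed on this line instead.
	fs.Logf(nil, colorise(red, "copied %d files"), 3) //nolint:govet
}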
When copying Google Docs to Backblaze B2, errors like this would happen:
ERROR : test.docx: Failed to calculate src hash: hash type not supported
ERROR : test.docx: corrupted on transfer: sha1 hashes differ src
This was due to an oversight in
8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum
which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
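A hypothetical sketch of the shape of the fix follows; the names are
illustrative, and the real change is in the drive backend's base object.

package drivesketch

import (
	"context"

	"github.com/rclone/rclone/fs/hash"
)

// baseObject stands in for the drive backend's common object type, which
// Google Docs objects are built on.
type baseObject struct {
	md5sum, sha1sum, sha256sum string
}

// Hash returns the requested checksum. Before the fix this method only
// recognised MD5, so b2's request for a SHA-1 failed with
// "hash type not supported"; accepting SHA-1/SHA-256 as well (returning ""
// when the value is simply unknown) lets the transfer verify correctly.
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
	switch t {
	case hash.MD5:
		return o.md5sum, nil
	case hash.SHA1:
		return o.sha1sum, nil
	case hash.SHA256:
		return o.sha256sum, nil
	}
	return "", hash.ErrUnsupported
}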
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.
This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
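For reference, the equivalent for a normal transfer is to set the header with
the --header-upload flag, e.g. (remote and file names are illustrative):
rclone copy --header-upload "Cache-Control: no-transform" compressed-file.gz r2:bucket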