We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.
Reference: https://go.dev/ref/spec#Min_and_max
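For illustration only (a made-up helper, not a call site from this change), code that previously needed a hand-rolled helper can now use the built-ins directly:

    // Go 1.21+ provides min and max as language built-ins, so no helper
    // function is needed. clampChunkSize is an invented example name.
    func clampChunkSize(size, lower, upper int64) int64 {
        return min(max(size, lower), upper)
    }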
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
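For example, an equivalent way to force a "deep" copy today (illustrative command only) is:

rclone copy --disable Copy /src /dst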
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.
After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:
- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, such as Finder
and Spotlight params.
When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-Mac systems is unchanged.
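A rough sketch of the clone-with-fallback approach is below. It is illustrative only: the helper names and error handling are assumptions rather than rclone's actual implementation; it relies on the clonefile(2) wrapper exposed by golang.org/x/sys/unix on macOS.

    //go:build darwin

    package example

    import (
        "io"
        "os"

        "golang.org/x/sys/unix"
    )

    // cloneOrCopy attempts an instant APFS clone first and falls back to a
    // byte-for-byte "deep" copy when cloning is not supported (for example
    // when the destination volume is not APFS).
    func cloneOrCopy(src, dst string) error {
        err := unix.Clonefile(src, dst, 0)
        if err == nil {
            return nil // cloned: fast and deduplicated
        }
        if err != unix.ENOTSUP && err != unix.EXDEV {
            return err // a real failure, not just "cloning unavailable here"
        }
        return deepCopy(src, dst)
    }

    // deepCopy is the fallback: a plain copy that uses full disk space.
    func deepCopy(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }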
This adds a new optional parameter to the backend, to specify a path
to a Unix domain socket to connect to instead of the specified URL.
The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.
This allows using rclone with the webdav backend to connect to a WebDAV
server exposed on a Unix domain socket:
rclone serve webdav --addr unix:///tmp/my.socket remote:path
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
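A minimal sketch of how an HTTP client can be pointed at a Unix domain socket while the URL keeps supplying host and path (illustrative only; this is not the webdav backend's actual code, and the socket path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        socketPath := "/tmp/my.socket" // e.g. the value of --webdav-unix-socket
        client := &http.Client{
            Transport: &http.Transport{
                // Dial the Unix socket regardless of the host in the URL;
                // the URL is still used for the Host header and the path.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", socketPath)
                },
            },
        }
        resp, err := client.Get("http://localhost/remote/path")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }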
This converts the ChunkedReader into an interface and provides two
implementations: one sequential and one parallel.
This can be used to improve the performance of the VFS on high
bandwidth or high latency links.
Fixes #4760
There were a lot of instances of this lint error:
printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
These were fixed by re-arranging the arguments and adding "%s".
Quite a few genuine bugs were found too.
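For illustration, the shape of the fix (using a placeholder call site, not one from this change):

    package example

    import "github.com/rclone/rclone/fs"

    func logBefore(o interface{}, msg string) {
        // govet: non-constant format string - any '%' inside msg would be
        // interpreted as a formatting verb.
        fs.Logf(o, msg)
    }

    func logAfter(o interface{}, msg string) {
        // Fixed by making the format string constant and passing msg as
        // an argument.
        fs.Logf(o, "%s", msg)
    }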
When copying Google Docs to Backblaze B2, errors like this would happen:
ERROR : test.docx: Failed to calculate src hash: hash type not supported
ERROR : test.docx: corrupted on transfer: sha1 hashes differ src
This was due to an oversight in
8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum
which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.
This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
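For anyone hitting the same behavior outside the tests, one possible workaround (illustrative command using rclone's `--header-upload` flag) is to set the header on upload:

rclone copyto --header-upload "Cache-Control: no-transform" file.gz r2:bucket/file.gz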
SDK v2 conversion
Changes
- `--s3-sts-endpoint` is no longer supported
- `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non-AWS)
Pikpak can accelerate file uploads by leveraging existing content
in its storage (identified by a custom hash called gcid).
Previously, file transfer statistics were incorrect for uploads
without outbound traffic as the input stream remained unchanged.
This commit addresses the issue by:
* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads
with no incoming/outgoing traffic by marking them as Server Side Copies.
This change ensures correct statistics tracking and improves overall user experience.
This commit optimizes the PikPak upload process by pre-fetching the Global
Content Identifier (gcid) from the API server before calculating it locally.
Previously, a gcid required for uploads was calculated locally. This process was
resource-intensive and time-consuming. By first checking for a cached gcid
on the server, we can potentially avoid the local calculation entirely.
This significantly improves upload speed especially for large files.
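In outline the change follows a check-the-cache-first pattern; the sketch below uses hypothetical helper names (fetchCachedGcid, calcGcidLocally), not the backend's real API:

    package example

    import "context"

    // fetchCachedGcid asks the API server whether it already knows the gcid
    // for this content (hypothetical helper).
    func fetchCachedGcid(ctx context.Context, remote string, size int64) (string, error) {
        return "", nil // pretend nothing is cached
    }

    // calcGcidLocally reads the whole file to compute the gcid locally
    // (hypothetical helper for the expensive path).
    func calcGcidLocally(ctx context.Context, localPath string) (string, error) {
        return "", nil
    }

    // getGcid prefers the server's cached value and only falls back to the
    // slow local calculation when the server has nothing for this content.
    func getGcid(ctx context.Context, remote, localPath string, size int64) (string, error) {
        if gcid, err := fetchCachedGcid(ctx, remote, size); err == nil && gcid != "" {
            return gcid, nil
        }
        return calcGcidLocally(ctx, localPath)
    }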
Fixes #7824
Statements like `rclone copy <somewhere> .` will spontaneously miss files
if `.` expands to a path containing a full-width replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.