This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. This happens because a substring taken from a Go
string shares the underlying memory of its source, so retaining even a
small extracted value prevents garbage collection of the entire large XML
response.
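As a minimal sketch of the technique (not necessarily the exact code in
the commit), cloning the extracted substring breaks its reference to the
large backing array; the helper name and XML tag below are illustrative:
```
// Sketch only: detaching a small substring from a large response body so
// the body can be garbage collected.
package main

import (
	"fmt"
	"strings"
)

// extractID pulls a small value out of a potentially huge XML response.
func extractID(largeXML string) string {
	start := strings.Index(largeXML, "<ID>")
	end := strings.Index(largeXML, "</ID>")
	if start < 0 || end < start+len("<ID>") {
		return ""
	}
	id := largeXML[start+len("<ID>") : end] // shares largeXML's backing array

	// strings.Clone copies the bytes into fresh memory, so the large
	// response is no longer kept alive by the small returned string.
	return strings.Clone(id)
}

func main() {
	fmt.Println(extractID("<Response><ID>abc123</ID></Response>"))
}
```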
Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
The Copy method was downloading the file and uploading it again rather
than doing a server side copy.
From the docs it looks like the upload process can read from a URL, so a
server side copy might be possible, but the removed code was incorrect.
All user visible Durations should be fs.Duration rather than
time.Duration. The suffix is then optional and defaults to s (seconds).
Additional suffixes d, w, M and y are supported, in addition to ms, s, m
and h - the only ones supported by time.Duration. Absolute times can also
be specified and will be interpreted as the duration relative to now.
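A short illustration of the difference, assuming rclone's own fs.Duration
parser (fs/parseduration.go); only the standard library is actually called
here:
```
// Sketch contrasting time.Duration parsing with the extended suffixes
// described above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.ParseDuration knows only ns, us, ms, s, m and h.
	if _, err := time.ParseDuration("7d"); err != nil {
		fmt.Println("stdlib rejects a day suffix:", err)
	}

	// An fs.Duration flag additionally accepts a bare number (defaulting
	// to seconds), the suffixes d, w, M and y, and an absolute time such
	// as "2025-06-01", interpreted as the duration from now to that time.
}
```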
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error
ManagedIdentityCredential.GetToken() requires exactly one scope
when doing server side copies.
This was introduced in:
3a5ddfcd3c azureblob: implement multipart server side copy
This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.
Fixes #8662
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.
This adds EncodeDot to the encoding which encodes "." and ".." names.
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.
This is problematic for several reasons:
1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect: 100-continue isn't supported, the whole body gets uploaded before the redirect is seen
This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.
It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.
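A minimal sketch of that HEAD-then-PUT approach; the URL and error
handling are illustrative, not Linkbox's actual API:
```
// Sketch: issue a HEAD request first and read any redirect Location so
// the body is only uploaded to the right place.
package main

import (
	"fmt"
	"net/http"
)

// resolveUploadURL returns the redirect target if the server answers the
// HEAD request with a redirect, otherwise the original URL.
func resolveUploadURL(uploadURL string) (string, error) {
	client := &http.Client{
		// Stop the client following redirects so Location can be read.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Head(uploadURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 && resp.StatusCode < 400 {
		if loc := resp.Header.Get("Location"); loc != "" {
			return loc, nil
		}
	}
	// No redirect - upload to the original URL as before.
	return uploadURL, nil
}

func main() {
	fmt.Println(resolveUploadURL("https://example.com/upload"))
}
```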
See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
This commit improves error handling in two specific scenarios:
* Missing Download Links: A 5-second delay is introduced when a download
link is missing, as low-level retries aren't enough. Empirically, it
takes about 30s-1m for the link to become available. This resolves
failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/ObjectUpdate,
vfs: TestFileReadAtNonZeroLength
* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
record for gcid". These errors are non-recoverable, so retrying is futile
(see the sketch below).
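A hedged sketch of the kind of check the second bullet describes; the
function shape and error matching are hypothetical, not the backend's
actual shouldRetry implementation:
```
// Hypothetical sketch of skipping retries for non-recoverable 500s.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// shouldRetry reports whether a response is worth retrying.
func shouldRetry(resp *http.Response, body string) bool {
	if resp == nil {
		return false
	}
	// 500s from idx.shub.mypikpak.com saying "no record for gcid" never
	// recover, so retrying them is pointless.
	if resp.StatusCode == http.StatusInternalServerError &&
		resp.Request != nil &&
		strings.Contains(resp.Request.URL.Host, "idx.shub.mypikpak.com") &&
		strings.Contains(body, "no record for gcid") {
		return false
	}
	// Otherwise retry the usual transient status codes.
	switch resp.StatusCode {
	case http.StatusTooManyRequests, http.StatusInternalServerError,
		http.StatusBadGateway, http.StatusServiceUnavailable,
		http.StatusGatewayTimeout:
		return true
	}
	return false
}

func main() {
	u, _ := url.Parse("https://idx.shub.mypikpak.com/api/gcid")
	resp := &http.Response{
		StatusCode: http.StatusInternalServerError,
		Request:    &http.Request{URL: u},
	}
	fmt.Println(shouldRetry(resp, `{"error":"no record for gcid"}`)) // false
}
```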
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.
* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads
  (see the sketch below).
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
validation).
* Adjusted default `upload_concurrency` from 5 to 4.
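A hypothetical sketch of the upload_cutoff / chunk_size decision; the
constants and function names are illustrative, not the backend's real
defaults or code:
```
// Sketch: files at or below the cutoff go up in one request; larger files
// are split into fixed-size chunks for multipart upload.
package main

import "fmt"

const (
	uploadCutoff = 200 * 1024 * 1024 // assumed example cutoff
	chunkSize    = 64 * 1024 * 1024  // assumed example chunk size
)

// numChunks returns how many chunks a multipart upload of size bytes needs.
func numChunks(size int64) int64 {
	return (size + chunkSize - 1) / chunkSize
}

func upload(size int64) string {
	if size <= uploadCutoff {
		return "single part upload"
	}
	return fmt.Sprintf("multipart upload in %d chunks", numChunks(size))
}

func main() {
	fmt.Println(upload(5 * 1024 * 1024))    // small file
	fmt.Println(upload(1024 * 1024 * 1024)) // 1 GiB file
}
```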
In this commit the source of the modtime was accidentally changed to the wrong object:
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This reverts that change and fixes the integration tests.
In
b1d774c2e3 combine: implement ListP interface
we introduced the ListP interface to the combine backend. The new code was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but the failure was ignored by accident.
Due to a change in Go enabled by the `go 1.22` line in `go.mod`, rclone
stopped skipping junction points ("My Documents" in particular) when
`--skip-links` is set on Windows.
This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.
This fix skips os.ModeIrregular objects when --skip-links is set, on
Windows only.
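A minimal sketch of the mode check, assuming a --skip-links style flag;
the surrounding walk logic is illustrative rather than rclone's local
backend code:
```
// Sketch: skip symlinks everywhere; on Windows also skip junction points,
// which now surface as os.ModeIrregular rather than os.ModeSymlink.
package main

import (
	"fmt"
	"os"
	"runtime"
)

// shouldSkip reports whether an entry should be skipped when skipLinks is set.
func shouldSkip(fi os.FileInfo, skipLinks bool) bool {
	if !skipLinks {
		return false
	}
	mode := fi.Mode()
	if mode&os.ModeSymlink != 0 {
		return true
	}
	if runtime.GOOS == "windows" && mode&os.ModeIrregular != 0 {
		return true
	}
	return false
}

func main() {
	fi, err := os.Lstat(".")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("skip:", shouldSkip(fi, true))
}
```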
Fixes #8561
See: https://github.com/golang/go/issues/73827
The API we use for OpenWriterAt seems to have been disabled at pcloud
```
PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1
```
gives
```
{
    "result": 2003,
    "error": "Access denied. You do not have permissions to perform this operation."
}
```
So disable OpenWriterAt and hence multipart uploads for the moment.
Before this change, chunker could double-transform a file under certain
conditions, when --name-transform was in use. This change fixes the issue by
ensuring that --name-transform is disabled during internal file moves.
Before this change, rclone would crash if no metadata was updated.
This could happen if --onedrive-metadata-permissions was set to read but
metadata to write was supplied.
Fixes #8586
Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.
This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage
Before: Users selected a fixed endpoint from a list (v1 only)
After: Users must input their own endpoint (v2-compatible)
Before this fix multipart server side copies would fail.
This problem was due to an incorrect calculation of the number of
parts to transfer - it calculated 1 part to transfer rather than 0.
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.
Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.
After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:
- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-host style available with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features
FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.
Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.
Documentation and test configurations are included.
Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok github.com/rclone/rclone/backend/s3 232.444s
```
Update the Gofile backend to use the new direct upload endpoint based on the latest API changes.
The previous implementation used dynamic server selection, but Gofile has simplified their API
to use a single upload endpoint at https://upload.gofile.io/uploadfile.
This change:
- Removes server selection logic and related code
- Simplifies the Fs struct by removing server-related fields
- Updates the upload process to use the direct upload URL (sketched below)
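A hedged sketch of an upload against the single endpoint named above; the
form field name and Authorization header are assumptions, not Gofile's
documented API:
```
// Sketch: POST a multipart/form-data body to the direct upload endpoint.
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

const uploadEndpoint = "https://upload.gofile.io/uploadfile"

func uploadFile(path, token string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// Build a multipart form body containing the file contents.
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	part, err := w.CreateFormFile("file", filepath.Base(path)) // assumed field name
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, f); err != nil {
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}

	req, err := http.NewRequest("POST", uploadEndpoint, &buf)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	req.Header.Set("Authorization", "Bearer "+token) // assumed auth scheme

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
	return nil
}

func main() {
	if err := uploadFile("example.txt", os.Getenv("GOFILE_TOKEN")); err != nil {
		fmt.Println(err)
	}
}
```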
This was removed as part of #1716 to fix rclone uploads taking double
the space.
7f744033d8 onedrive: Removed upload cutoff and always do session uploads
As far as I can see, two revisions are still created for single part
uploads, so the default for this flag is set to -1 (off).
However it may be useful for experimentation.
See: #8545
Before this change, perhaps on heavily loaded Sharepoint servers, uploads
would sometimes fail with the error:
{"error":{"code":"itemNotFound","message":"The upload session was not found"}}
This retries the upload after a 5 second delay up to --low-level-retries times.
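A hypothetical sketch of the retry-with-delay behaviour; the helper names
and error matching are illustrative, not the onedrive backend's real code:
```
// Sketch: retry an upload part when Sharepoint transiently reports that
// the upload session was not found, sleeping 5 seconds between attempts.
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

const lowLevelRetries = 10 // stands in for --low-level-retries

func uploadWithRetry(doUploadPart func() error) error {
	var err error
	for try := 0; try < lowLevelRetries; try++ {
		err = doUploadPart()
		if err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "The upload session was not found") {
			return err // a different error - give up immediately
		}
		time.Sleep(5 * time.Second)
	}
	return err
}

func main() {
	attempts := 0
	err := uploadWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`{"error":{"code":"itemNotFound","message":"The upload session was not found"}}`)
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```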
Fixes #8545
As part of changes to the Google Photos APIs the scopes rclone used
for accessing Google Photos have been removed.
This commit replaces the scopes with updated ones.
These aren't as powerful as the old scopes - this means rclone will
only be able to download photos it uploaded from March 31, 2025.
To use these new scopes do `rclone reconnect yourgooglephotosremote:`
Fixes #8434
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>