Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.
This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage
Before: Users selected a fixed endpoint from a list (v1 only)
After: Users must input their own endpoint (v2-compatible)
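For illustration, a minimal rclone.conf entry for a Lyve Cloud v2 account might look like the sketch below; every value is a placeholder, and the endpoint in particular must be copied from your own Lyve Cloud console, since there is no shared default:
```
[lyve]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = XXX
# Lyve Cloud v2 endpoints are unique per account/reseller - paste the
# exact endpoint shown in your Lyve Cloud console here.
endpoint = <your-account-endpoint>
```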
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.
Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.
After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:
- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-hosted style available with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features
FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.
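As a sketch, a remote using the new provider could be configured like this; the endpoint is a placeholder for a FlashBlade data VIP or its DNS name:
```
[flashblade]
type = s3
provider = FlashBlade
access_key_id = XXX
secret_access_key = XXX
# Placeholder - point this at your FlashBlade data VIP or DNS name.
# Path-style URLs are used by default, so no wildcard DNS is required.
endpoint = https://10.0.0.10
```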
Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.
Documentation and test configurations are included.
Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok github.com/rclone/rclone/backend/s3 232.444s
```
A previous documentation fix (#7077) detailing how to restore single
objects from AWS S3 Glacier was lost. This reinstates it and also makes
clearer that rclone provides restore functionality natively.
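The native functionality in question is the s3 backend's `restore` command; restoring a single object looks roughly like this (bucket, path and option values are illustrative):
```
# Ask Glacier to retrieve one object at Standard priority and keep the
# restored copy available for 2 days
rclone backend restore s3:bucket/path/to/object -o priority=Standard -o lifetime=2

# Poll until the restore has completed
rclone backend restore-status s3:bucket/path/to/object
```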
Co-authored-by: danielkrajnik <dan94kra@gmail.com>
Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:
`corrupted on transfer: sizes differ src 0 vs dst 999`
This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.
This fixes the problem by disabling the middleware responsible for that
override.
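For context, the failure could be reproduced along these lines; `r2:` is an illustrative remote name for a Cloudflare R2 bucket:
```
# Upload a gzip-compressed file, recording its encoding
gzip -k file.txt
rclone copyto file.txt.gz r2:bucket/file.txt.gz --header-upload "Content-Encoding: gzip"

# Before this fix the download then failed the size check with
#   corrupted on transfer: sizes differ src 0 vs dst 999
rclone copyto r2:bucket/file.txt.gz /tmp/file.txt.gz
```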
This adds support for AWS S3 directory buckets. It ensures no
`Content-MD5` headers are sent and that ETags are not interpreted as MD5
sums. `X-Amz-Meta-Md5chksum` will be set on all objects whether single
or multipart uploaded.
This also sets `no_check_bucket = true`.
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
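A sketch of a remote set up for directory buckets, assuming a `directory_buckets` option is how this support is enabled (the option name, credentials and region are placeholders):
```
[s3express]
type = s3
provider = AWS
region = us-east-1
access_key_id = XXX
secret_access_key = XXX
# Assumed option name for this change; enabling it also sets
# no_check_bucket = true as noted above.
directory_buckets = true
```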
Cloudflare will normally automatically decompress files that have
`Content-Encoding: gzip` set when they are downloaded. This is not what
AWS S3 does, and it breaks the integration tests.
This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
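Concretely, the test upload now looks something like this (remote and file names are illustrative):
```
rclone copyto file.txt.gz r2:bucket/file.txt.gz \
  --header-upload "Content-Encoding: gzip" \
  --header-upload "Cache-Control: no-transform"
```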
SDK v2 conversion
Changes
- `--s3-sts-endpoint` is no longer supported
- Add `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non-AWS providers; see the example below)
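For example, a provider that rejects trailer checksums could be used along these lines; `minio:` is an illustrative remote name:
```
rclone copy /tmp/data minio:bucket --s3-use-unsigned-payload=true
```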
serve s3 fixes
- Changes
  - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
  - Enable `Content-MD5` integrity checks
  - Remove locking after code audit
- Documentation
  - Factor out documentation into a separate file
  - Add Quickstart to docs (sketched after this list)
  - Add Bugs section to docs
  - Add experimental tag to docs
  - Add rclone provider to s3 backend docs
- Fixes
  - Correct quirks in s3 backend
  - Change fmt.Printlns into fs.Logs
  - Make metadata storage per backend not global
  - Log on startup if anonymous access is enabled
- Coding style fixes
  - Rename `fs` to `vfs` to save confusion with the rest of the rclone code
  - Rename `db` to `b` for `*s3Backend`
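A quickstart sketch along the lines of the one added to the docs; the key pair values are placeholders:
```
# Serve remote:path over S3, requiring this access key / secret key pair
# (use --addr to choose the listen address)
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```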
Fixes #7062