This fixes the goroutine leak in the stats accounting:
- don't start stats average loop when initializing `StatsInfo`
- stop the loop instead of pausing
- use a context instead of a channel
- move the `period` variable into the `averageValues` struct (see the sketch below)
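
A minimal sketch of the new shape, assuming field and method names based on the description above (the real `StatsInfo`/`averageValues` code carries more state):

```go
package accounting

import (
	"context"
	"sync"
	"time"
)

// averageValues owns the moving-average state; period lives here now
// rather than on StatsInfo.
type averageValues struct {
	mu      sync.Mutex
	period  float64            // smoothing period, moved in from StatsInfo
	speed   float64            // current moving-average speed
	cancel  context.CancelFunc // cancels the loop instead of pausing it
	stopped sync.WaitGroup
}

// startAverageLoop is called on demand rather than when StatsInfo is
// initialized, so no goroutine exists until stats are actually used.
func (a *averageValues) startAverageLoop(ctx context.Context) {
	ctx, a.cancel = context.WithCancel(ctx)
	a.stopped.Add(1)
	go func() {
		defer a.stopped.Done()
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done(): // stopped, not paused
				return
			case <-ticker.C:
				a.mu.Lock()
				// recompute the moving average here using a.period
				a.mu.Unlock()
			}
		}
	}()
}

// stopAverageLoop cancels the context and waits for the goroutine to
// exit, so it cannot leak.
func (a *averageValues) stopAverageLoop() {
	if a.cancel != nil {
		a.cancel()
		a.stopped.Wait()
	}
}
```
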
Fixes #8570
As part of the out of memory syncing code, in this commit
0148bd46688c1d1a march: Implement callback based syncing
we changed the syncing method to use a sorted stream of directory
entries.
Unfortunately, as part of this change, the sort order of files and
directories became undefined.
This meant that if there existed both a file `foo` and a directory
`foo` in the same directory (as is common on object storage systems)
then these could be matched up incorrectly.
They could be matched up correctly like this:
- `foo` (directory) - `foo` (directory)
- `foo` (file) - `foo` (file)
Or incorrectly like this (one of many possibilities):
- no match - `foo` (file)
- `foo` (directory) - `foo` (directory)
- `foo` (file) - no match
Which outcome you got depended purely on how the input listings were ordered.
This in turn made container-based syncing with a duplicated file and
directory name erratic, deleting files when it shouldn't.
This patch ensures that directories always sync before files by adding
a suffix to the sort key depending on whether the entry is a file or a
directory.
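
A self-contained sketch of the idea (the suffix bytes and names here are illustrative, not the exact ones used in the march code):

```go
package main

import (
	"fmt"
	"sort"
)

// entry stands in for a directory listing entry in this sketch.
type entry struct {
	remote string
	isDir  bool
}

// sortKey appends a suffix so that, when a file and a directory share
// a name, the directory always sorts (and therefore syncs) first.
func sortKey(e entry) string {
	if e.isDir {
		return e.remote + "\x00"
	}
	return e.remote + "\x01"
}

func main() {
	entries := []entry{{"foo", false}, {"foo", true}}
	sort.Slice(entries, func(i, j int) bool {
		return sortKey(entries[i]) < sortKey(entries[j])
	})
	fmt.Println(entries) // [{foo true} {foo false}] - the directory comes first
}
```
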
Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.
This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage
Before: Users selected a fixed endpoint from a list (v1 only)
After: Users must input their own endpoint (v2-compatible)
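
For illustration only, a v2 remote now ends up looking something like this, where the endpoint value is a placeholder and must be replaced with the reseller-specific endpoint issued for your account:

```
[lyvecloud]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.<region>.<reseller>.example.com
```
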
Before this fix, multipart server-side copies would fail.
This was due to an incorrect calculation of the number of parts to
transfer: it calculated 1 part to transfer rather than 0.
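
Sketched as ceiling division (illustrative, not the backend's actual code), the part count for a zero-length source correctly comes out as 0:

```go
package main

import "fmt"

// numParts returns how many chunks a multipart server-side copy needs.
// Ceiling division gives 0 parts for a 0-byte object, where the buggy
// calculation reported 1.
func numParts(size, chunkSize int64) int64 {
	return (size + chunkSize - 1) / chunkSize
}

func main() {
	fmt.Println(numParts(0, 5*1024*1024))            // 0
	fmt.Println(numParts(12*1024*1024, 5*1024*1024)) // 3
}
```
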
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.
Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.
After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:
- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-host style with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features
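
As an illustration (the endpoint below is a placeholder for your FlashBlade's S3 endpoint), the minimal configuration now reduces to:

```
[flashblade]
type = s3
provider = FlashBlade
access_key_id = XXX
secret_access_key = YYY
endpoint = https://flashblade.example.com
```
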
FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.
Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.
Documentation and test configurations are included.
Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok github.com/rclone/rclone/backend/s3 232.444s
```
Update the Gofile backend to use the new direct upload endpoint based on the latest API changes.
The previous implementation used dynamic server selection, but Gofile has simplified their API
to use a single upload endpoint at https://upload.gofile.io/uploadfile.
This change:
- Removes server selection logic and related code
- Simplifies the Fs struct by removing server-related fields
- Updates the upload process to use the direct upload URL
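
A rough sketch of a direct upload against the new endpoint (the form field name and token header are assumptions for illustration, not taken from the backend code):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

// uploadFile posts a single file to the fixed upload endpoint - no
// server-selection round trip is needed any more.
func uploadFile(path, token string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", filepath.Base(path))
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, f); err != nil {
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}

	req, err := http.NewRequest("POST", "https://upload.gofile.io/uploadfile", &body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	req.Header.Set("Authorization", "Bearer "+token) // auth header is an assumption

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("upload status:", resp.Status)
	return nil
}

func main() {
	if err := uploadFile("example.txt", os.Getenv("GOFILE_TOKEN")); err != nil {
		fmt.Println("upload failed:", err)
	}
}
```
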
This removes logrus, which is no longer developed, and replaces it
with log/slog from the Go standard library.
It implements its own slog Handler which is backwards compatible with
all of rclone's previous logging modes.
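
A minimal sketch of what a custom slog.Handler looks like (this is the general shape, not rclone's actual handler, which also maps attrs, levels and output modes):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log/slog"
	"os"
)

// textHandler formats records in a fixed "TIME LEVEL : message" style,
// roughly mirroring classic log output.
type textHandler struct {
	out   io.Writer
	level slog.Level
}

func (h *textHandler) Enabled(_ context.Context, l slog.Level) bool { return l >= h.level }

func (h *textHandler) Handle(_ context.Context, r slog.Record) error {
	_, err := fmt.Fprintf(h.out, "%s %s : %s\n",
		r.Time.Format("2006/01/02 15:04:05"), r.Level, r.Message)
	return err
}

// Attrs and groups are ignored in this sketch.
func (h *textHandler) WithAttrs([]slog.Attr) slog.Handler { return h }
func (h *textHandler) WithGroup(string) slog.Handler      { return h }

func main() {
	slog.SetDefault(slog.New(&textHandler{out: os.Stderr, level: slog.LevelInfo}))
	slog.Info("hello from slog")
}
```
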
Add a trailing slash to directory entries in the s3 ListObjectsV2 response, because some clients expect a trailing forward slash to distinguish directories from ordinary objects.
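
Sketched (names are illustrative), the change amounts to this when building the response keys:

```go
package main

import (
	"fmt"
	"strings"
)

// responseKey returns the key used in the ListObjectsV2 response.
// Directory entries get a trailing "/" so clients can tell them
// apart from ordinary objects.
func responseKey(remote string, isDir bool) string {
	if isDir && !strings.HasSuffix(remote, "/") {
		return remote + "/"
	}
	return remote
}

func main() {
	fmt.Println(responseKey("photos", true))      // "photos/"
	fmt.Println(responseKey("photos.zip", false)) // "photos.zip"
}
```
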
Fixes #8464
This was removed as part of #1716 to fix rclone uploads taking double
the space.
7f744033d8eae0fa onedrive: Removed upload cutoff and always do session uploads
As far as I can see, two revisions are still being created for single
part uploads, so the default for this flag is set to -1 (off).
However it may be useful for experimentation.
See: #8545
Before this change, perhaps on heavily loaded SharePoint servers,
uploads would sometimes fail with the error:
{"error":{"code":"itemNotFound","message":"The upload session was not found"}}
This retries the upload after a 5 second delay, up to --low-level-retries times.
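
A rough sketch of the retry shape, with the error matching and constants shown only for illustration (the real change uses rclone's existing retry plumbing):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"strings"
	"time"
)

// uploadWithRetry re-runs the upload when the server reports that the
// upload session was not found, waiting 5 seconds between attempts,
// up to `retries` extra times (mirroring --low-level-retries).
func uploadWithRetry(ctx context.Context, retries int, upload func() error) error {
	var err error
	for try := 0; try <= retries; try++ {
		err = upload()
		if err == nil || !strings.Contains(err.Error(), "itemNotFound") {
			return err
		}
		select {
		case <-time.After(5 * time.Second):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}

func main() {
	attempts := 0
	err := uploadWithRetry(context.Background(), 3, func() error {
		attempts++
		if attempts < 2 {
			return errors.New(`{"error":{"code":"itemNotFound","message":"The upload session was not found"}}`)
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```
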
Fixes #8545
As part of changes to the Google Photos APIs, the scopes rclone used
for accessing Google Photos have been removed.
This commit replaces the scopes with updated ones.
These aren't as powerful as the old scopes - this means rclone will
only be able to download photos it uploaded from March 31, 2025.
To use these new scopes, run `rclone reconnect yourgooglephotosremote:`
Fixes #8434
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Fixed the anchor link in the documentation that points to the SSL/TLS section.
This change ensures the link directs correctly to the intended section (#tls-ssl) instead of the incorrect #ssl-tls.
No functional code changes, documentation only.
- Add Andrew Kreimer to contributors
- Add Christian Richter to contributors
- Add Ed Craig-Wood to contributors
- Add Klaas Freitag to contributors
- Add Ralf Haferkamp to contributors
This change adds a new vendor called "infinitescale" to the webdav
backend. It enables support for the ownCloud Infinite Scale project
(https://github.com/owncloud/ocis) and implements its specific
chunked uploader, which follows the tus protocol (https://tus.io).
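
For context, one chunk of a tus-style upload is sent roughly like this (a sketch of the protocol step, not the backend's actual uploader):

```go
package tus

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"strconv"
)

// patchChunk sends one chunk of data to an existing tus upload URL.
// The caller tracks the offset and advances it by len(chunk) once the
// server acknowledges the PATCH with 204 No Content.
func patchChunk(ctx context.Context, client *http.Client, uploadURL string, offset int64, chunk []byte) error {
	req, err := http.NewRequestWithContext(ctx, "PATCH", uploadURL, bytes.NewReader(chunk))
	if err != nil {
		return err
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Offset", strconv.FormatInt(offset, 10))
	req.Header.Set("Content-Type", "application/offset+octet-stream")

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("tus PATCH at offset %d: unexpected status %s", offset, resp.Status)
	}
	return nil
}
```
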
Signed-off-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Klaas Freitag <klaas.freitag@kiteworks.com>
Co-authored-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Christian Richter <1058116+dragonchaser@users.noreply.github.com>
Co-authored-by: Ralf Haferkamp <r.haferkamp@opencloud.eu>