When copying Google Docs to Backblaze B2, errors like this would occur:
    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src
This was due to an oversight in
    8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum
which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
Cloudflare will normally decompress files with
`Content-Encoding: gzip` automatically when they are downloaded. This is
not what AWS S3 does, and it breaks the integration tests.
This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
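For background, the workaround boils down to uploading the object with
`Cache-Control: no-transform` so that Cloudflare serves it back byte-for-byte.
A minimal sketch of such an upload using the AWS SDK for Go v2 (not the actual
integration test code; the bucket, key, and payload here are made up):

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Credentials and region come from the environment; pointing the
	// client at the R2 endpoint is omitted for brevity.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Upload with Cache-Control: no-transform so Cloudflare does not
	// transparently decompress the object on download. The body stands
	// in for pre-compressed (gzip) test data.
	_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
		Bucket:          aws.String("my-test-bucket"), // hypothetical bucket
		Key:             aws.String("test.gz"),
		Body:            strings.NewReader("pre-gzipped payload goes here"),
		ContentEncoding: aws.String("gzip"),
		CacheControl:    aws.String("no-transform"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```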
SDK v2 conversion
Changes:
- `--s3-sts-endpoint` is no longer supported
- `--s3-use-unsigned-payload` added to control the use of trailer checksums (needed for non-AWS providers)
PikPak can accelerate file uploads by leveraging existing content
in its storage (identified by a custom hash called gcid).
Previously, file transfer statistics were incorrect for uploads
without outbound traffic, as the input stream remained unchanged.
This commit addresses the issue by:
* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads
with no incoming/outgoing traffic by marking them as Server Side Copies.
This change ensures correct statistics tracking and improves overall user experience.
This commit optimizes the PikPak upload process by pre-fetching the Global
Content Identifier (gcid) from the API server before calculating it locally.
Previously, a gcid required for uploads was calculated locally. This process was
resource-intensive and time-consuming. By first checking for a cached gcid
on the server, we can potentially avoid the local calculation entirely.
This significantly improves upload speed, especially for large files.
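A rough sketch of the resulting flow, with hypothetical helpers standing in
for the real API call and hashing code (the actual gcid is a block-wise hash,
so the SHA-1 below is only a placeholder):

```go
package main

import (
	"context"
	"crypto/sha1"
	"fmt"
	"io"
	"os"
)

// getCachedGcid stands in for the PikPak API call that returns a gcid the
// server has already cached for identical content, or "" if it is unknown.
// (Hypothetical helper - the real backend goes through its own API client.)
func getCachedGcid(ctx context.Context, name string, size int64) (string, error) {
	return "", nil // pretend the server has nothing cached
}

// calculateGcidLocally hashes the whole file - the slow path the optimization
// tries to avoid. (Placeholder: the real gcid is not a plain SHA-1.)
func calculateGcidLocally(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha1.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

// gcidForUpload prefers a server-cached gcid and only falls back to hashing
// the file locally when the server does not know the content yet.
func gcidForUpload(ctx context.Context, path string, size int64) (string, error) {
	if gcid, err := getCachedGcid(ctx, path, size); err == nil && gcid != "" {
		return gcid, nil
	}
	return calculateGcidLocally(path)
}

func main() {
	fmt.Println(gcidForUpload(context.Background(), "testfile.bin", 1<<20))
}
```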
Fixes #7824
Statements like `rclone copy <somewhere> .` will spontaneously miss
if `.` expands to a path with a fullwidth replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.
This change also:
- moves the in-use options (Opt) from vfsflags to vfscommon
- changes os.FileMode to vfscommon.FileMode in parameters
- reworks vfscommon.FileMode and adds tests
The SFTP protocol (and the golang sftp package) internally uses uint32 Unix
time for expressing mtime. Hence it is a waste of memory to store it as a
24-byte time.Time data structure in long-lived data structures. So even though
the golang sftp package uses time.Time as its external interface, we can
re-encode the value back to the original format and save memory.
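A sketch of the idea with stand-in types (not the actual sftp backend code):
keep the 4-byte wire value in long-lived structures and only expand it to
time.Time at the interface boundary.

```go
package main

import (
	"fmt"
	"time"
)

// dirEntry is a sketch of a long-lived cache entry: mtime is kept as the
// 4-byte uint32 Unix time the SFTP protocol uses, not a 24-byte time.Time.
type dirEntry struct {
	name  string
	size  int64
	mtime uint32 // seconds since the Unix epoch, as sent on the wire
}

// setModTime re-encodes a time.Time received from the sftp package.
func (e *dirEntry) setModTime(t time.Time) {
	e.mtime = uint32(t.Unix())
}

// modTime expands the stored value back to time.Time for callers.
func (e *dirEntry) modTime() time.Time {
	return time.Unix(int64(e.mtime), 0)
}

func main() {
	e := &dirEntry{name: "file.txt"}
	e.setModTime(time.Now())
	fmt.Println(e.name, e.modTime())
}
```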
Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.
This change removes the redundant `readMetaData()` call, improving efficiency.
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container. (Note that
python-swiftclient always fetches unless the current page is empty.)
This commit adds a pair of new Swift backend settings to handle this.
Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.
Alternatively, set `partial_page_fetch_threshold` to an integer
percentage. In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.
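Roughly, the listing loop's decision becomes the following (illustrative
sketch; the real backend reads these values from its options):

```go
package swiftsketch

// shouldFetchNextPage decides whether to request another page of a container
// listing. Illustrative sketch only - not the backend's actual code.
func shouldFetchNextPage(got, limit int, fetchUntilEmptyPage bool, partialPageFetchThreshold int) bool {
	if fetchUntilEmptyPage {
		// Keep fetching until a page comes back with no items at all.
		return got > 0
	}
	if partialPageFetchThreshold > 0 {
		// Fetch again while the current page is at least this percentage
		// as full as the requested limit.
		return got*100 >= limit*partialPageFetchThreshold
	}
	// Default Swift behaviour: a short page means the listing is complete.
	return got >= limit
}
```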
Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html
PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167
Fixes #7924
Similar to uploads implemented in commit ce5024bf33,
this change ensures most asynchronous file operations (copy, move, delete,
purge, and cleanup) complete before proceeding with subsequent actions.
This reduces the risk of data inconsistencies and improves overall reliability.
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html
Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (e.g. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.
After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.
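In other words, the existence check now treats both status codes as "not
found". A minimal sketch of that predicate (illustrative, not the backend's
actual helper):

```go
package s3sketch

import "net/http"

// objectNotFound reports whether a HEAD-style response status should be
// treated as "object does not exist". Requesting a delete marker by versionId
// returns 405 Method Not Allowed rather than 404, so both count as not found.
func objectNotFound(statusCode int) bool {
	return statusCode == http.StatusNotFound || statusCode == http.StatusMethodNotAllowed
}
```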
See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
Previously, the fixed 10MB chunk size could lead to exceeding the maximum
allowed number of parts for very large files. Similar to other backends, options for
chunk size and upload concurrency are now user-configurable. Additionally,
the internal library is used to automatically adjust chunk size to prevent exceeding
upload part limitations.
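The idea behind the automatic adjustment, sketched with made-up constants
(the real part limit and defaults depend on the provider and the upload
library):

```go
package main

import "fmt"

const (
	defaultChunkSize = 10 * 1024 * 1024 // 10 MiB, the previous fixed size
	maxUploadParts   = 10000            // illustrative per-upload part limit
)

// chunkSizeFor grows the chunk size just enough that a file of the given
// size fits within maxUploadParts parts.
func chunkSizeFor(fileSize int64) int64 {
	chunk := int64(defaultChunkSize)
	if fileSize > chunk*maxUploadParts {
		// Round up so fileSize/chunk never exceeds maxUploadParts.
		chunk = (fileSize + maxUploadParts - 1) / maxUploadParts
	}
	return chunk
}

func main() {
	fmt.Println(chunkSizeFor(5 << 40)) // a 5 TiB file needs larger chunks
}
```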
Fixes #7850
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.
(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change, who knows, maybe somebody relies on it.)
`remote` has been converted ToStandardPath a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
This attempts to resolve upload conflicts by implementing cancel/cleanup on failed
uploads
* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation
Fixes #7787
Before this change some of the integration tests were producing this error
panic: runtime error: invalid memory address or nil pointer dereference
This was caused by an `fs.Object` whose type (`*Object`) was
not `nil`, but whose value within was `nil`. These do not compare as
`nil`, leading to the panic.
This is a classic Go gotcha: https://go.dev/doc/faq#nil_error
This was easily fixed by changing the type of one function to return
`fs.Object` instead of `*Object`.
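The gotcha in miniature (self-contained stand-in types, not rclone code): a
typed nil pointer stored in an interface does not compare equal to nil.

```go
package main

import "fmt"

// Object stands in for a backend's concrete object type, and Lister for the
// fs.Object interface (both illustrative, not rclone's real types).
type Object struct{}

func (o *Object) String() string { return "object" }

type Lister interface{ String() string }

// findConcrete returns a typed nil pointer - the bug-prone signature.
func findConcrete() *Object { return nil }

// findInterface returns the interface type - the fixed signature.
func findInterface() Lister { return nil }

func main() {
	var obj Lister = findConcrete()
	// The interface now holds (type *Object, value nil), which is NOT equal
	// to a nil interface, so nil checks pass it through and later use panics.
	fmt.Println(obj == nil)             // false
	fmt.Println(findInterface() == nil) // true - a genuinely nil interface
}
```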
Before this change we waited a minimum of 10ms between API calls for
mailru.
The tests no longer pass at this rate, so this increases the time to
100ms.
See #7768
For example, using
    --onedrive-metadata-permissions read,write,failok
will allow permissions to be read and written, but if the writing
fails, only an ERROR will be written to the log and the transfer
won't fail.
For example, using
    --drive-metadata-permissions read,write,failok
will allow metadata to be read and written, but if the writing fails,
only an ERROR will be written to the log and the transfer won't
fail.
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. For users subject to a download limit, this
could cause subsequent operations like uploads and moves to fail.
This PR optimizes the use of `getFile()` by only calling it
when strictly necessary.
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and in a directory in the root
(`.file-segments`).
By default, the `.file-segments` mode will be auto-selected if
`auth_url`s that require it are detected.
If the `.file-segments` mode is in use then rclone will omit that
directory from listings.
See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
Before this change when setting permissions from the metadata rclone
would stop on the first error.
This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
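A sketch of the pattern using the standard library's errors.Join; the helper
names are illustrative, not the backend's actual functions:

```go
package main

import (
	"errors"
	"fmt"
)

// setPermission stands in for one API call that applies a single permission.
func setPermission(p string) error {
	if p == "bad" {
		return fmt.Errorf("setting permission %q failed", p)
	}
	return nil
}

// setAllPermissions tries every permission and returns a combined error
// summary instead of stopping at the first failure.
func setAllPermissions(perms []string) error {
	var errs []error
	for _, p := range perms {
		if err := setPermission(p); err != nil {
			errs = append(errs, err) // remember the failure and keep going
		}
	}
	return errors.Join(errs...) // nil if every permission was set
}

func main() {
	fmt.Println(setAllPermissions([]string{"read", "bad", "write"}))
}
```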
Before this change, chunker would erroneously consider two different paths to be
equal if, due to special characters, they normalized to equal-folding strings in
Standard Encoding, but not otherwise. This caused base objects to get moved when
they should not have been. This change fixes the issue, which was discovered on
the bisync integration tests.
Ideally it should also be fixed when the base Fs is non-local, but there's not an
easy way at the moment to reference the wrapped Fs's encoding, at least without
breaking encapsulation.
Before this change, calling NewFs on a composite multi-chunk file with
--chunker-meta-format "none"
would fail due to f.base pointing to the wrong Fs. This change fixes the issue,
which was discovered on the bisync integration tests.
Before this change when setting permissions from the metadata rclone
would stop on the first error.
This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
In this commit we merged an unreliable test:
    e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst
It is a good idea, but very hard to implement so that it always works.
Hence this disables it for the moment.
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.