`remote` has been converted with `ToStandardPath` a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
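A self-contained sketch of the idea (the helper here is hypothetical, standing in for the backend encoding conversion):

```go
package main

import "strings"

// toStandardPath is a hypothetical stand-in for the encoding conversion
// applied to `remote` above; it escapes a Line Feed in path names.
func toStandardPath(p string) string {
	return strings.ReplaceAll(p, "\n", "%0A")
}

func main() {
	remote := toStandardPath("dir\nname/file.txt") // converted a few lines above
	directory := toStandardPath("dir\nname")       // the fix: convert this side too

	// with both sides converted the same way, the comparison behaves as expected
	_ = strings.HasPrefix(remote, directory+"/")
}
```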
This attempts to resolve upload conflicts by implementing cancel/cleanup on
failed uploads (see the sketch below the list):
* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation
Fixes #7787
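A rough sketch of the cancel/cleanup pattern (names are hypothetical, not the backend's real API):

```go
// hypothetical upload handle and client, for illustration only
up, err := client.StartUpload(ctx, name)
if err != nil {
	return err
}
uploaded := false
defer func() {
	// guard the handle before cancelling – cancelling a nil handle
	// was the source of the "panic on defer cancel upload"
	if !uploaded && up != nil {
		_ = up.Cancel(ctx) // best-effort cleanup of the partial upload
	}
}()
if err := up.Write(ctx, data); err != nil {
	return err // the deferred cancel cleans up
}
uploaded = true
```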
Before this change some of the integration tests were producing this error:

```
panic: runtime error: invalid memory address or nil pointer dereference
```
This was caused by an `fs.Object` of which the type (`*Object`) was
not `nil`, but the value within was `nil`. These do not compare as
`nil` leading to the panic.
This is a classic Go gotcha: https://go.dev/doc/faq#nil_error
This was easily fixed by changing the type of one function to return
`fs.Object` instead of `*Object`.
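A self-contained illustration of the gotcha (generic names, not the backend's actual types):

```go
package main

import "fmt"

type Object struct{}      // concrete backend type, like the backend's *Object
type FsObject interface{} // stands in for the fs.Object interface

// bad returns the concrete type: assigning the result to an FsObject
// produces a non-nil interface value wrapping a nil pointer.
func bad() *Object { return nil }

// good returns the interface type, so a nil stays nil.
func good() FsObject { return nil }

func main() {
	var o FsObject = bad()
	fmt.Println(o == nil)      // false – the classic gotcha
	fmt.Println(good() == nil) // true – after the fix
}
```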
Before this change we waited a minimum of 10ms between API calls for
mailru.
The tests no longer pass at this rate, so this increases the time to
100ms.
See #7768
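In backend code this is typically a constant fed into the pacer at construction time; roughly, following the usual rclone backend pattern with the fs and lib/pacer packages (exact names may differ):

```go
const minSleep = 100 * time.Millisecond // previously 10 * time.Millisecond

p := fs.NewPacer(ctx, pacer.NewDefault(
	pacer.MinSleep(minSleep),       // never call the API faster than this
	pacer.MaxSleep(2*time.Second),  // back off up to this on retries
	pacer.DecayConstant(2),
))
```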
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.
This fixes the problem by making the need to copy metadata explicit,
rather than inferring it from whether a value is present or not.
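The shape of the fix, as a hypothetical sketch (not the actual rclone code):

```go
// before (hypothetical): the intent to copy metadata was implied by
// the value being non-nil, so SFTP destinations were asked to do
// something they can't
if meta != nil {
	writeMetadata(dst, meta)
}

// after (hypothetical): the intent is carried explicitly, so backends
// without metadata support are never asked to copy it
if wantMetadata && dst.Features().WriteMetadata {
	writeMetadata(dst, meta)
}
```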
In commit 6a0a54ab97 (operations: fix missing metadata for multipart
transfers to local disk) we broke the setting of modification times
when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.
This was fixed by setting the "mtime" in the metadata if it was
missing.
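The fix is roughly this (sketch, assuming rclone's convention of carrying the modification time in the "mtime" metadata key as RFC 3339):

```go
// make sure "mtime" is present even when the source backend supplied
// no metadata of its own, so the destination can still set the time
if _, found := meta["mtime"]; !found {
	meta["mtime"] = src.ModTime(ctx).Format(time.RFC3339Nano)
}
```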
On Windows, this change includes the `FILE_FLAG_BACKUP_SEMANTICS` in
all calls to `CreateFile`.
Adding this flag is useful when rclone is running within a
security context that has `SeBackupPrivilege` and/or `SeRestorePrivilege`
token privileges enabled.
Without this flag, rclone cannot properly leverage special security
groups such as Backup Operators, which possess these privileges.
See: https://forum.rclone.org/t/rclone-sebackupprivilege-file-flag-backup-semantics/45339
See: https://github.com/rclone/rclone/pull/7877.
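A minimal sketch of such a call using golang.org/x/sys/windows (simplified, not rclone's exact call site):

```go
import "golang.org/x/sys/windows"

pathp, err := windows.UTF16PtrFromString(path)
if err != nil {
	return err
}
handle, err := windows.CreateFile(
	pathp,
	windows.GENERIC_READ,
	windows.FILE_SHARE_READ|windows.FILE_SHARE_WRITE|windows.FILE_SHARE_DELETE,
	nil,
	windows.OPEN_EXISTING,
	// lets SeBackupPrivilege/SeRestorePrivilege take effect for this
	// handle (and also permits opening directories)
	windows.FILE_FLAG_BACKUP_SEMANTICS,
	0,
)
```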
Mentioned the possibility of skipping the symlink for new versions of
git-annex. (Probably deserves a test once the new git-annex trickles
down to CI platforms.)
I stopped trying to explain each config parameter here. Rather, the doc
now shows the user how to ask git-annex to describe config parameters
with `--whatelse`.
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.
This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream
This also meant that these were being logged as two transfers which
was a little strange.
This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend (see the sketch below).
This should have no effect on reliability of the transfers as we retry
Put if possible.
This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.
See #7848
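Roughly, the streamed-to-disk path now looks like this (sketch, not the exact code):

```go
// the stream has been spooled to tmpFile; upload it directly with the
// backend's Put so it counts as one transfer and the metadata mapper
// runs only once
in, err := os.Open(tmpFile)
if err != nil {
	return nil, err
}
defer in.Close()
obj, err := fdst.Put(ctx, in, objInfo) // Put is retried where possible
```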
Before this change, for files of unknown length (like Google
Documents), the SrcFsType would be set to "memoryFs".
This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.
Fixes #7848
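A hypothetical signature for that variant, showing the extra source Fs parameter:

```go
// hypothetical: like Rcat, but carries the source Fs so the metadata
// mapper sees the real SrcFsType rather than "memoryFs"
func rcatSrc(ctx context.Context, fdst fs.Fs, dstFileName string,
	in io.ReadCloser, modTime time.Time, meta fs.Metadata,
	fsrc fs.Fs) (fs.Object, error)
```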
For example using
--onedrive-metadata-permissions read,write,failok
will allow permissions to be read and written, but if the writing
fails, only an ERROR will be written in the log and the transfer
won't fail.
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used an
object.NewStaticObjectInfo with a `nil` Fs to upload the file, which
caused the crash in the metadata mapper code.
This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.
Fixes #7845
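Sketch of the fixed construction (assuming rclone's fs/object package):

```go
// never pass a nil Fs: object.MemoryFs matches what Rcat already uses
// for sources smaller than the --streaming-upload-cutoff threshold
info := object.NewStaticObjectInfo(dstFileName, modTime, -1, false, nil, object.MemoryFs)
```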
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.
This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.
This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.
It also adds a test to check metadata is preserved when doing
multipart transfers.
Fixes #7424
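The post-download step is roughly this (sketch, assuming the optional SetMetadataer interface on the destination object):

```go
// apply the metadata after the multipart download has completed; if
// the object can't do it, log an ERROR but count the copy as done
if meta != nil {
	if do, ok := obj.(fs.SetMetadataer); ok {
		if err := do.SetMetadata(ctx, meta); err != nil {
			return err
		}
	} else {
		fs.Errorf(obj, "can't set metadata as SetMetadata isn't implemented")
	}
}
```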
For example using
--drive-metadata-permissions read,write,failok
will allow metadata to be read and written, but if the writing fails,
only an ERROR will be written in the log and the transfer won't
fail.
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:
```
=== RUN TestEndToEnd
e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```
Issue #7625