This implements --auth-proxy for serve s3. In addition it:
* adds listbuckets tests with and without authProxy
* uses the auth proxy test framework
* servetest: implements a workaround for #7454
* updates github.com/rclone/gofakes3 to fix a race condition
This commit optimizes the PikPak upload process by pre-fetching the Global
Content Identifier (gcid) from the API server before calculating it locally.
Previously, the gcid required for uploads was always calculated locally, a
process that was resource-intensive and time-consuming. By first checking for
a cached gcid on the server, we can potentially avoid the local calculation
entirely.
This significantly improves upload speed, especially for large files.
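A rough sketch of the new flow (not the actual backend code; both callbacks are hypothetical stand-ins for the real API and hashing calls):

```go
package pikpaksketch

import "context"

// getGcid asks the server for a cached gcid first and only falls back to the
// expensive local hash when nothing is cached. fetchCached and calcLocal are
// hypothetical stand-ins for the real backend calls.
func getGcid(ctx context.Context,
	fetchCached func(context.Context) (string, error),
	calcLocal func(context.Context) (string, error)) (string, error) {
	if gcid, err := fetchCached(ctx); err == nil && gcid != "" {
		return gcid, nil // server already knows the content - skip local hashing
	}
	return calcLocal(ctx) // previous behaviour: read and hash the file locally
}
```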
After merging this commit
56caab2033 b2: Include custom upload headers in large file info
the build failed because a change had been missed. Should have rebased
before merging!
Fixes #7824
Statements like rclone copy <somewhere> . will unexpectedly fail
if . expands to a path containing a Full Width replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.
The VFS used the hardcoded OS encoding when creating temp files,
but decoded them with the encoding for the local filesystem (--local-encoding)
when copying them to the remote.
This caused failures when filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.
The bug was the default config overriding the main config; it was fixed
by noting when the defaults had actually changed.
This also:
- moves the in-use options (Opt) from vfsflags to vfscommon
- changes os.FileMode to vfscommon.FileMode in parameters
- reworks vfscommon.FileMode and adds tests (a sketch of the idea follows this list)
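A minimal sketch of what a flag-friendly FileMode wrapper could look like; this is an illustration only, and the actual vfscommon.FileMode may differ:

```go
package vfssketch

import (
	"fmt"
	"os"
	"strconv"
)

// FileMode is a hypothetical wrapper around os.FileMode that can be set from
// an octal string on the command line (flag.Value / pflag.Value style).
type FileMode os.FileMode

// String formats the mode as octal, e.g. "0755".
func (m FileMode) String() string {
	return fmt.Sprintf("%03o", uint32(m))
}

// Set parses an octal string such as "644" or "0755".
func (m *FileMode) Set(s string) error {
	n, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return fmt.Errorf("bad file mode %q: %w", s, err)
	}
	*m = FileMode(n)
	return nil
}

// Type describes the value to pflag-style flag packages.
func (m FileMode) Type() string {
	return "FileMode"
}
```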
Apparently fmt.Sscanln doesn't parse bools properly, and this isn't
likely to be fixed by the Go team, who regard sscanf as a mistake.
This change only uses sscan for integers and uses the correct parsing
routine for everything else.
It also implements parsing of time.Duration.
See: https://github.com/golang/go/issues/43306
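A rough sketch of the approach, not the actual rclone code: use the dedicated parsers for bool and time.Duration and keep Sscanln only for integers:

```go
package parsesketch

import (
	"fmt"
	"strconv"
	"time"
)

// parse routes each supported destination type to a parser that actually
// handles it correctly; fmt.Sscanln is only trusted with integers.
func parse(s string, dst any) error {
	switch d := dst.(type) {
	case *bool:
		v, err := strconv.ParseBool(s)
		if err != nil {
			return err
		}
		*d = v
		return nil
	case *time.Duration:
		v, err := time.ParseDuration(s)
		if err != nil {
			return err
		}
		*d = v
		return nil
	case *int, *int64, *uint64:
		// Integers are the one case where Sscanln behaves as expected.
		_, err := fmt.Sscanln(s, d)
		return err
	default:
		return fmt.Errorf("unsupported type %T", dst)
	}
}
```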
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a
24-byte time.Time in long-lived data structures. So even though the golang
sftp package uses time.Time as its external interface, we can re-encode the
value back to the original format and save memory.
Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
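A minimal sketch of the idea, using a hypothetical struct rather than the real backend type:

```go
package sftpsketch

import "time"

// fileInfo keeps the mtime as the 4-byte uint32 unix time the SFTP protocol
// uses on the wire, instead of a 24-byte time.Time, in long-lived structures.
type fileInfo struct {
	name  string
	size  int64
	mtime uint32 // seconds since the Unix epoch, as sent by the server
}

// ModTime re-encodes the stored uint32 into the time.Time expected externally.
func (f *fileInfo) ModTime() time.Time {
	return time.Unix(int64(f.mtime), 0)
}

// setModTime converts an external time.Time back into the compact form.
func (f *fileInfo) setModTime(t time.Time) {
	f.mtime = uint32(t.Unix())
}
```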
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.
This change removes the redundant `readMetaData()` call, improving efficiency.
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
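A rough sketch of the fix with hypothetical names (`Object`, `readMetaData`, `renameByID`); the point is only the ordering, reading metadata before renaming:

```go
package renamesketch

import (
	"context"
	"errors"
)

// Object is a hypothetical stand-in for the backend object type; only the
// pieces needed to show the fix are included.
type Object struct {
	id           string
	readMetaData func(ctx context.Context) (id string, err error) // fetch metadata from the API
	renameByID   func(ctx context.Context, id, newName string) error
}

// rename reads the metadata first when the ID is still empty (as it can be
// for a freshly copied object) and only then issues the rename call.
func (o *Object) rename(ctx context.Context, newName string) error {
	if o.id == "" {
		id, err := o.readMetaData(ctx)
		if err != nil {
			return err
		}
		if id == "" {
			return errors.New("object ID still empty after reading metadata")
		}
		o.id = id
	}
	return o.renameByID(ctx, o.id, newName)
}
```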