This was caused by using the sftp.File.Read method which resets the
streaming window after each call. Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem, bringing the speed up to that of the
sftp binary.
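A minimal sketch of the io.Pipe approach, assuming the
github.com/pkg/sftp client; openStream is a hypothetical helper, not
rclone's actual code.

```go
package sftpstream

import (
	"io"

	"github.com/pkg/sftp"
)

// openStream returns a reader that streams remoteFile from the sftp
// connection using WriteTo rather than repeated Read calls.
func openStream(client *sftp.Client, remoteFile string) (io.ReadCloser, error) {
	f, err := client.Open(remoteFile)
	if err != nil {
		return nil, err
	}
	pr, pw := io.Pipe()
	go func() {
		// WriteTo keeps the sftp streaming window open for the whole
		// transfer; any error is propagated to the reader side.
		_, err := f.WriteTo(pw)
		_ = f.Close()
		pw.CloseWithError(err)
	}()
	return pr, nil
}
```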
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if they differ.
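A hedged sketch of the check, using hypothetical names rather than the
crypt backend's real ones: hash the encrypted stream while it is
uploaded and compare the result with the checksum the remote reports
for the stored object.

```go
package cryptcheck

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
)

// checkUpload copies the encrypted stream to dst while hashing it, then
// compares the result with remoteSum, the checksum of the object as
// reported by the remote.
func checkUpload(dst io.Writer, encrypted io.Reader, remoteSum string) error {
	h := md5.New()
	if _, err := io.Copy(dst, io.TeeReader(encrypted, h)); err != nil {
		return err
	}
	localSum := hex.EncodeToString(h.Sum(nil))
	if localSum != remoteSum {
		return fmt.Errorf("corrupted on transfer: md5 checksums differ %q vs %q", localSum, remoteSum)
	}
	return nil
}
```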
When running `rclone mount`, there were 2 signal handlers for `os.Interrupt`.
These handlers ran concurrently and in some cases caused either the
unmount or `atexit.Run()` to be skipped.
In addition, `atexit.Run()` is now called in `resolveExitCode` to ensure
cleanup on errors.
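A rough sketch of the consolidated handler; unmount and atexitRun stand
in for the real cleanup functions and the wiring is illustrative, not
rclone's actual code.

```go
package mountsig

import (
	"os"
	"os/signal"
)

// handleInterrupt registers the single os.Interrupt handler.
func handleInterrupt(unmount func() error, atexitRun func()) {
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, os.Interrupt)
	go func() {
		<-sigChan
		// One handler means the unmount and the atexit cleanup both
		// run, in a well defined order, instead of racing each other.
		_ = unmount()
		atexitRun()
	}()
}
```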
* Fix errcheck and golint warnings
* Remove unused constants and fix comments
* Parse error responses properly
* Fix Open with RangeOption
* Fix Move, Copy and DirMove
* Implement DirCacheFlush
* Check interfaces are correct
* Remove debugs and update overview
* Correct feature flags
* Pare replacement characters down to the minimum set
* Add to the integration tests
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
* Update file size after upload
* Add Open seek
* Set private permissions for new folders and uploaded files
* Add docs
* Update List function
* Fix UserSessionInfo struct
* Fix socket leaks
* Don’t close resp.Body in Open method
* Get hash when listing files
The official drive APIs seem to have trouble downloading large
documents sometimes.
This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
Google Cloud Storage doesn't normally need retries, however certain
operations (e.g. bucket creation and removal) are rate limited and do
generate 429 errors.
Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.
After this change we low-level retry all operations using the same
exponential backoff strategy as the Google Drive backend.
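A simplified sketch of a low level retry; the real code uses rclone's
pacer, so the names, retry condition and backoff constants here are
illustrative only.

```go
package gcsretry

import (
	"net/http"
	"time"
)

// callWithRetry retries fn with doubling sleeps while it fails or keeps
// returning HTTP 429 (rate limited), up to maxTries attempts.
func callWithRetry(maxTries int, fn func() (*http.Response, error)) (*http.Response, error) {
	sleep := 10 * time.Millisecond
	var resp *http.Response
	var err error
	for try := 0; try < maxTries; try++ {
		resp, err = fn()
		if err == nil && (resp == nil || resp.StatusCode != 429) {
			return resp, nil
		}
		time.Sleep(sleep)
		sleep *= 2 // exponential backoff, as in the drive backend
	}
	return resp, err
}
```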
Before this change rclone would inefficiently read all the files in the
source directory when copying or moving a single file, which confused
users with log messages about files that weren't part of the sync.
After this change the copy and move commands use the infrastructure
built for the copyto and moveto commands for single file copies and
moves.
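A hedged sketch of the decision only; the copier types and parameters
are hypothetical stand-ins for the copyto/moveto infrastructure, not
rclone's real functions.

```go
package singlefile

// fileCopier copies a single named file, dirCopier syncs a whole
// directory; both are placeholders for the real implementations.
type fileCopier func(dstRemote, srcParent, srcLeaf string) error
type dirCopier func(dstRemote, srcParent string) error

// copyCommand copies srcLeaf from srcParent when a single file was
// given, and falls back to a full directory copy otherwise, so the
// source directory is never listed for a single file copy.
func copyCommand(dstRemote, srcParent, srcLeaf string, copyFile fileCopier, copyDir dirCopier) error {
	if srcLeaf != "" {
		return copyFile(dstRemote, srcParent, srcLeaf)
	}
	return copyDir(dstRemote, srcParent)
}
```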
Before this change we would unconditionally set the OSXFUSE options
noappledouble and noapplexattr.
However the noapplexattr option caused problems with copies in the
Finder.
Now noapplexattr defaults to false so we don't add the option by
default, and the user can override the defaults using the
--noappledouble and --noapplexattr flags.
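A sketch of how the defaults and flags might translate into bazil fuse
mount options; the flag plumbing is illustrative, though
fuse.NoAppleDouble and fuse.NoAppleXattr are real bazil options.

```go
package mountopts

import (
	"bazil.org/fuse"
)

// mountOptions only adds the OSXFUSE options the user asked for:
// noappledouble still defaults to true, noapplexattr now defaults to
// false.
func mountOptions(noAppleDouble, noAppleXattr bool) []fuse.MountOption {
	opts := []fuse.MountOption{}
	if noAppleDouble {
		opts = append(opts, fuse.NoAppleDouble())
	}
	if noAppleXattr {
		opts = append(opts, fuse.NoAppleXattr())
	}
	return opts
}
```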
Before this change rclone would set the volume name from remote:path
as normal. However this contains `:` and `/` characters which make it
difficult to use on macOS.
Now rclone removes those special characters and replaces them with
spaces. It also allows the volume name to be set with the --volname
flag.
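A minimal sketch of the volume name cleanup, assuming a hypothetical
volumeName helper rather than the exact implementation.

```go
package volname

import "strings"

// volumeName derives a volume name from the remote:path string,
// replacing the characters macOS dislikes with spaces, unless the user
// supplied --volname themselves.
func volumeName(remoteString, volnameFlag string) string {
	if volnameFlag != "" {
		return volnameFlag
	}
	name := strings.NewReplacer(":", " ", "/", " ").Replace(remoteString)
	return strings.TrimSpace(name)
}
```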
By default bazil fuse will return ENOTSUPP for these. However if we
return ENOSYS then OSXFUSE (at least) will never call them again,
saving round trips through fuse.
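A sketch of the idea, assuming bazil's fuse.ENOSYS errno and its
NodeGetxattrer/NodeSetxattrer interfaces; the Node type is a
placeholder for rclone's real directory and file nodes.

```go
package xattrs

import (
	"context"

	"bazil.org/fuse"
	"bazil.org/fuse/fs"
)

// Node is a placeholder fuse node standing in for rclone's Dir/File.
type Node struct{}

// Getxattr is not supported; returning ENOSYS tells OSXFUSE not to ask
// again, saving a round trip per lookup.
func (n *Node) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error {
	return fuse.ENOSYS
}

// Setxattr is not supported either.
func (n *Node) Setxattr(ctx context.Context, req *fuse.SetxattrRequest) error {
	return fuse.ENOSYS
}

var (
	_ fs.NodeGetxattrer = (*Node)(nil)
	_ fs.NodeSetxattrer = (*Node)(nil)
)
```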