This implementation can hopefully handle all error responses from the
OneDrive for Business authentication.
I have only tested it with the "domain in unmanaged state" error.
This was caused by using the sftp.File.Read method, which resets the
streaming window after each call. Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem, bringing the speed up to that of the
sftp binary.
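As a rough illustration (not the actual backend code), this is the shape of the change, assuming the github.com/pkg/sftp package; the function name is made up:

```go
package sftpstream

import (
	"io"

	"github.com/pkg/sftp"
)

// newObjectReader streams the remote file through an io.Pipe using
// WriteTo, which keeps the sftp streaming window open between writes,
// instead of repeated Read calls which reset it.
func newObjectReader(f *sftp.File) io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		// WriteTo drives the transfer with a persistent window.
		_, err := f.WriteTo(pw)
		// Pass any error on to the reading side and close the pipe.
		pw.CloseWithError(err)
	}()
	return pr
}
```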
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if it is different.
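A minimal sketch of the idea using crypto/md5 and io.TeeReader directly rather than the crypt backend's own hashing plumbing; the function and parameter names are illustrative:

```go
package cryptcheck

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
)

// copyAndCheck streams the encrypted data from in to out while hashing it,
// then compares the result with the checksum the remote reports for the
// stored object, returning an error if they differ.
func copyAndCheck(out io.Writer, in io.Reader, remoteMD5 string) error {
	hasher := md5.New()
	if _, err := io.Copy(out, io.TeeReader(in, hasher)); err != nil {
		return err
	}
	localMD5 := hex.EncodeToString(hasher.Sum(nil))
	if remoteMD5 != "" && localMD5 != remoteMD5 {
		return fmt.Errorf("corrupted on transfer: md5 checksums differ %q vs %q", localMD5, remoteMD5)
	}
	return nil
}
```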
* Fix errcheck and golint warnings
* Remove unused constants and fix comments
* Parse error responses properly
* Fix Open with RangeOption
* Fix Move, Copy and DirMove
* Implement DirCacheFlush
* Check interfaces are correct
* Remove debugs and update overview
* Correct feature flags
* Pare replacement characters down to the minimum set
* Add to the integration tests
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
* Update file size after upload
* Add Open seek
* Set private permission for new folder and uploaded file
* Add docs
* Update List function
* Fix UserSessionInfo struct
* Fix socket leaks
* Don’t close resp.Body in Open method
* Get hash when listing files
The official drive APIs seem to have trouble downloading large
documents sometimes.
This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
Google cloud storage doesn't normally need retries, however certain
things (eg bucket creation and removal) are rate limited and do
generate 429 errors.
Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.
After this change we low level retry all operations using the same
exponential backoff strategy as used in the Google Drive backend.
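A simplified sketch of a low level retry with exponential backoff, not rclone's actual pacer; the assumption that the 429 surfaces as a googleapi.Error is part of the sketch:

```go
package gcsretry

import (
	"net/http"
	"time"

	"google.golang.org/api/googleapi"
)

// shouldRetry reports whether an error looks like a rate limit (429) or a
// transient server error worth a low level retry.
func shouldRetry(err error) bool {
	if gerr, ok := err.(*googleapi.Error); ok {
		return gerr.Code == http.StatusTooManyRequests || gerr.Code/100 == 5
	}
	return false
}

// callWithBackoff retries fn with exponential backoff while shouldRetry
// says the error is retryable.
func callWithBackoff(maxTries int, fn func() error) error {
	sleep := 250 * time.Millisecond
	var err error
	for try := 1; try <= maxTries; try++ {
		err = fn()
		if err == nil || !shouldRetry(err) {
			return err
		}
		time.Sleep(sleep)
		sleep *= 2
	}
	return err
}
```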
Before this fix team drives would return the drive quota which is
incorrect and misleading.
Team drives don't appear to have an API for reading the bytes used or
the quota so we now return that the quota and usage are unknown.
If md5sum/sha1sum fails we debug what it output on stderr and return
an empty hash indicating we didn't have a hash, rather than
hash.ErrUnsupported indicating that we don't support this hash type.
This fixes lots of ERROR messages for sftp with a Synology NAS which,
while it supports md5sum, has different SFTP and SSH paths, so md5sum
doesn't work.
We also stop disabling md5sum/sha1sum on errors since typically Hashes
is only checked at the start of a sync run and isn't expected to
change dynamically.
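A minimal sketch of the error handling described above; how the md5sum command is actually run over SSH is left out, and the function name is made up:

```go
package sftphash

import (
	"log"
	"strings"
)

// md5FromCommand turns the result of running "md5sum <path>" on the remote
// into a hash string. On failure it logs what the command printed on stderr
// and returns "" (no hash for this file) rather than an unsupported-hash error.
func md5FromCommand(path, stdout, stderr string, runErr error) (string, error) {
	if runErr != nil {
		log.Printf("md5sum failed for %q: %v: %s", path, runErr, stderr)
		return "", nil
	}
	// md5sum prints "<hash>  <path>" - the hash is the first field.
	fields := strings.Fields(stdout)
	if len(fields) == 0 {
		return "", nil
	}
	return fields[0], nil
}
```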
This enables the use of the SharePoint webdav endpoint provided by
OneDrive for Business or Office365 Education Accounts. It enables
unverified accounts to be accessed with rclone via webdav as it isn't
possible through the normal onedrive backend.
This integrates the https://github.com/hensur/onedrive-cookie-test
package to fetch the required cookies to authorize against the
SharePoint webdav endpoint.
Very large directories can have their sizes returned as floating point
numbers, eg `1.0034576985781e+14` from the box API.
Before this change this would fail to parse as an int64.
This change parses the size as a float64 instead, which will be
perfectly accurate for sizes up to 2**53, which is about 9 PB.
It is unknown whether box themselves use a float64 as an intermediate
representation in the API or not - it seems likely.
Fixes #2261
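A minimal sketch of the parsing change, assuming the size arrives as a JSON number; the struct here is illustrative, not box's full item type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// item is a cut-down listing entry: the size can come back in exponential
// notation for very large directories, so decode it as a float64.
type item struct {
	Size float64 `json:"size"`
}

func main() {
	var i item
	if err := json.Unmarshal([]byte(`{"size": 1.0034576985781e+14}`), &i); err != nil {
		panic(err)
	}
	// Exact for sizes up to 2**53 bytes (about 9 PB).
	fmt.Println(int64(i.Size))
}
```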
* Implement about for:
* local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
* Implement `--json` and `--full` flags for `rclone about`
* Change About interface to return a Usage structure (see the sketch below)
* Remove operations.About as it is too thin an interface
* Implement Integration test
Relates to #1138 and #1564
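An illustrative sketch of what an About interface returning a Usage structure could look like; the field names are assumptions, not necessarily rclone's exact definitions:

```go
package about

// Usage holds whatever quota information a remote can supply. Pointer
// fields allow "unknown" to be represented as nil (eg team drives, which
// have no API for reading usage).
type Usage struct {
	Total   *int64 // total bytes available
	Used    *int64 // bytes in use
	Trashed *int64 // bytes in the trash
	Free    *int64 // bytes free
	Objects *int64 // number of objects
}

// Abouter is implemented by backends which can report their usage.
type Abouter interface {
	About() (*Usage, error)
}
```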
This bug was introduced by the v3 API conversion in 07f20dd1fd.
The problem was that dircache.FindPath doesn't work for the root directory.
This adds an internal error for dircache.FindPath being called with
the root directory. This produces a failing test, which the fix to the
drive backend then resolves.
This also improves the DirCache integration test.
These are AWS, Ceph, Dreamhost, IBM COS S3, Minio, Wasabi and Other.
This configures endpoints where known and, where possible, makes sure
config options don't appear where they aren't valid.
This introduces a method of making provider specific configuration
within a remote. This is useful particularly in s3.
This commit does the basic configuration in S3 for IBM COS.
Before this change we lowercased the dropbox root directory. This was
likely a leftover from when we used to build a dictionary to translate
the cases of dropbox files. Now with the v2 API we can rely on
dropbox to do that for us, so we no longer need to lowercase the root.
This fixes issues using crypt with name obfuscation on dropbox.
Before this change asynchronous closes in cmount could cause sharing
violations under Windows on Remove which manifest themselves
frequently as test failures.
This change lets the Remove be retried on a sharing violation under
Windows.
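A simplified sketch of retrying a Remove on a sharing violation; matching on the error text (rather than the Windows error code) is an assumption made to keep the example portable:

```go
package winretry

import (
	"os"
	"strings"
	"time"
)

// removeWithRetry retries os.Remove for a short while if Windows reports a
// sharing violation caused by an asynchronous close still in flight.
// Matching on the error text is a simplification for this sketch.
func removeWithRetry(path string, maxTries int) error {
	var err error
	for try := 0; try < maxTries; try++ {
		err = os.Remove(path)
		if err == nil || !strings.Contains(err.Error(), "sharing violation") {
			return err
		}
		time.Sleep(100 * time.Millisecond)
	}
	return err
}
```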
Unfortunately multipart uploads can't upload zero length files, so
bring back the single part upload for zero length files only.
This was broken when we made all uploads multipart uploads.
The sftp library delivers the attributes of the symlink rather than of
the object pointed to in directory listings; however, when we use Stat
from the library it follows the link to the object.
Prior to this fix this caused items pointed to by symlinks to be
unusable.
After the fix both symlinked files and directories work as expected.
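A minimal sketch of the fix using github.com/pkg/sftp: the listing entries describe the link itself, so Stat is called on symlinks to pick up the target's attributes. The function name is made up:

```go
package sftplist

import (
	"os"
	"path"

	"github.com/pkg/sftp"
)

// listResolved lists a directory, resolving symlinks so that linked files
// and directories carry the attributes of their targets rather than of the
// link itself (which is what ReadDir returns).
func listResolved(c *sftp.Client, dir string) ([]os.FileInfo, error) {
	entries, err := c.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for i, entry := range entries {
		if entry.Mode()&os.ModeSymlink == 0 {
			continue
		}
		// Stat follows the link, unlike the listing attributes.
		resolved, err := c.Stat(path.Join(dir, entry.Name()))
		if err != nil {
			continue // dangling symlink - keep the link's own info
		}
		entries[i] = resolved
	}
	return entries, nil
}
```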
From testing it appears that CEPH no longer works properly with v2
auth and neither does Dreamhost, so update the docs and configuration
to recommend v4 auth.
This is a problem when syncing a file which just needed its modtime
set with dropbox, which can't set the mod time of a file without
re-uploading it.
Before this change we would delete the file, then the server side move
would fail moving the file to the backup-dir because it no longer
existed.
After this change the destination file is moved to the backup-dir
instead of being deleted and the new file is uploaded.
Fixes #2134
* All remotes now support RangeOption so remove SeekOption
* Correct off-by-one error as RangeOption arguments are inclusive (see the sketch below)
* Use RangeSeek in preference to Seek if available
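A minimal sketch of the inclusive-range convention as it maps onto an HTTP Range header; this is an illustration of the arithmetic, not rclone's actual RangeOption code:

```go
package main

import "fmt"

// rangeHeader renders an inclusive byte range as an HTTP Range header.
// With inclusive ends the number of bytes requested is end-start+1, which
// is where an off-by-one can creep in.
func rangeHeader(start, end int64) string {
	if end < 0 {
		// A negative end means "read to the end of the object".
		return fmt.Sprintf("bytes=%d-", start)
	}
	return fmt.Sprintf("bytes=%d-%d", start, end)
}

func main() {
	fmt.Println(rangeHeader(0, 9))   // bytes=0-9 requests 10 bytes
	fmt.Println(rangeHeader(10, -1)) // bytes=10- requests the rest
}
```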
In a typical rclone copy to a bucket/container based remote, before
this change we were doing a list, followed by a HEAD of the bucket to
check it existed before doing the copy. The fact the list succeeded
means the bucket exists so mark it OK at that point.
Issue #1421
Before this change `rclone move localdir /mnt/different-fs` would
error. Now it falls back to moving individual files, which in turn
falls back to copying individual files across the filesystem boundary.
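A simplified, single-file illustration of the fallback: try a rename and, if the OS reports a cross-device link error, copy and delete instead. rclone's actual fallback lives in the higher level move/copy machinery, but the idea is the same:

```go
package localmove

import (
	"errors"
	"io"
	"os"
	"syscall"
)

// moveFile renames src to dst, falling back to copy-and-delete when the
// destination is on a different filesystem (EXDEV on Unix).
func moveFile(src, dst string) error {
	err := os.Rename(src, dst)
	if err == nil || !errors.Is(err, syscall.EXDEV) {
		return err
	}
	// Cross-device move: copy the data, then remove the original.
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err := out.Close(); err != nil {
		return err
	}
	return os.Remove(src)
}
```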
Because of a bug in the OneDrive API it will sometimes report the wrong
size. If the size is wrong, other remotes that depend on the size might
fail. To fix this we overwrite the object's size with the real size
from the Content-Length header.
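A minimal sketch of the workaround; the function name is made up and the real change lives in the backend's download path:

```go
package onedrivesize

import "net/http"

// realSize returns the size to use for a downloaded object: prefer the
// Content-Length of the response when the server sent one, falling back to
// the (possibly wrong) size reported in the listing.
func realSize(resp *http.Response, listedSize int64) int64 {
	if resp != nil && resp.ContentLength >= 0 {
		return resp.ContentLength
	}
	return listedSize
}
```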
This was caused by inconsistent escaping of the URL in the prefix
check, so we now check that the URL links back to the correct host and
scheme instead of using a prefix check.
The decoded path check will catch any URLs which are outside of the
root.
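A minimal sketch of checking a returned href against the base URL's scheme, host and decoded path instead of a raw string prefix; the function name is made up:

```go
package webdavcheck

import (
	"net/url"
	"strings"
)

// underRoot reports whether href points back at the same scheme and host as
// base and whether its decoded path lies under base's path. This avoids
// comparing inconsistently escaped strings with a prefix check.
func underRoot(base *url.URL, href string) bool {
	u, err := base.Parse(href)
	if err != nil {
		return false
	}
	if u.Scheme != base.Scheme || u.Host != base.Host {
		return false
	}
	return strings.HasPrefix(u.Path, base.Path)
}
```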
This removes the old system of part accounting and replaces it with a
system of popping off the accounting reader and wrapping up new ones
as necessary.
This makes it much easier to carry the context down the chain of
wrapped readers and get the limiting as near as possible to the
output. This makes the accounting more accurate and the bandwidth
limiting smoother.
Fixes #2029 and Fixes #1443
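A minimal sketch of the wrapping idea: a counting reader that can be popped off and re-applied so the accounting sits next to the output. This is an illustration, not the actual accounting package:

```go
package accounting

import "io"

// countingReader wraps an io.Reader and counts the bytes read through it,
// so accounting and bandwidth limiting can sit as close to the output as
// possible.
type countingReader struct {
	in    io.Reader
	bytes int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.in.Read(p)
	c.bytes += int64(n)
	return n, err
}

// unwrap pops the accounting wrapper off a reader so the underlying stream
// can be re-wrapped further down the chain of readers.
func unwrap(r io.Reader) io.Reader {
	if c, ok := r.(*countingReader); ok {
		return c.in
	}
	return r
}
```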
This fixes uploads to existing files for Google Drive introduced by #2007.
Instead of updating the old file a new "Untitled" file would be created
in the root folder.
A Range request can never request 0 bytes; however, this change was
made to give a clearer signal that the limit means read to the end.
Add a test and more documentation, and fix up uses.
It is unnecessary to notify the node.Parents, because a change event is
generated for all files and folders involved in a move from d1/f1 to
d2/f1. There will be an event for d1, d2 and f1.
Additionally, a duplicate notification is resolved when the empty string
is in pathsToClear.
Related to #2006
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.