Commit Graph

917 Commits

Author SHA1 Message Date
Nick Craig-Wood
78ca08ba8a pcloud: fix initial config "Auth state doesn't match" message #4210
pCloud should be passing back the state parameter that rclone passed
in on config but it seems to have got lost somewhere.

This sets a work-around for the pCloud backend allowing an empty state
parameter.

See: https://forum.rclone.org/t/cannot-connect-to-pcloud/16592
See: https://forum.rclone.org/t/cannot-create-pcloud-config-file-on-osx/16583
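A minimal sketch of the kind of work-around described above, assuming a hypothetical verifyState helper rather than rclone's actual oauthutil code: an empty returned state is tolerated for pCloud while a mismatched non-empty state is still rejected.

```go
package main

import (
	"errors"
	"fmt"
)

// verifyState is a hypothetical helper: it accepts the state the provider
// returned if it matches what we sent, or if the provider (like pCloud here)
// is allowed to return an empty state.
func verifyState(sent, got string, allowEmptyState bool) error {
	if got == sent {
		return nil
	}
	if got == "" && allowEmptyState {
		return nil // work-around: pCloud loses the state parameter
	}
	return errors.New("Auth state doesn't match")
}

func main() {
	fmt.Println(verifyState("abc123", "", true))        // <nil> - allowed for pCloud
	fmt.Println(verifyState("abc123", "xyz", true))     // Auth state doesn't match
	fmt.Println(verifyState("abc123", "abc123", false)) // <nil>
}
```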
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
49ba4eeb86 oauthutil: tidy interface to Config to add Options struct
The interface was getting so that a new function was needed for every
Config variant. Adding an Options struct fixes this.
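The pattern being described is the usual Go approach of replacing a growing family of function variants with a single options struct. A generic sketch, with field and function names that are illustrative only and not rclone's actual oauthutil API:

```go
package main

import "fmt"

// Options gathers the knobs that previously needed a new Config* function
// for every combination (names here are illustrative only).
type Options struct {
	NoOffline    bool     // don't request offline access
	StateBlankOK bool     // allow an empty state parameter (e.g. pCloud)
	Scopes       []string // OAuth scopes to request
}

// Config takes the variant behaviour via opts instead of via new functions.
func Config(name string, opts Options) error {
	fmt.Printf("configuring %q with %+v\n", name, opts)
	return nil
}

func main() {
	_ = Config("pcloud", Options{StateBlankOK: true})
	_ = Config("drive", Options{Scopes: []string{"drive"}})
}
```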
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
c08617c70f box: Calculate Free amount in About call 2020-05-25 16:47:34 +01:00
Martin Michlmayr
041b201abd doc: fix typos throughout docs and code 2020-05-25 11:23:58 +01:00
Nick Craig-Wood
9db8ecbc32 box: implement About to read size used - fixes #4264 2020-05-23 18:46:44 +01:00
Martin Michlmayr
a36ef8582f doc: use consistent capitalization 2020-05-20 15:54:51 +01:00
Martin Michlmayr
f34a40a709 swift: fix cosmetic issue in error message 2020-05-20 15:54:51 +01:00
Martin Michlmayr
4aee962233 doc: fix typos throughout docs and code 2020-05-20 15:54:51 +01:00
Fred
5f71d186b2 seafile: implement 2FA 2020-05-20 15:46:35 +01:00
Nick Craig-Wood
cf5d0f5c1f Revert "drive: server side copy docs use default description if empty"
This reverts commit 9e4b68a364.

This does not work as intended - it only changes docs files and to
make it change drive files would take an extra roundtrip.

I think the semantics of server side copy are now correct - additional
features should be added with a new flag.

See #4230
2020-05-19 16:48:02 +01:00
Nick Craig-Wood
bdafbad61e cache: fix tests writing to empty path
This meant the tests were writing to the current directory instead of
a temporary directory.
2020-05-19 16:01:35 +01:00
Brandon McNama
19ff7c9302 cache: Fix Server Side Copy with Temp Upload
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.

Fixes #3206
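Conceptually the fix is about feature detection on the wrapped backend. A simplified sketch of how such optional-interface detection typically looks in Go; rclone's real Fs feature plumbing is more involved and these types are illustrative:

```go
package main

import "fmt"

// Copier is an optional interface a wrapped backend may implement.
type Copier interface {
	Copy(src, dst string) error
}

type b2Like struct{} // stands in for a backend that supports server side copy

func (b2Like) Copy(src, dst string) error {
	fmt.Printf("server side copy %s -> %s\n", src, dst)
	return nil
}

// cacheFs wraps another backend; it should advertise Copy whenever the
// wrapped backend does, even when a temp upload path is configured.
type cacheFs struct {
	wrapped interface{}
}

func (c cacheFs) Copy(src, dst string) error {
	if copier, ok := c.wrapped.(Copier); ok {
		return copier.Copy(src, dst) // delegate to the wrapped backend
	}
	return fmt.Errorf("server side copy not supported")
}

func main() {
	fmt.Println(cacheFs{wrapped: b2Like{}}.Copy("a", "b"))
}
```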
2020-05-19 12:17:40 +01:00
Martin Michlmayr
fb169a8b54
doc: fix typos throughout docs 2020-05-19 12:02:44 +01:00
calisro
bcbfad1482
sftp: added --sftp-pem-key to support inline key files 2020-05-19 11:55:38 +01:00
Nick Craig-Wood
610f40f700 local: implement --local-no-sparse flag for disabling sparse files #2469
This also introduces a one time warning for sparse files and updates
the docs to warn about them.
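A one time warning like this is typically done with sync.Once; a small sketch, with the flag name taken from the commit subject and everything else illustrative:

```go
package main

import (
	"log"
	"sync"
)

var sparseWarnOnce sync.Once

// warnSparse logs a single warning the first time a sparse file would be
// written, no matter how many files are transferred.
func warnSparse(noSparse bool) {
	if noSparse { // --local-no-sparse disables sparse files entirely
		return
	}
	sparseWarnOnce.Do(func() {
		log.Println("NOTICE: writing sparse files: use --local-no-sparse to disable")
	})
}

func main() {
	for i := 0; i < 3; i++ {
		warnSparse(false) // the notice is printed only once
	}
}
```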
2020-05-19 10:16:43 +01:00
Brandon Philips
633f50cd3e
googlephotos: create feature/favorites directory - Fixes #4189
Enables access to “Favorite” images on the Google Photos backend.

This adds a “feature/favorites” folder in the Google Photos backend
and uses the Feature Filter API:

https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#Filters
2020-05-18 17:55:16 +01:00
Nick Craig-Wood
e4f1e19127 sftp: fix post transfer copies failing with 0 size when using set_modtime=false
Before this change we exited the SetModTime call early, which meant we
skipped reading the info about the file.

This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.

See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362
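A sketch of the shape of the fix, with an illustrative object type rather than the real sftp backend code: the file info is always read, and only the actual modtime update is skipped when set_modtime=false.

```go
package main

import (
	"fmt"
	"time"
)

type object struct {
	size    int64
	modTime time.Time
}

// setModTime always refreshes the object's metadata (so the size is known
// for post-transfer checks) and only skips the remote modtime update itself.
func (o *object) setModTime(t time.Time, setModTimeEnabled bool) error {
	// Previously this returned early when setModTimeEnabled was false,
	// leaving o.size unset and post-transfer copies reporting 0 size.
	o.size = 42 // stands in for reading the file info over SFTP
	if !setModTimeEnabled {
		return nil
	}
	o.modTime = t
	return nil
}

func main() {
	o := &object{}
	_ = o.setModTime(time.Now(), false)
	fmt.Println("size after SetModTime:", o.size)
}
```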
2020-05-14 17:30:01 +01:00
Nick Craig-Wood
4a1b644bfb azureblob: implement streaming of unknown sized files
See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286/3
2020-05-14 11:56:15 +01:00
Nick Craig-Wood
8c9c86c3d6 putio: fix parsing of remotes with leading and trailing /
See: https://forum.rclone.org/t/unable-to-copy-from-remote-but-mount-works/16351/
2020-05-14 11:52:43 +01:00
Nick Craig-Wood
8a58e0235d s3: don't leak memory or tokens in edge cases for multipart upload 2020-05-14 07:48:18 +01:00
Nick Craig-Wood
52b7337d28 crypt: change backend encode/decode to output a plain list
This commit changes the output of the rclone backend encode crypt: and
decode commands to output a plain list of decoded or encoded file
names.

This makes the command much more useful for command line scripting.
2020-05-13 18:11:45 +01:00
Max Sum
33d9310c49
union: enable ListR when upstreams contain local
Enable fast list functions for union backend when:

- at least one of the upstreams supports fast list
- upstreams only consist of backends that support fast list and local backend.

Fixes #3000
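The two bullet points translate into a simple predicate over the upstreams' features; a sketch with illustrative types, not rclone's actual union implementation:

```go
package main

import "fmt"

type upstream struct {
	name     string
	isLocal  bool
	hasListR bool // supports fast list (ListR)
}

// supportsListR reports whether the union as a whole can offer ListR:
// at least one upstream must support it, and every upstream must either
// support it or be a local backend.
func supportsListR(ups []upstream) bool {
	atLeastOne := false
	for _, u := range ups {
		if u.hasListR {
			atLeastOne = true
		} else if !u.isLocal {
			return false
		}
	}
	return atLeastOne
}

func main() {
	fmt.Println(supportsListR([]upstream{{"s3", false, true}, {"local", true, false}}))    // true
	fmt.Println(supportsListR([]upstream{{"sftp", false, false}, {"local", true, false}})) // false
}
```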
2020-05-13 13:10:35 +01:00
Nick Craig-Wood
9e4b68a364 drive: server side copy docs use default description if empty
When server side copying Google docs files we attempt to preserve the
description.

This patch makes it so that we use the default description if the
original description was empty.

See: 6fdd7149c1 (commitcomment-38008638)
2020-05-13 12:31:37 +01:00
Nick Craig-Wood
d342f9f942 azureblob: fix permission error on SAS URL limited to container
Before this change, for some operations, eg rcat or copyto (of a file)
rclone would attempt to create the container when using a SAS URL
limited to a container.

After this change we assume the container does not need creating when
using a container SAS URL.

See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286
2020-05-13 09:11:51 +01:00
Nick Craig-Wood
8ddb3fbb2e drive: fix using list recursive on shortcuts to directories 2020-05-12 17:08:05 +01:00
Nick Craig-Wood
b91e01fd22 drive: strip trailing slashes in shortcut command #4098
This also fixes a typo in the name of the function, and allows making
shortcuts from the root directory, which is useful in cross-drive
shortcut creation.

This also adds a basic suite of tests for creating, listing and
removing shortcuts.
2020-05-12 17:08:05 +01:00
Caleb Case
0ce662faad Tardigrade Backend 2020-05-12 15:56:50 +00:00
Max Sum
54b16bd054 union: implement ListR 2020-05-10 17:57:03 +00:00
Max Sum
f21e97001b union: fix server-side copy 2020-05-10 17:56:18 +00:00
Nick Craig-Wood
bb65974e2f drive: implement backend shortcut command for creating shortcuts #4098 2020-05-09 15:16:15 +01:00
Nick Craig-Wood
bc0f487369 drive: look for dirs as well as files on NewObject
This means that we can return ErrorNotAFile when there is an object
with the same name as a directory rather than potentially creating a
duplicate name.
2020-05-09 15:16:15 +01:00
Fred
c754e89906 seafile: New backend for seafile server 2020-05-06 17:33:22 +00:00
Nick Craig-Wood
afde340c9e gcs: fix --header-upload - #59
Before this fix we were setting the headers on the PUT request. However this isn't where GCS needs them.

After this fix we set the headers in the object upload request itself.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
2020-05-06 17:34:23 +01:00
Anagh Kumar Baranwal
a86196a156 drive: Added command to change service_account_file and chunk_size
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Anagh Kumar Baranwal
856c2b565f crypt: Added decode/encode commands to replicate functionality of cryptdecode
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Nick Craig-Wood
14cab0fff0 local: fix "file not found" errors on post transfer Hash calculation
Before this change the local backend was returning file not found
errors for post transfer hashes for files which were moved. This was
caused by the routine which checks for the object being changed.

After this change we ignore file not found errors while checking to
see if the object has changed. If the hash has to be computed then a
file not found error will be thrown when it is opened, otherwise the
cached hash will be returned.
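A sketch of the behaviour described, using the standard library's os.IsNotExist check; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// checkChanged is an illustrative stand-in for the local backend's routine
// that re-stats a file to see whether it changed after a transfer.
// A file-not-found error (the file was moved away) is no longer fatal here;
// any later hash computation will surface the error when opening the file.
func checkChanged(path string) error {
	_, err := os.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // ignore: the file was moved, the cached hash is still usable
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(checkChanged("/definitely/missing/file")) // <nil>
}
```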
2020-05-04 12:17:46 +01:00
Nick Craig-Wood
f2b1fedc4f drive: follow shortcuts by default, skip with --drive-skip-shortcuts
Before this change rclone would skip all shortcuts with a message

    Ignoring unknown document type "application/vnd.google-apps.shortcut"

After this change rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.

The --drive-skip-shortcuts flag can be used to skip shortcuts.
2020-05-02 18:28:38 +01:00
Nick Craig-Wood
b52a39a84e drive: fix merge breakage
In 2f5a2d3c48 an incorrect merge caused compilation to fail
2020-05-01 13:02:32 +01:00
Nick Craig-Wood
2f5a2d3c48 drive: Don't return nil Object with nil error from newObject* functions.
Before this change the newObject* functions could return object=nil
with err=nil.  The result of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.

After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. In the one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean skip the
directory item.

This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
2020-04-30 17:11:36 +01:00
Nick Craig-Wood
74d9dabdff b2: force the case of the SHA1 to lowercase - fixes #4162
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
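The fix amounts to a straightforward normalisation of the hex digest; a sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// normaliseSHA1 forces the hex digest to lower case, so digests uploaded in
// upper case by other tools (e.g. duplicati) still compare equal.
func normaliseSHA1(s string) string {
	return strings.ToLower(s)
}

func main() {
	fmt.Println(normaliseSHA1("3F786850E387550FDAB836ED7E6DC881DE23001B"))
}
```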
2020-04-29 17:08:21 +01:00
Nick Craig-Wood
90d738b561 cache: implement rclone backend stats command 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
e2916f3a55 local: implement backend command "noop" for testing purposes 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
37a53570d4 azureblob: implement memory pooling to control memory use
This commit implements memory pooling to control excessive memory use
as was implemented in the s3 backend.
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
ee7219aa20 azureblob: add --azureblob-disable-checksum flag 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
b1d8da484b azureblob: retry InvalidBlobOrBlock error as it may indicate block concurrency problems
According to Microsoft support this error can be caused by

> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.

See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
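In retry-classification terms the change amounts to treating that service code as retryable; a sketch using an illustrative error type rather than the Azure SDK's own:

```go
package main

import "fmt"

// storageError is an illustrative stand-in for the Azure SDK's storage error.
type storageError struct {
	serviceCode string
}

func (e storageError) Error() string { return e.serviceCode }

// shouldRetry marks InvalidBlobOrBlock as retryable, since it can simply
// indicate Put Block / Put Block List timing issues on concurrent uploads.
func shouldRetry(err error) bool {
	if se, ok := err.(storageError); ok && se.serviceCode == "InvalidBlobOrBlock" {
		return true
	}
	return false
}

func main() {
	fmt.Println(shouldRetry(storageError{"InvalidBlobOrBlock"}))   // true
	fmt.Println(shouldRetry(storageError{"AuthenticationFailed"})) // false
}
```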
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
4e869e03f7 s3: improve docs for --s3-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
52c9647b06 b2: improve docs for --b2-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
551a829eba googlephotos: don't put an image in error message - fixes #4144
For a certain class of broken or missing images Google Photos puts an
image in the error message.

Before this fix we blindly chucked it into the error message.

After this fix we replace it with some sensible text.
2020-04-28 16:51:47 +01:00
Adam Stroud
8e91f83174 googlecloudstorage: Add ARCHIVE storage class to help 2020-04-27 11:40:21 +01:00
buengese
7f776c64f0 fichier: implement custom pacer to deal with the new rate limiting 2020-04-26 20:38:56 +02:00
David
0c0ed2fe04 box: Remove unnecessary iat from jws claims 2020-04-23 17:52:14 +01:00
Nick Craig-Wood
ab6ed256e5 putio: add support for --header-upload and --header-download #59 2020-04-23 15:55:52 +01:00
Nick Craig-Wood
7c98ecd3ab putio: make downloading files use the rclone http Client
This fixes `--header-download` and these transactions being missed from
`--dump bodies` or `--tpslimit`
2020-04-23 15:48:30 +01:00
Nick Craig-Wood
b502a74cff gcs: add support for --header-upload and --header-download #59 2020-04-23 11:41:57 +01:00
Nick Craig-Wood
8e9c25063a swift: add support for --header-upload and --header-download #59 2020-04-23 11:34:36 +01:00
Tim Gallant
c390fc8100 onedrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
14f6ce1e77 premiumizeme: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
385542e2f9 sharefile: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
fc946d0c44 fichier: pass options to rest.Opts for uploadFile 2020-04-23 11:07:21 +01:00
Tim Gallant
854c84d0ca pcloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
90bd0eb44c webdav: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
3130f870bb sugarsync: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
51b617f601 yandex: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
011ca244b2 jottacloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
9ea1361044 googlephotos: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
776966e22c opendrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
01cb256b84 box: pass options to rest.Opts for uploadPart 2020-04-23 11:07:21 +01:00
Tim Gallant
0b0163dde2 box: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
38123c70eb b2: pass options to rest.Opts for Update 2020-04-23 11:07:21 +01:00
Tim Gallant
5cb7229a16 s3: add support for HTTPOption 2020-04-23 11:07:21 +01:00
Nick Craig-Wood
f8039deb7c s3: fix detection of BucketAlreadyOwnedByYou and BucketAlreadyExists error
This was being silently ignored until this commit

e2bf91452a s3: report errors on bucket creation (mkdir) correctly
2020-04-22 18:14:03 +01:00
Sunil Patra
39319b4858 @Sunil-P
box: Added support for interchangeable root folder for Box backend - #3422
2020-04-22 17:00:13 +01:00
Sunil Patra
4af5c9aed7 pCloud: Added support for interchangeable root folder for pCloud backend - #3957 2020-04-22 16:58:01 +01:00
David Bramwell
8a3c4c6a7b
box: add token renew function for jwt auth - Fixes #4901 2020-04-22 16:53:03 +01:00
Nick Craig-Wood
1648c1a0f3 crypt: calculate hashes for uploads from local disk
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed which is too resource intensive.

However this causes backends which need the hash first before
uploading (eg s3/b2 when uploading chunked files) not to have a hash
of the file. This causes cryptcheck to complain about missing hashes
on large files uploaded via s3/b2.

This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than calculating the hash.

See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes: #2809
2020-04-22 11:33:48 +01:00
Nick Craig-Wood
44b1a591a8 crypt: get rid of the unused Cipher interface as it obfuscated the code 2020-04-22 11:33:48 +01:00
Nick Craig-Wood
bbb6f94377 fstest: create AssertTimeEqualWithPrecision from CheckTimeEqualWithPrecision 2020-04-22 11:33:00 +01:00
Nick Craig-Wood
cd3c699f28 lib/readers: factor ErrorReader from multiple sources 2020-04-19 15:18:49 +01:00
Nick Craig-Wood
36d2c46bcf local: factor PreAllocate and SetSparse to lib/file 2020-04-19 15:18:49 +01:00
Daven
4c258787b5
googlephotos: make the start year configurable - fixes #3630 2020-04-15 18:08:07 +01:00
Nick Craig-Wood
e2bf91452a s3: report errors on bucket creation (mkdir) correctly
Before this fix errors on bucket creation were being silently
swallowed.

See: https://forum.rclone.org/t/rclone-with-brand-new-aws-account-for-s3/15590
2020-04-15 13:13:13 +01:00
Michał Matczuk
6893ce0bbf s3: do not resize buf on put to memBuf
This is handled by Pool implementation.
2020-04-11 16:35:48 +01:00
Michał Matczuk
399cf18013 s3: use single memory pool
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of a size other than ChunkSize can only happen when we have a huge file (over 10k * ChunkSize),
and we would need a bunch of identically sized huge files for the mapping to help.
In such a case ChunkSize should most likely be increased.

The mapping and its lock are replaced with a single initialised pool for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
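The idea of one pool keyed to ChunkSize, with ad-hoc allocation for other sizes, can be sketched with the standard library's sync.Pool; rclone's own lib/pool implementation is more elaborate:

```go
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 5 * 1024 * 1024 // the single pooled buffer size

var chunkPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

// getBuf returns a pooled buffer for the common case and a one-off
// allocation for unusual sizes (huge files with a different part size).
func getBuf(size int) []byte {
	if size == chunkSize {
		return chunkPool.Get().([]byte)
	}
	return make([]byte, size)
}

// putBuf returns only standard-sized buffers to the pool; others are
// simply dropped and garbage collected.
func putBuf(buf []byte) {
	if len(buf) == chunkSize {
		chunkPool.Put(buf)
	}
}

func main() {
	b := getBuf(chunkSize)
	fmt.Println("got buffer of", len(b), "bytes")
	putBuf(b)
}
```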
2020-04-11 16:34:05 +01:00
buengese
64b5105edd jottacloud: implement cleanup 2020-04-11 16:42:25 +02:00
buengese
2c2f4a6a05 jottacloud: implement --jottacloud-trashed-only 2020-04-11 16:42:25 +02:00
Jack Anderson
815ae7df45 backend/s3: add SSE-C support for AWS, Ceph, and MinIO 2020-03-31 18:16:45 +01:00
Nick Craig-Wood
ff0a299bfb drive: don't delete files with multiple parents to avoid data loss
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away to be replaced by drive shortcuts we return an
error for now.

See #4013
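The guard described here is essentially a pre-delete check; a sketch with illustrative types (the real change lives in the drive backend):

```go
package main

import (
	"errors"
	"fmt"
)

type driveFile struct {
	name    string
	parents []string
}

// safeDelete refuses to delete a file with more than one parent, since doing
// so without PATCHing the parents list could lose the file from other folders.
func safeDelete(f driveFile) error {
	if len(f.parents) > 1 {
		return errors.New("can't delete safely: file has multiple parents")
	}
	fmt.Println("deleting", f.name)
	return nil
}

func main() {
	fmt.Println(safeDelete(driveFile{"a.txt", []string{"folder1", "folder2"}}))
	fmt.Println(safeDelete(driveFile{"b.txt", []string{"folder1"}}))
}
```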
2020-03-31 17:28:32 +01:00
Nick Craig-Wood
b5f78cd7b4 b2: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:17 +01:00
Nick Craig-Wood
ef99ca68aa gcs: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:10 +01:00
Nick Craig-Wood
a5c2f2c138 s3: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:45:52 +01:00
harry
d91a547d59 dropbox: make error insufficient space to be fatal 2020-03-26 16:19:50 +00:00
harry
7d9ca3998e drive: Extend --drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded.
Fixes #3979
2020-03-26 16:19:50 +00:00
harry
9aa32bc269 onedrive: make error quotaLimitReached to be fatal - Fixes #4089 2020-03-26 16:19:50 +00:00
Nick Craig-Wood
d9c8c47e02 onedrive: add missing drive on config - fixes #4068
Before this change we queried /me/drives for a list of the user's
drives and asked the user to choose. Sometimes this does not return
the user's main drive for reasons unknown.

After this change we query /me/drives first, then /me/drive, and add
that to the list of drives if it wasn't already there.
2020-03-24 08:44:10 +00:00
Max Sum
78a9e7440a union: Implement multiple writable remotes 2020-03-21 18:11:24 +00:00
Nick Craig-Wood
472d4799d1 qingstor: make rclone cleanup remove pending multipart uploads older than 24h 2020-03-18 12:49:21 +00:00
Nick Craig-Wood
84caf1e158 qingstor: try harder to cancel failed multipart uploads 2020-03-18 12:49:21 +00:00
greatroar
0f20f23651 cache: move methods used for testing into test file 2020-03-16 18:41:32 +00:00
Lars Lehtonen
a6a2eec392 backend/b2: remove unused largeUpload.clearUploadURL() 2020-03-16 17:11:19 +00:00
Nick Craig-Wood
77e94be280 onedrive: implement --onedrive-server-side-across-configs - fixes #4058 2020-03-15 21:10:23 +00:00
Nick Craig-Wood
dc06973796 s3: use rclone's low level retries instead of AWS SDK to fix listing retries
In 5470d34740 "backend/s3: use low-level-retries as the number
of SDK retries" we switched over to using the AWS SDK low level
retries instead of rclone's low level retry logic.

This had the unfortunate effect that retrying listings to correct XML
Syntax errors failed on non-S3 backends such as CEPH. The AWS SDK was
also retrying the XML Syntax error request which doesn't make sense.

This change turns off the AWS SDK retries in favour of just using
rclone's retry logic.
2020-03-14 18:04:24 +00:00
Nick Craig-Wood
6fdd7149c1 drive: don't overwrite the description on server side copy
See: https://forum.rclone.org/t/is-there-a-way-to-sync-while-keeping-file-description-on-the-destination/14609
2020-03-12 10:39:00 +00:00
Harry
fdb07f2f89
onedrive: Added maximum chunk size limit warning in the docs
If the chunk size is more than 250M (262,144,000 bytes) then the API throws the following error:

Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big. The server does not allow messages larger than 262144000 bytes.
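The limit is easy to check up front; a sketch of the arithmetic (250M here means 250 MiB, i.e. 250 × 1,048,576 = 262,144,000 bytes), with an illustrative validation helper:

```go
package main

import "fmt"

const maxChunkSize = 250 * 1024 * 1024 // 262,144,000 bytes, the SharePoint/OneDrive cap

// checkChunkSize is an illustrative validation of a configured chunk size.
func checkChunkSize(size int64) error {
	if size > maxChunkSize {
		return fmt.Errorf("chunk size %d exceeds the maximum of %d bytes", size, maxChunkSize)
	}
	return nil
}

func main() {
	fmt.Println(checkChunkSize(maxChunkSize))     // <nil>
	fmt.Println(checkChunkSize(maxChunkSize + 1)) // error
}
```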
2020-03-10 15:14:08 +00:00
Joachim Brandon LeBlanc
132ce94139
backend/s3: use the provided size parameter when allocating a new memory pool - fixes #4047 (#4049) 2020-03-09 16:56:21 +00:00
Nick Craig-Wood
a492c0fb0e local: speed up multi thread downloads by using sparse files on Windows
Before this change rclone didn't use sparse files on Windows. This
meant that with multithread download, the first write that was not at
the start of the file caused the entire file up to that point to be
filled with zeros first.

This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
2020-03-09 10:55:52 +00:00
Nick Craig-Wood
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
Before this change shared with me items with multiple parents (ie most
of them that aren't in the root) would appear twice in the directory
listings.

This fixes the problem by doing an early exit for shared with me
items.
2020-03-07 16:46:53 +00:00
Nick Craig-Wood
38e59ebdf3 drive: fix missing files when using --fast-list and --drive-shared-with-me
This bug was introduced here by removing some necessary code detecting
shared with me items at the root with no parents.

4453fa4ba6 "drive: fix --fast-list when using appDataFolder"

This fix reverts that part of the patch.

Fixes #4018
2020-03-07 16:46:53 +00:00
Yves G
5ee24f804f webdav: report full and consistent usage with about
— allow either Used or Available to be ==0 (remote full or empty)
— compute Total if both values are received
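The second point is just addition when both values are present; a sketch with an illustrative usage struct (rclone's real fs.Usage uses pointer fields in a similar way to distinguish "not reported" from zero):

```go
package main

import "fmt"

// usage mimics an about result where nil means "not reported by the server".
type usage struct {
	Used, Free, Total *int64
}

func pint(v int64) *int64 { return &v }

// fillTotal computes Total = Used + Free when both are known and the server
// didn't report a total itself. A zero Used or Free is perfectly valid
// (a completely full or completely empty remote).
func fillTotal(u *usage) {
	if u.Total == nil && u.Used != nil && u.Free != nil {
		u.Total = pint(*u.Used + *u.Free)
	}
}

func main() {
	u := usage{Used: pint(0), Free: pint(1 << 30)}
	fillTotal(&u)
	fmt.Println("total:", *u.Total)
}
```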
2020-03-05 15:10:19 +00:00
Lars Lehtonen
fef2c6bf7a backend/s3: replace deprecated session.New() with session.NewSession() 2020-03-05 11:34:10 +00:00
Robert-André Mauchin
e2e400e63c Use proper import path go.etcd.io/bbolt
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
2020-03-03 12:40:52 +00:00
Nick Craig-Wood
4d8d1e287b googlephotos: fix "concurrent map write" error - fixes #4003
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.

All the other accesses have locking - this one must have got missed.
2020-03-02 18:12:46 +00:00
Nick Craig-Wood
7d70eb0346 ftp: attempt to work-around pureftp sending spurious 150 messages
pureftpd has a bug where it sends messages like this

```
    150-Accepted data connection\r\n
        Response code: File status okay; about to open data connection (150)
        Response arg: Accepted data connection
    150 32768.0 kbytes to download\r\n
    150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```

The last `150` is treated as a new response - the previous `150` should have been `150-`.

This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.

This fix ignores that specific message when it is received in the
`Close` method. It dumps the FTP connection after as it is out of
sync.

See: #3984
Fixes #3445
2020-03-01 09:17:51 +00:00
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
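All three of these lockups share the same shape: a token taken from a limiting channel was only put back on the success path. A generic sketch of the fix using defer, with illustrative names:

```go
package main

import (
	"errors"
	"fmt"
)

// tokens is a simple concurrency limiter: take a slot to start an operation,
// return it when done.
var tokens = make(chan struct{}, 2)

// doFTPOperation returns its token even when the operation fails; without the
// defer, repeated failures leak slots until the limiter is exhausted and
// everything locks up.
func doFTPOperation(fail bool) error {
	tokens <- struct{}{}
	defer func() { <-tokens }() // always give the slot back

	if fail {
		return errors.New("connection failed")
	}
	return nil
}

func main() {
	for i := 0; i < 10; i++ {
		_ = doFTPOperation(true) // would deadlock after 2 calls without the defer
	}
	fmt.Println("no lockup:", len(tokens), "tokens in use")
}
```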
2020-02-25 14:38:12 +00:00
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was canceled
because the canceled context prevented the abort request from being sent.
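The usual remedy is to send the abort request with a context that isn't the already-canceled upload context; a sketch with an illustrative abort function rather than the real SDK call:

```go
package main

import (
	"context"
	"fmt"
)

// abortMultipart stands in for the SDK's abort call; like any request, it
// never goes out if its context is already canceled.
func abortMultipart(ctx context.Context, uploadID string) error {
	if err := ctx.Err(); err != nil {
		return err
	}
	fmt.Println("aborted upload", uploadID)
	return nil
}

func main() {
	uploadCtx, cancel := context.WithCancel(context.Background())
	cancel() // the upload was canceled mid-way

	// Wrong: reusing the canceled upload context, the abort never happens.
	fmt.Println(abortMultipart(uploadCtx, "id-123"))

	// Fix: abort with a fresh context so the request can actually be sent.
	fmt.Println(abortMultipart(context.Background(), "id-123"))
}
```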
2020-02-25 12:11:32 +01:00
Aleksandar Janković
5470d34740
backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason users
will face status 500 errors.
The main mechanism for handling these errors is retries.
The number of retries needed varies for each use case.

This change makes retries for the s3 backend configurable by using the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Previously each multipart upload allocated its own buffers, which were
garbage collected after the file upload. Subsequent files couldn't
leverage the already allocated memory, which resulted in inefficient
memory management. This change introduces a backend memory pool keeping
memory chunks which can be used during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails an error is returned and gCtx is cancelled.
This does not prevent other parts from being tried.
They immediately fail due to the canceled context, but are retried by rclone anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds a fail fast behaviour in case the context was cancelled.
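A sketch of the fail-fast check, with an illustrative part uploader: before starting (or retrying) a part, bail out if the group context has already been canceled, instead of issuing a request that can never succeed.

```go
package main

import (
	"context"
	"fmt"
)

// uploadPart is an illustrative part uploader with a fail-fast guard.
func uploadPart(gCtx context.Context, partNumber int) error {
	// Fail fast: if another part already failed and canceled gCtx, don't
	// start (or retry) this part at all.
	if err := gCtx.Err(); err != nil {
		return fmt.Errorf("part %d not attempted: %w", partNumber, err)
	}
	fmt.Println("uploading part", partNumber)
	return nil
}

func main() {
	gCtx, cancel := context.WithCancel(context.Background())
	fmt.Println(uploadPart(gCtx, 1))
	cancel() // simulate a failed part canceling the group context
	fmt.Println(uploadPart(gCtx, 2))
}
```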
2020-02-12 11:40:34 +00:00
Lars Lehtonen
3dbcf0af2d
backend/cache: Remove Unused Functions
This removes the unused functions run.writeRemoteRandomBytes() run.writeObjectRandomBytes() run.listPath() Directory.parentRemote() and Persistent.dumpRoot().
2020-02-12 11:23:57 +00:00
Lars Lehtonen
ac60b36e77 backend/premiumizeme: prune unused functions 2020-02-11 12:17:35 +00:00
Nick Craig-Wood
25cfeb2a64 webdav: Fix X-OC-Mtime header for Transip compatibility - fixes #3126 2020-02-10 11:57:12 +00:00
Nick Craig-Wood
90377f5e65 s3: Specify that Minio supports URL encoding in listings
Thanks to @harshavardhana for pointing this out

See #3934 for background
2020-02-09 12:03:20 +00:00
Nick Craig-Wood
3dfa63b85c onedrive: fix occasional 416 errors on multipart uploads
Before this change, when uploading multipart files, onedrive would
sometimes return an unexpected 416 error and rclone would abort the
transfer.

This is usually after a 500 error which caused rclone to do a retry.

This change checks the upload position on a 416 error and works out how
much of the current chunk to skip, then retries (or skips) the current
chunk as appropriate.

If the position is before the current chunk or after the current chunk
then rclone will abort the transfer.

See: https://forum.rclone.org/t/fragment-overlap-error-with-encrypted-onedrive/14001

Fixes #3131
2020-02-01 21:15:07 +00:00
Dave Koston
9f99c20232 s3: Add StackPath Object Storage Support 2020-01-31 16:05:44 +00:00
Nick Craig-Wood
97ed8db75d drive: hide dangerous config from the configurator
This hides:

- "use_created_date"
- "use_shared_date"
- "size_as_quota"

from the configurator (rclone config) as they interfere with normal
operations and shouldn't be set in a backend config.

They can still be put in the config file by hand and will still work
as variables, etc.

This adds some more docs to "size_as_quota" also.

Fixes #3912
2020-01-31 10:09:33 +00:00
Motonori IWAMURO
7662f15939
onedrive: add support "Retry-After" header 2020-01-29 12:16:18 +00:00
unbelauscht
151d0a274e swift: Update OVH API endpoint - Fixes #3890 2020-01-24 17:43:00 +00:00
Nick Craig-Wood
e4d2d228bd webdav: add Referer header to fix problems with WAFs - fixes #3868 2020-01-23 15:56:17 +00:00
buengese
9c858c3228 crypt: correctly handle trailing dot 2020-01-22 01:40:04 +01:00
Benjamin Richter
77fa8194f2 onedrive: add Sites.Read.All permission - Fixes #1770 2020-01-20 12:30:19 +00:00
Nick Craig-Wood
24ef00a258 build: implement a framework for starting test servers during tests
Test servers are implemented by docker containers and run real servers
for rclone to test against.
2020-01-18 16:47:37 +00:00
Nick Craig-Wood
00d30ce0d7 opendrive: implement --opendrive-chunk-size #3707 2020-01-18 11:56:01 +00:00
Nick Craig-Wood
db39adeb3e sftp: open files for update write only to fix AWS SFTP interop - fixes #3776 2020-01-18 11:46:56 +00:00
Nick Craig-Wood
ef7ac088c0 operations: make NewOverrideObjectInfo public and factor uses 2020-01-18 11:41:33 +00:00
Nick Craig-Wood
08a3957880 cache: fix fatal error: concurrent map writes - fixes #2378 2020-01-18 11:27:00 +00:00
Nick Craig-Wood
4499b08afc drive: log an ERROR if an incomplete search is returned 2020-01-18 11:22:26 +00:00
Nick Craig-Wood
4b9da601be dropbox: treat insufficient_space errors as non retriable errors
Before this change rclone would keep trying to upload files after
dropbox had signalled it was full.

This change makes the relevant error a non-retriable error.

See: https://forum.rclone.org/t/why-does-a-file-transfer-continue-when-there-is-no-available-storage/13677
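A sketch of the classification described here, using an illustrative error wrapper and a simple string check rather than the real Dropbox SDK types or rclone's fserrors package:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// fatalError marks an error that should stop the whole sync rather than be retried.
type fatalError struct{ error }

// classify wraps insufficient_space errors so the retry logic gives up
// immediately instead of re-uploading into a full Dropbox.
func classify(err error) error {
	if err != nil && strings.Contains(err.Error(), "insufficient_space") {
		return fatalError{err}
	}
	return err
}

func main() {
	err := classify(errors.New("path/insufficient_space/..")) // illustrative error text
	var fe fatalError
	fmt.Println("fatal:", errors.As(err, &fe))
}
```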
2020-01-18 11:10:18 +00:00
Nick Craig-Wood
c789436580 The memory backend
This is a bucket based remote
2020-01-18 10:41:08 +00:00
Nick Craig-Wood
6757244918 drive: use multipart resumable uploads for streaming and uploads in mount
Before this change we used non multipart uploads for files of unknown
size (streaming and uploads in mount).  This is slower and less
reliable and is not recommended by Google for files smaller than 5MB.

After this change we use multipart resumable uploads for all files of
unknown length.  This will use an extra transaction so is less
efficient for files under the chunk size, however the natural
buffering in the operations.Rcat call specified by
`--streaming-upload-cutoff` will overcome this.

See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
2020-01-17 22:03:10 +00:00
Nick Craig-Wood
bedeaf23af sugarsync: new backend - fixes #622 2020-01-17 17:39:34 +00:00