Commit Graph

736 Commits

Michał Matczuk
6893ce0bbf s3: do not resize buf on put to memBuf
This is handled by the Pool implementation.
2020-04-11 16:35:48 +01:00
Michał Matczuk
399cf18013 s3: use single memory pool
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of a size other than ChunkSize can only occur when we have a huge file (over 10k * ChunkSize),
and a shared pool would only pay off with a bunch of identically sized huge files.
In such a case ChunkSize should most likely be increased.

The mapping and its lock are replaced with a single pool initialised for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
2020-04-11 16:34:05 +01:00
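
To illustrate the idea behind this commit (a minimal sketch, not rclone's actual pool code; the bufferPool type and newPool constructor are hypothetical stand-ins), the backend keeps one pool initialised for the common ChunkSize and only builds a throwaway per-file pool for unusual part sizes:

```
package main

import "sync"

// bufferPool is a hypothetical stand-in for rclone's chunk pool.
type bufferPool struct {
	size int
	pool sync.Pool
}

func newPool(size int) *bufferPool {
	return &bufferPool{
		size: size,
		pool: sync.Pool{New: func() interface{} { return make([]byte, size) }},
	}
}

func (p *bufferPool) Get() []byte  { return p.pool.Get().([]byte) }
func (p *bufferPool) Put(b []byte) { p.pool.Put(b) }

const chunkSize = 5 * 1024 * 1024

// sharedPool is initialised once for the common ChunkSize case.
var sharedPool = newPool(chunkSize)

// getPool returns the shared pool for the common case, or a fresh
// per-file pool (freed with the file) when an unusual part size is needed.
func getPool(partSize int) *bufferPool {
	if partSize == chunkSize {
		return sharedPool
	}
	return newPool(partSize)
}

func main() {
	buf := getPool(chunkSize).Get()
	defer sharedPool.Put(buf)
	_ = buf
}
```
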
buengese
64b5105edd jottacloud: implement cleanup 2020-04-11 16:42:25 +02:00
buengese
2c2f4a6a05 jottacloud: implement --jottacloud-trashed-only 2020-04-11 16:42:25 +02:00
Jack Anderson
815ae7df45 backend/s3: add SSE-C support for AWS, Ceph, and MinIO 2020-03-31 18:16:45 +01:00
Nick Craig-Wood
ff0a299bfb drive: don't delete files with multiple parents to avoid data loss
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away, to be replaced by drive shortcuts, we return an
error for now.

See #4013
2020-03-31 17:28:32 +01:00
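
As a rough illustration of the guard described above (not the actual rclone change), a check like this could refuse the delete when the Drive API reports more than one parent; the types come from the google.golang.org/api/drive/v3 package:

```
package main

import (
	"errors"
	"fmt"

	drive "google.golang.org/api/drive/v3"
)

// errMultipleParents mirrors the idea of refusing to delete a file that is
// linked into more than one folder, since removing it would affect all of them.
var errMultipleParents = errors.New("can't delete a file with multiple parents without patching the parents list")

// checkDeletable is a hypothetical helper, not rclone's actual code.
func checkDeletable(f *drive.File) error {
	if len(f.Parents) > 1 {
		return errMultipleParents
	}
	return nil
}

func main() {
	f := &drive.File{Name: "report.txt", Parents: []string{"folderA", "folderB"}}
	if err := checkDeletable(f); err != nil {
		fmt.Println("refusing to delete:", err)
	}
}
```
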
Nick Craig-Wood
b5f78cd7b4 b2: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:17 +01:00
Nick Craig-Wood
ef99ca68aa gcs: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:10 +01:00
Nick Craig-Wood
a5c2f2c138 s3: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:45:52 +01:00
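
The common pattern behind these three commits can be sketched as follows (a simplified illustration, not the backend code itself): when translating bucket listings into rclone entries, a zero-length object whose key ends in "/" is a directory marker and is skipped, including when the stripped remote name is empty, i.e. a marker for the root of the listing, which otherwise showed up as a blank line in lsf:

```
package main

import (
	"fmt"
	"strings"
)

// listedObject is a simplified stand-in for a bucket listing entry.
type listedObject struct {
	Key  string
	Size int64
}

// isDirectoryMarker reports whether the object is a zero-length "dir/" marker.
// remote is the key with the listing prefix stripped; an empty remote means the
// marker is at the root of the listing and must be ignored too.
func isDirectoryMarker(remote string, o listedObject) bool {
	return o.Size == 0 && (remote == "" || strings.HasSuffix(o.Key, "/"))
}

func main() {
	prefix := "photos/"
	objects := []listedObject{
		{Key: "photos/", Size: 0},            // marker at the root of the listing
		{Key: "photos/2020/", Size: 0},       // marker for a subdirectory
		{Key: "photos/cat.jpg", Size: 1234},  // a real file
	}
	for _, o := range objects {
		remote := strings.TrimPrefix(o.Key, prefix)
		if isDirectoryMarker(remote, o) {
			continue // would otherwise appear as a blank line in lsf
		}
		fmt.Println(remote)
	}
}
```
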
harry
d91a547d59 dropbox: make the insufficient space error fatal 2020-03-26 16:19:50 +00:00
harry
7d9ca3998e drive: Extend --drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded.
Fixes #3979
2020-03-26 16:19:50 +00:00
harry
9aa32bc269 onedrive: make the quotaLimitReached error fatal - Fixes #4089 2020-03-26 16:19:50 +00:00
Nick Craig-Wood
d9c8c47e02 onedrive: add missing drive on config - fixes #4068
Before this change we queried /me/drives for a list of the user's
drives and asked the user to choose. Sometimes this does not return
the user's main drive for reasons unknown.

After this change we query /me/drives first, then /me/drive, and add
that to the list of drives if it wasn't already there.
2020-03-24 08:44:10 +00:00
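
A rough sketch of the merging logic described above (hypothetical types, not the real onedrive backend code): query the list endpoint first, then the single-drive endpoint, and append the latter only if its ID is not already present:

```
package main

import "fmt"

// driveInfo is a hypothetical stand-in for the OneDrive drive resource.
type driveInfo struct {
	ID   string
	Name string
}

// mergeDrives appends main to the list returned by /me/drives
// unless a drive with the same ID is already there.
func mergeDrives(listed []driveInfo, main driveInfo) []driveInfo {
	for _, d := range listed {
		if d.ID == main.ID {
			return listed
		}
	}
	return append(listed, main)
}

func main() {
	fromList := []driveInfo{{ID: "b!abc", Name: "Documents"}}
	fromMe := driveInfo{ID: "b!xyz", Name: "OneDrive"} // result of /me/drive
	for _, d := range mergeDrives(fromList, fromMe) {
		fmt.Printf("%s (%s)\n", d.Name, d.ID)
	}
}
```
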
Max Sum
78a9e7440a union: Implement multiple writable remotes 2020-03-21 18:11:24 +00:00
Nick Craig-Wood
472d4799d1 qingstor: make rclone cleanup remove pending multipart uploads older than 24h 2020-03-18 12:49:21 +00:00
Nick Craig-Wood
84caf1e158 qingstor: try harder to cancel failed multipart uploads 2020-03-18 12:49:21 +00:00
greatroar
0f20f23651 cache: move methods used for testing into test file 2020-03-16 18:41:32 +00:00
Lars Lehtonen
a6a2eec392 backend/b2: remove unused largeUpload.clearUploadURL() 2020-03-16 17:11:19 +00:00
Nick Craig-Wood
77e94be280 onedrive: implement --onedrive-server-side-across-configs - fixes #4058 2020-03-15 21:10:23 +00:00
Nick Craig-Wood
dc06973796 s3: use rclone's low level retries instead of AWS SDK to fix listing retries
In 5470d34740 "backend/s3: use low-level-retries as the number
of SDK retries" we switched over to using the AWS SDK low level
retries instead of rclone's low level retry logic.

This had the unfortunate effect that retrying listings to correct XML
syntax errors failed on non S3 backends such as CEPH. The AWS SDK was
also retrying the request that caused the XML syntax error, which
doesn't make sense.

This change turns off the AWS SDK retries in favour of just using
rclone's retry logic.
2020-03-14 18:04:24 +00:00
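
For illustration, turning off the SDK's own retries so that rclone's retry loop is the only one in play might look like this with aws-sdk-go (a sketch under that assumption, not the exact rclone change):

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Disable the SDK's internal retries; retrying (including re-issuing
	// listings that fail with XML syntax errors on non-AWS servers) is
	// left to the caller's own retry logic.
	sess, err := session.NewSession(&aws.Config{
		Region:     aws.String("us-east-1"),
		MaxRetries: aws.Int(0),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("SDK retries disabled:", *sess.Config.MaxRetries)
}
```
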
Nick Craig-Wood
6fdd7149c1 drive: don't overwrite the description on server side copy
See: https://forum.rclone.org/t/is-there-a-way-to-sync-while-keeping-file-description-on-the-destination/14609
2020-03-12 10:39:00 +00:00
Harry
fdb07f2f89
onedrive: Added maximum chunk size limit warning in the docs
If the chunk size is more than 250M (262,144,000 bytes) then the API throws the following error:

Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big. The server does not allow messages larger than 262144000 bytes.
2020-03-10 15:14:08 +00:00
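
A quick validation corresponding to the documented limit (an illustrative sketch; the constant comes straight from the error message above, 250M = 250 * 1024 * 1024 = 262,144,000 bytes):

```
package main

import (
	"errors"
	"fmt"
)

// maxChunkSize is the SharePoint/OneDrive upload fragment limit
// quoted in the error message above: 250M = 262,144,000 bytes.
const maxChunkSize = 250 * 1024 * 1024

func checkChunkSize(cs int64) error {
	if cs > maxChunkSize {
		return errors.New("chunk size must not exceed 250M (262,144,000 bytes)")
	}
	return nil
}

func main() {
	fmt.Println(checkChunkSize(maxChunkSize))     // <nil>
	fmt.Println(checkChunkSize(maxChunkSize + 1)) // error
}
```
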
Joachim Brandon LeBlanc
132ce94139
backend/s3: use the provided size parameter when allocating a new memory pool - fixes #4047 (#4049) 2020-03-09 16:56:21 +00:00
Nick Craig-Wood
a492c0fb0e local: speed up multi thread downloads by using sparse files on Windows
Before this change rclone didn't use sparse files on Windows. This
meant that when you downloaded a file with multi-thread download, the
entire file was filled with zeros on the first write that was not at
the start of the file.

This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
2020-03-09 10:55:52 +00:00
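
One way to mark a file sparse on Windows from Go is the FSCTL_SET_SPARSE ioctl via golang.org/x/sys/windows; this is a general sketch of the technique, not necessarily how rclone does it, and it only builds on Windows:

```
//go:build windows

package main

import (
	"os"

	"golang.org/x/sys/windows"
)

// FSCTL_SET_SPARSE is the standard Windows control code for marking a file sparse.
const FSCTL_SET_SPARSE = 0x000900C4

// setSparse asks NTFS to treat the file as sparse, so regions that have not
// been written yet (e.g. chunks still pending in a multi-thread download)
// are not physically filled with zeros.
func setSparse(f *os.File) error {
	var returned uint32
	return windows.DeviceIoControl(
		windows.Handle(f.Fd()), // file handle
		FSCTL_SET_SPARSE,
		nil, 0, // no input buffer
		nil, 0, // no output buffer
		&returned, nil,
	)
}

func main() {
	f, err := os.Create("download.partial")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := setSparse(f); err != nil {
		panic(err)
	}
}
```
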
Nick Craig-Wood
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
Before this change, shared with me items with multiple parents (i.e.
most of those that aren't in the root) would appear twice in the
directory listings.

This fixes the problem by doing an early exit for shared with me
items.
2020-03-07 16:46:53 +00:00
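
Roughly, the de-duplication amounts to recognising a shared-with-me item the first time it appears and skipping any further appearances via its other parents; a hypothetical, much-simplified sketch (not the drive backend's actual listing code):

```
package main

import "fmt"

// item is a hypothetical, much-simplified Drive listing entry.
type item struct {
	ID           string
	Name         string
	SharedWithMe bool
}

// dedupeShared returns each shared-with-me item once, however many of its
// parents happened to be expanded in the listing.
func dedupeShared(items []item) []item {
	seen := make(map[string]bool)
	out := items[:0]
	for _, it := range items {
		if it.SharedWithMe {
			if seen[it.ID] {
				continue // early exit: already listed via another parent
			}
			seen[it.ID] = true
		}
		out = append(out, it)
	}
	return out
}

func main() {
	in := []item{
		{ID: "1", Name: "shared.doc", SharedWithMe: true},
		{ID: "1", Name: "shared.doc", SharedWithMe: true}, // second parent
		{ID: "2", Name: "mine.txt"},
	}
	for _, it := range dedupeShared(in) {
		fmt.Println(it.Name)
	}
}
```
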
Nick Craig-Wood
38e59ebdf3 drive: fix missing files when using --fast-list and --drive-shared-with-me
This bug was introduced in

4453fa4ba6 "drive: fix --fast-list when using appDataFolder"

by removing some necessary code that detected shared with me items at
the root with no parents.

This fix reverts that part of the patch.

Fixes #4018
2020-03-07 16:46:53 +00:00
Yves G
5ee24f804f webdav: report full and consistent usage with about
— allow either Used or Available to be ==0 (remote full or empty)
— compute Total if both values are received
2020-03-05 15:10:19 +00:00
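
The quota arithmetic described above can be sketched like this (field names are illustrative, not the exact rclone types); pointers distinguish "not reported" from a genuine zero, so a full or empty remote is still accepted:

```
package main

import "fmt"

// usage is an illustrative stand-in for an about result; pointers
// distinguish "not reported" from a genuine zero.
type usage struct {
	Used      *int64
	Available *int64
	Total     *int64
}

func int64p(v int64) *int64 { return &v }

// fillTotal computes Total when both Used and Available were returned,
// while still allowing either of them to be zero (remote full or empty).
func fillTotal(u *usage) {
	if u.Used != nil && u.Available != nil {
		u.Total = int64p(*u.Used + *u.Available)
	}
}

func main() {
	u := usage{Used: int64p(0), Available: int64p(1 << 30)} // empty remote
	fillTotal(&u)
	fmt.Println("total bytes:", *u.Total)
}
```
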
Lars Lehtonen
fef2c6bf7a backend/s3: replace deprecated session.New() with session.NewSession() 2020-03-05 11:34:10 +00:00
Robert-André Mauchin
e2e400e63c Use proper import path go.etcd.io/bbolt
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
2020-03-03 12:40:52 +00:00
Nick Craig-Wood
4d8d1e287b googlephotos: fix "concurrent map write" error - fixes #4003
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.

All the other accesses have locking - this one must have been missed.
2020-03-02 18:12:46 +00:00
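
The shape of the fix is the usual mutex around map access; a minimal sketch with illustrative names, not the googlephotos backend's actual types:

```
package main

import (
	"fmt"
	"sync"
)

// uploadedTracker records which files have been uploaded; the mutex makes the
// map safe for concurrent uploaders and avoids "concurrent map write" panics.
type uploadedTracker struct {
	mu       sync.Mutex
	uploaded map[string]bool
}

func (t *uploadedTracker) markUploaded(remote string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.uploaded[remote] = true
}

func main() {
	t := &uploadedTracker{uploaded: make(map[string]bool)}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			t.markUploaded(fmt.Sprintf("album/photo-%d.jpg", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(t.uploaded), "entries recorded")
}
```
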
Nick Craig-Wood
7d70eb0346 ftp: attempt to work-around pureftp sending spurious 150 messages
pureftpd has a bug where it sends messages like this

```
    150-Accepted data connection\r\n
        Response code: File status okay; about to open data connection (150)
        Response arg: Accepted data connection
    150 32768.0 kbytes to download\r\n
    150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```

The last `150` is treated as a new response - the previous `150` should have been `150-`.

This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.

This fix ignores that specific message when it is received in the
`Close` method, and then discards the FTP connection as it is out of
sync.

See: #3984
Fixes #3445
2020-03-01 09:17:51 +00:00
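
Conceptually the workaround looks something like the sketch below (purely illustrative; the real change lives in the ftp backend and its client library): when closing a transfer, the stray 150 status is swallowed rather than reported as an error, and the connection is then thrown away because the control channel is out of sync:

```
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ftpError is a hypothetical stand-in for the FTP client's status error.
type ftpError struct {
	Code    int
	Message string
}

func (e *ftpError) Error() string { return fmt.Sprintf("%d %s", e.Code, e.Message) }

// closeTransfer ignores the spurious extra "150 ..." line that pureftpd sends
// after a download; the caller should still discard the connection afterwards
// because the control channel is out of sync.
func closeTransfer(closeErr error) (err error, discardConn bool) {
	var fe *ftpError
	if errors.As(closeErr, &fe) && fe.Code == 150 && strings.Contains(fe.Message, "seconds (measured here)") {
		return nil, true
	}
	return closeErr, closeErr != nil
}

func main() {
	spurious := &ftpError{Code: 150, Message: "0.014 seconds (measured here), 1665.27 Mbytes per second"}
	err, discard := closeTransfer(spurious)
	fmt.Println("error:", err, "discard connection:", discard)
}
```
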
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
2020-02-25 14:38:12 +00:00
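
All three lockup fixes boil down to the same pattern: hand the concurrency token back on every exit path, not only on success. A minimal sketch with a channel used as a semaphore (illustrative, not the ftp backend's actual code):

```
package main

import (
	"errors"
	"fmt"
)

// tokens is a semaphore limiting concurrent FTP operations.
var tokens = make(chan struct{}, 4)

// withToken acquires a concurrency token, runs fn, and returns the token on
// every path; before the fix an error path skipped the release, so tokens
// leaked until rclone locked up.
func withToken(fn func() error) error {
	tokens <- struct{}{}
	defer func() { <-tokens }() // always give the token back
	return fn()
}

func main() {
	for i := 0; i < 100; i++ {
		_ = withToken(func() error { return errors.New("connection failed") })
	}
	fmt.Println("still alive after 100 failures; tokens in use:", len(tokens))
}
```
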
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was canceled,
because the canceled context prevented the abort request from being sent.
2020-02-25 12:11:32 +01:00
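
The gist is that the abort call must not run on the context that has just been cancelled; a sketch with aws-sdk-go under that assumption (not the exact rclone code), issuing the abort on a fresh background context:

```
package main

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// abortUpload cleans up a multipart upload even if the upload's own context
// has already been cancelled, by issuing the abort on a fresh context.
func abortUpload(svc *s3.S3, bucket, key, uploadID string) error {
	_, err := svc.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
		Bucket:   aws.String(bucket),
		Key:      aws.String(key),
		UploadId: aws.String(uploadID),
	})
	return err
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)
	_ = abortUpload(svc, "my-bucket", "huge/file.db", "example-upload-id")
}
```
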
Aleksandar Janković
5470d34740
backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason, users
will face status 500 errors.
The main mechanism for handling these errors is retries.
The number of retries needed varies for each use case.

This change makes retries for the s3 backend configurable by using the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Previously each multipart upload allocated its own buffers, which were
garbage collected after the file upload. Subsequent files couldn't
leverage the already allocated memory, which resulted in inefficient
memory management. This change introduces a backend memory pool that
keeps memory chunks which can be reused during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
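
The underlying idea is to reuse upload buffers across objects instead of allocating and discarding them per file; a minimal sketch of a bounded chunk pool (rclone's own pool implementation is more elaborate):

```
package main

import "fmt"

// chunkPool hands out fixed-size upload buffers and takes them back after the
// part has been sent, so later objects reuse memory instead of reallocating it.
type chunkPool struct {
	buffers chan []byte
	size    int
}

func newChunkPool(chunkSize, maxBuffers int) *chunkPool {
	return &chunkPool{buffers: make(chan []byte, maxBuffers), size: chunkSize}
}

func (p *chunkPool) Get() []byte {
	select {
	case b := <-p.buffers:
		return b // reuse a previously allocated chunk
	default:
		return make([]byte, p.size)
	}
}

func (p *chunkPool) Put(b []byte) {
	select {
	case p.buffers <- b:
	default: // pool full, let the GC have it
	}
}

func main() {
	pool := newChunkPool(5*1024*1024, 8)
	buf := pool.Get()
	// ... fill buf and upload the part ...
	pool.Put(buf)
	fmt.Println("buffer capacity:", cap(buf))
}
```
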
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
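
The mechanism being extended is a simple table of HTTP status codes that mark a response as retryable; a sketch of that pattern (not the backend's exact table):

```
package main

import "fmt"

// retryErrorCodes lists HTTP statuses treated as temporary; 500 joins the
// throttling and gateway errors, per the AWS guidance quoted above.
var retryErrorCodes = []int{
	500, // Internal Server Error
	502, // Bad Gateway
	503, // Slow Down / Service Unavailable
	504, // Gateway Time-out
}

func shouldRetry(statusCode int) bool {
	for _, code := range retryErrorCodes {
		if statusCode == code {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRetry(500), shouldRetry(404)) // true false
}
```
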
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails, an error is returned and gCtx is cancelled.
This does not prevent other parts from being tried.
They immediately fail due to the canceled context, but are retried by rclone anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds fail-fast behaviour in case the context was cancelled.
2020-02-12 11:40:34 +00:00
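
A sketch of the fail-fast check using errgroup, in the spirit of the gCtx mentioned above (simplified, not the actual uploader):

```
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// uploadPart simulates one part upload; part 3 fails.
func uploadPart(ctx context.Context, n int) error {
	if n == 3 {
		return errors.New("simulated 500 InternalServerError")
	}
	return nil
}

func main() {
	g, gCtx := errgroup.WithContext(context.Background())
	for part := 1; part <= 10; part++ {
		part := part
		// Fail fast: once any part has failed, gCtx is cancelled and we stop
		// queueing further parts instead of letting them fail and be retried.
		if gCtx.Err() != nil {
			break
		}
		g.Go(func() error {
			return uploadPart(gCtx, part)
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("upload aborted:", err)
	}
}
```
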
Lars Lehtonen
3dbcf0af2d
backend/cache: Remove Unused Functions
This removes the unused functions run.writeRemoteRandomBytes(), run.writeObjectRandomBytes(), run.listPath(), Directory.parentRemote(), and Persistent.dumpRoot().
2020-02-12 11:23:57 +00:00
Lars Lehtonen
ac60b36e77 backend/premiumizeme: prune unused functions 2020-02-11 12:17:35 +00:00
Nick Craig-Wood
25cfeb2a64 webdav: Fix X-OC-Mtime header for Transip compatibility - fixes #3126 2020-02-10 11:57:12 +00:00