Commit Graph

1292 Commits

Author SHA1 Message Date
lluuaapp
35b2ca642c b2: fixed possible crash when accessing Backblaze b2 remote 2021-01-25 17:48:40 +00:00
Nick Craig-Wood
a774f6bfdb qingstor: fix rclone cleanup
This patch changes to using the default page limit for listing
unfinished multipart uploads rather than 1000. 1000 is the maximum
specified in the docs, but setting anything larger than 200 gives an
error.
2021-01-21 17:35:31 +00:00
Nick Craig-Wood
d7cd35e2ca qingstor: fix error propagation in CleanUp
Before this change errors cleaning multiple buckets were passing silently
2021-01-21 17:35:31 +00:00
buengese
eb090d3544 compress: check type assertion in SetTier - fixes #4941 2021-01-20 22:59:14 +00:00
Nick Craig-Wood
0be69018b8 drive: log that emptying the trash can take some time - fixes #4915 2021-01-19 18:09:36 +00:00
Nick Craig-Wood
9b9ab5f3e8 gcs: Fix Entry doesn't belong in directory "" (same as directory) - ignoring
This change allows directory markers to be non-zero in size.

See: https://forum.rclone.org/t/public-gcs-bucket-and-entry-doesnt-belong-in-directory-same-as-directory/21753/
2021-01-19 16:50:37 +00:00
Nick Craig-Wood
072464cbdb gcs: fix anonymous client to use rclone's HTTP client 2021-01-19 16:50:37 +00:00
buengese
45b57822d5 compress: improve testing 2021-01-18 21:42:58 +01:00
buengese
d8984cd37f compress: correctly handle wrapping of remotes without PutStream
Also fixes ObjectInfo wrapping for Hash and Size - fixes #4928
2021-01-18 21:42:58 +01:00
Patrik Nordlén
80e63af470 jottacloud: Add support for Telia Cloud (#4930) 2021-01-17 02:38:57 +01:00
Nick Craig-Wood
cef51d58ac jottacloud: fix token refresh failed: is not a regular file error
Before this change the jottacloud token renewer would run and give the
error:

    Token refresh failed: is not a regular file

This is because the refresh runs on the root and it isn't a file.

This was fixed by ignoring that specific error.

See: https://forum.rclone.org/t/jottacloud-crypt-3-gb-copy-runs-for-a-week-without-completing/21173
2021-01-12 17:09:44 +00:00
Nick Craig-Wood
e0b5a13a13 jottacloud: fix token renewer to fix long uploads
See: https://forum.rclone.org/t/jottacloud-crypt-3-gb-copy-runs-for-a-week-without-completing/21173
2021-01-11 16:44:11 +00:00
Ivan Andreev
35a4de2030 chunker: fix case-insensitive NewObject, test metadata detection #4902
- fix test case FsNewObjectCaseInsensitive (PR #4830)
- continue PR #4917, add comments in metadata detection code
- add warning about metadata detection in user documentation
- change metadata size limits, make room for future development
- hide critical chunker parameters from command line
2021-01-10 22:29:24 +03:00
Ivan Andreev
847625822f chunker: improve detection of incompatible metadata #4917
Before this patch chunker required that there be at least one
data chunk before it started checking for a composite object.

Now if chunker finds at least one potential temporary or control
chunk, it marks found files as a suspected composite object.
When later rclone tries a concrete operation on the object,
it performs postponed metadata read and decides: is this a native
composite object, incompatible metadata or just garbage.
2021-01-10 21:55:15 +03:00
Nick Craig-Wood
3877df4e62 s3: update help for --s3-no-check-bucket #4913 2021-01-10 17:54:19 +00:00
Denis Neuling
ec73d2fb9a azure-blob-storage: utilize streaming capabilities - #1614 2021-01-10 17:02:42 +00:00
kice
ef2bfb9718 onedrive: Support addressing site by server-relative URL (#4761) 2021-01-09 03:26:42 +08:00
Alex Chen
78a76b0d29 onedrive: remove % and # from the set of encoded characters (#4909)

This fixes #4700, fixes #4184, fixes #2920.
2021-01-08 12:07:17 +00:00
Nick Craig-Wood
e775328523 ftp,sftp: Make --tpslimit apply - fixes #4906 2021-01-08 10:29:57 +00:00
Nick Craig-Wood
d58fdb10db onedrive: enhance link creation with expiry, scope, type and password
This change makes the --expire flag in `rclone link` work.

It also adds the new flags

    --onedrive-link-type
    --onedrive-link-scope
    --onedrive-link-password

See: https://forum.rclone.org/t/create-share-link-within-the-organization-only/21498
2021-01-08 09:22:50 +00:00
Yury Stankevich
71edc75ca6 HDFS (Hadoop Distributed File System) implementation - #42
This includes an HDFS docker image to use with the integration tests.

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2021-01-07 09:48:51 +00:00
Alex Chen
c66b901320 onedrive: (business only) workaround to replace existing file on server-side copy (#4904) 2021-01-06 10:50:37 +08:00
Cnly
00bf40a8ef onedrive: fix server-side copy completely disabled on OneDrive for Business
This fixes a little problem in PR #4903, which is a fix for #4342
2021-01-06 02:57:51 +08:00
Alex Chen
b594cb9430 onedrive: fall back to normal copy if server-side copy unavailable (#4903)
Fixes #4342 by:
* Disabling server-side copy if either drive isn't OneDrive Personal
* Falling back to normal copy if server-side copy fails
2021-01-05 21:26:00 +08:00
Kerry Su
add7a35e55 b2: docs for download_url with private buckets
The current authentication scheme works without creating
a public download endpoint for a private bucket as in the B2 official blog.
On the contrary, if the existing authorization header gets duplicated
in the Cloudflare Workers script, one might receive 401 Unauthorized errors.
2021-01-02 11:33:48 +00:00
Nick Craig-Wood
cb97c2b0d3 azureblob: fix crash on startup
This was introduced by accidental code deletion in

08b9ede217 azureblob: add support for managed identities
2020-12-31 18:39:09 +00:00
buengese
66c3f2f31f new backend: zoho workdrive - fixes #4533 2020-12-30 17:56:08 +00:00
Nick Craig-Wood
a854cb9617 webdav: add "Depth: 0" to GET requests to fix bitrix
See: https://forum.rclone.org/t/bitrix24-de-remote-support/21112/
2020-12-30 10:14:50 +00:00
Nick Craig-Wood
ba51409c3c sftp: implement keyboard interactive authentication - fixes #4177
Some ssh servers are set up with keyboard interactive authentication
which the sftp backend was previously ignoring.
2020-12-29 19:48:09 +00:00
Nick Craig-Wood
65eee674b9 webdav: fix Open Range requests to fix 4shared mount
Before this change the webdav backend didn't truncate Range requests
to the size of the object. Most webdav providers are OK with this (it
is RFC compliant), but it causes 4shared to return 500 internal error.

Because Range requests are used in mounting, this meant that mounting
didn't work for 4shared.

This change truncates the Range request to the size of the object.

See: https://forum.rclone.org/t/cant-copy-use-files-on-webdav-mount-4shared-that-have-foreign-characters/21334/
2020-12-28 15:45:40 +00:00
Mitsuo Heijo
9ea990d5a2 azureblob: update azure-storage-blob-go to v0.12.0
See https://github.com/Azure/azure-storage-blob-go/blob/master/ChangeLog.md#version-0120
2020-12-28 13:29:38 +00:00
Brad Ackerman
08b9ede217 azureblob: add support for managed identities
Fixes #3213
2020-12-28 13:23:35 +00:00
Nguyễn Hữu Luân
6342499c47 swift: fix deletion of parts of Static Large Object (SLO)
Before this change, deleting SLO objects could leave the parts of the object behind.
2020-12-28 13:21:11 +00:00
Nick Craig-Wood
f347a198f7 azureblob: delete archive tier blobs before update if --azureblob-archive-tier-delete
Before this change, attempting to update an archive tier blob failed
with a 409 error message:

    409 This operation is not permitted on an archived blob.

This change detects if we are overwriting a blob and either generates
the error (if `--azureblob-archive-tier-delete` is not set):

    can't update archive tier blob without --azureblob-archive-tier-delete

Or deletes the blob first before uploading it again (if
`--azureblob-archive-tier-delete` is set).

Fixes #4819
2020-12-28 12:31:24 +00:00
Nick Craig-Wood
f7404f52e7 azureblob: fix crash when listing outside a SAS URL's root - fixes #4851
Before this change if you attempted to list a remote set up with a SAS
URL outside its container then it would crash the Azure SDK.

A check is done to make sure the root is inside the container when
starting the backend which is usually enough, but when two SAS URL
based remotes are mounted in a union, the union backend attempts to
read paths outside the named container. This was causing a mysterious
crash in the Azure SDK.

This fixes the problem by checking to see if the container in the
listing is the one in the SAS URL before listing the directory and
returning directory not found if it isn't.
2020-12-27 15:55:00 +00:00
kelv
9e87f5090f s3: add requester pays option - fixes #301 2020-12-27 15:43:44 +00:00
Nick Craig-Wood
bdc2278a30 alias: fix tests after parsing of ... change #4862
This was broken in ea8d13d841

    fs: Fix parsing of .. when joining remotes
2020-12-21 18:23:16 +00:00
Laurens Janssen
6ab6c8eefa gcs: Storage class object header support - fixes #3043 2020-12-10 20:06:49 +00:00
Nick Craig-Wood
cb16f42075 b2: Make NewObject use less expensive API calls
Before this change when NewObject was called the b2 backend would list
the directory containing the object in order to find it.

Unfortunately list calls are Class C transactions and cost more.

This patch switches to using HEAD requests instead. These are Class B
transactions. It is then necessary to parse the headers from the response
back into the data that we get from the listing. However B2 returns
exactly the same data, just in a different form.

Rclone will use the old directory listing method when looking for
files with versions as these can't be found via a HEAD request.

This change will particularly benefit --files-from and rclone serve
restic, but most operations will see some benefit.
2020-12-09 20:00:22 +00:00
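As an illustration of the HEAD-based approach described above, here is a minimal Go sketch (not rclone's code) that reads object metadata from the response headers of a HEAD request instead of a listing; the `X-Bz-*` header names follow B2's public download API and the URL, bucket and file name are placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// headObjectInfo shows that the metadata needed to build an object can come
// back as response headers from a single HEAD request (a Class B transaction)
// rather than from a directory listing (a Class C transaction).
func headObjectInfo(client *http.Client, downloadURL, bucket, file string) error {
	req, err := http.NewRequest("HEAD", downloadURL+"/file/"+bucket+"/"+file, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("HEAD failed: %s", resp.Status)
	}
	size, _ := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
	fmt.Println("size:", size)
	fmt.Println("sha1:", resp.Header.Get("X-Bz-Content-Sha1"))
	fmt.Println("uploaded at:", resp.Header.Get("X-Bz-Upload-Timestamp"))
	return nil
}

func main() {
	// Hypothetical public bucket and file, purely for illustration.
	_ = headObjectInfo(http.DefaultClient, "https://f000.backblazeb2.com", "example-bucket", "example.txt")
}
```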
James Lim
2fd543c989 azure: add support for service principals - fixes #3230
Before: users can only connect to Azure blob containers using the access keys
from the storage account.

After: users can additionally choose to connect to Azure blob containers
using service principals. This uses OAuth2 under the hood to exchange
a client ID and client secret for a short-lived access token.

Ref:
- https://github.com/rclone/rclone/issues/3230
- https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app?tabs=dotnet#well-known-values-for-authentication-with-azure-ad
- https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#available-authentication-types-and-methods
- https://gist.github.com/ItalyPaleAle/ec6498bfa81a96f9ca27a2da6f60a770
2020-12-09 17:52:15 +00:00
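A minimal sketch of the client-credentials exchange described above, using golang.org/x/oauth2. The tenant and client values are placeholders, the token endpoint and scope shown are the standard Azure AD v2.0 ones, and the rclone backend itself goes through the Azure SDK rather than this exact code.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	// Exchange a client ID and client secret for a short-lived access token.
	conf := &clientcredentials.Config{
		ClientID:     "YOUR_CLIENT_ID",
		ClientSecret: "YOUR_CLIENT_SECRET",
		TokenURL:     "https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token",
		Scopes:       []string{"https://storage.azure.com/.default"},
	}
	tok, err := conf.Token(context.Background())
	if err != nil {
		panic(err)
	}
	// The bearer token is then attached to Blob Storage requests.
	fmt.Println("token expires:", tok.Expiry)
}
```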
Nick Craig-Wood
50cf97fc72 sugarsync: fix NewObject for files that differ in case #4830 2020-12-07 17:38:22 +00:00
Nick Craig-Wood
4acd68188b box: fix NewObject for files that differ in case #4830 2020-12-07 17:38:22 +00:00
Nick Craig-Wood
e073720a8f dropbox: enable short lived access tokens #4792
Starting September 30th, 2021, the Dropbox OAuth flow will no longer
return long-lived access tokens. It will instead return short-lived
access tokens, and optionally return refresh tokens.

This patch adds the token_access_type=offline parameter which causes
dropbox to return short lived tokens now.
2020-12-02 16:50:16 +00:00
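A sketch of adding the extra authorization parameter the commit above describes, using golang.org/x/oauth2. The app key and redirect URL are placeholders; rclone configures this inside its dropbox backend.

```go
package main

import (
	"fmt"

	"golang.org/x/oauth2"
)

func main() {
	conf := &oauth2.Config{
		ClientID: "YOUR_APP_KEY",
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://www.dropbox.com/oauth2/authorize",
			TokenURL: "https://api.dropboxapi.com/oauth2/token",
		},
		RedirectURL: "http://localhost:53682/",
	}
	// token_access_type=offline asks Dropbox for a short-lived access token
	// plus a refresh token instead of a legacy long-lived token.
	url := conf.AuthCodeURL("state", oauth2.SetAuthURLParam("token_access_type", "offline"))
	fmt.Println(url)
}
```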
buengese
886b3abac1 compress: fix broken tests 2020-12-02 16:30:02 +01:00
Nick Craig-Wood
250f8d9371 drive: allow shortcut resolution and creation to be retried
This was an oversight in the original code - these operations should
always have been retriable.
2020-12-02 15:28:38 +00:00
Anagh Kumar Baranwal
8a429d12cf s3: Added error handling for error code 429 indicating too many requests
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-12-01 18:13:31 +00:00
Nick Craig-Wood
584523672c dropbox: test file name length before upload to fix upload loop
Before this change rclone would upload the whole of multipart files
before receiving a message from dropbox that the path was too long.

This change hard codes the 255 rune limit and checks that before
uploading any files.

Fixes #4805
2020-12-01 17:56:36 +00:00
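A minimal sketch of the pre-upload check described above: validate every path element against the 255-rune limit so an over-long name fails immediately rather than after the whole file has been transferred. This is an illustration, not rclone's exact helper.

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// maxNameLen is the 255 rune limit mentioned in the commit above.
const maxNameLen = 255

// checkPathLength rejects any path element that is too long, counted in runes.
func checkPathLength(p string) error {
	for _, part := range strings.Split(p, "/") {
		if utf8.RuneCountInString(part) > maxNameLen {
			return fmt.Errorf("path element %q exceeds %d characters", part, maxNameLen)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkPathLength("ok/name.txt"))
	fmt.Println(checkPathLength("ok/" + strings.Repeat("x", 300) + ".txt"))
}
```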
Nick Craig-Wood
a9585efd64 dropbox: make malformed_path errors from too long files not retriable
Before this change, rclone would retry files with filenames that were
too long again and again.

This change recognises the malformed_path error that is returned and
marks it not to be retried which stops unnecessary retrying of the file.

See #4805
2020-12-01 17:56:36 +00:00
Nick Craig-Wood
f6b1f05e0f dropbox: tidy repeated error message 2020-12-01 17:56:36 +00:00
Nick Craig-Wood
cc8538e0d1 gcs: fix server side copy of large objects - fixes #3724
Before this change rclone was using the copy endpoint to copy large objects.

This can fail for large objects with this error:

    Error 413: Copy spanning locations and/or storage classes could
    not complete within 30 seconds. Please use the Rewrite method

This change makes Copy use the Rewrite method as suggested by the
error message which should be good for any size of copy.
2020-11-30 16:20:30 +00:00
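A sketch of the Rewrite-based copy loop this commit describes, assuming the google.golang.org/api/storage/v1 client surface that the gcs backend is built on. Rewrite may need several calls for large cross-location copies, carrying a rewrite token between them, which is exactly what the 413 error asks for. Bucket and object names are placeholders.

```go
package main

import (
	"context"
	"log"

	storage "google.golang.org/api/storage/v1"
)

// copyObject repeats the Rewrite call until the API reports it is done,
// passing the RewriteToken from each response into the next request.
func copyObject(ctx context.Context, svc *storage.Service, srcBucket, srcObj, dstBucket, dstObj string) error {
	var token string
	for {
		call := svc.Objects.Rewrite(srcBucket, srcObj, dstBucket, dstObj, nil).Context(ctx)
		if token != "" {
			call = call.RewriteToken(token)
		}
		res, err := call.Do()
		if err != nil {
			return err
		}
		if res.Done {
			return nil
		}
		token = res.RewriteToken
	}
}

func main() {
	ctx := context.Background()
	svc, err := storage.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(copyObject(ctx, svc, "src-bucket", "a.bin", "dst-bucket", "a.bin"))
}
```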
Nick Craig-Wood
3b24a4cada yandex: set Features.WriteMimeType=false as Yandex ignores mime types
Yandex appears to ignore mime types set as part of the PUT request or
as part of a PATCH request.

The docs make no mention of being able to set a mime type, so set
WriteMimeType=false indicating the backend can't set mime types on
uploaded files.
2020-11-29 17:22:43 +00:00
Nick Craig-Wood
135adb426e filefabric: set Features.Read/WriteMimeType as both supported 2020-11-29 17:22:43 +00:00
Nick Craig-Wood
987dac9fe5 fichier: set Features.ReadMimeType=true as Object.MimeType is supported 2020-11-29 17:22:43 +00:00
Nick Craig-Wood
7fde48a805 dropbox: set Features.ReadMimeType=false as Object.MimeType not supported 2020-11-29 17:22:43 +00:00
Nick Craig-Wood
ce9028bb5b chunker: set Features.ReadMimeType=false as Object.MimeType not supported 2020-11-29 17:22:43 +00:00
buengese
52688a63c6 jottacloud: don't erroniously report support for writing mime types - fixes #4817 2020-11-29 18:11:43 +01:00
Nick Craig-Wood
bcbe393af3 sftp: implement Shutdown method 2020-11-27 17:35:01 +00:00
Nick Craig-Wood
47aada16a0 fs: add Shutdown optional method for backends 2020-11-27 17:35:01 +00:00
Nick Craig-Wood
dfadd98969 azureblob,memory,pcloud: fix setting of mime types
Before this change the backend was reading the mime type of the
destination object instead of the source object when uploading.

This changes fixes the problem and introduces an integration test for
it.

See: https://forum.rclone.org/t/is-there-a-way-to-get-rclone-copy-to-preserve-metadata/20682/2
2020-11-27 14:40:05 +00:00
Nick Craig-Wood
9d574c0d63 fshttp: read config from ctx not passed in ConfigInfo #4685 2020-11-26 16:40:12 +00:00
Nick Craig-Wood
2e21c58e6a fs: deglobalise the config #4685
This is done by making fs.Config private and attaching it to the
context instead.

The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
2020-11-26 16:40:12 +00:00
Nick Craig-Wood
979bb07c86 filefabric: Implement the Enterprise File Fabric backend
Missing features
- M-Stream support
- Oauth-like flow (soon being changed to oauth)
2020-11-25 21:11:29 +00:00
Nick Craig-Wood
dfeae0e70a Revert "sharefile: fix backend due to API swapping integers for strings"
The API seems to have reverted to what it was before

This reverts commit 095c7bd801.
2020-11-25 20:52:57 +00:00
Nick Craig-Wood
f43a9ac17e pcloud: only use SHA1 hashes in EU region
Apparently only SHA1 hashes are supported in the EU region for
pcloud. This has been confirmed by pCloud support. The EU regions also
support SHA256 hashes which we don't support yet.

https://forum.rclone.org/t/pcloud-to-local-no-hashes-in-common/19440
2020-11-25 20:46:38 +00:00
Nick Craig-Wood
76ee3060d1 s3: Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C
Before this change, small objects uploaded with SSE-AWS/SSE-C would
not have MD5 sums.

This change adds metadata for these objects in the same way that the
metadata is stored for multipart uploaded objects.

See: #1824 #2827
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
4bb241c435 s3: store md5 in the Object rather than the ETag
This enables us to set the md5 so we can cache it.

See: #1824 #2827
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
a06f4c2514 s3: fix hashes on small files with aws:kms and sse-c
If rclone is configured for server side encryption - either aws:kms or
sse-c (but not sse-s3) then don't treat the ETags returned on objects
as MD5 hashes.

This fixes being able to upload small files.

Fixes #1824
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
53aa03cc44 s3: complete sse-c implementation
This now can complete all operations with SSE-C enabled.

Fixes #2827
See: https://forum.rclone.org/t/issues-with-aws-s3-sse-c-getting-strange-log-entries-and-errors/20553
2020-11-25 12:28:02 +00:00
Nick Craig-Wood
f0905499e3 random: seed math/rand in one place with crypto strong seed #4783
This shouldn't be read as encouraging the use of math/rand instead of
crypto/rand in security-sensitive contexts, but rather as a safer default
if that does happen by accident.
2020-11-18 17:48:44 +00:00
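A minimal sketch of seeding math/rand once with a cryptographically strong seed, the approach this commit centralises in rclone's random package (the helper below is illustrative, not rclone's code).

```go
package main

import (
	cryptorand "crypto/rand"
	"encoding/binary"
	"fmt"
	mathrand "math/rand"
)

// seedMathRand reads 8 bytes from crypto/rand and uses them to seed math/rand,
// so accidental use of math/rand still gets an unpredictable seed.
func seedMathRand() error {
	var b [8]byte
	if _, err := cryptorand.Read(b[:]); err != nil {
		return err
	}
	mathrand.Seed(int64(binary.LittleEndian.Uint64(b[:])))
	return nil
}

func main() {
	if err := seedMathRand(); err != nil {
		panic(err)
	}
	fmt.Println(mathrand.Intn(100))
}
```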
Nick Craig-Wood
095c7bd801 sharefile: fix backend due to API swapping integers for strings
For some reason the API started returning some integers as strings in
JSON. This is probably OK in Javascript but it upsets Go.

This is easily fixed with the `json:"name,size"` struct tag.
2020-11-13 14:37:43 +00:00
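For reference, the standard encoding/json answer to an API that sends integers as JSON strings is the `,string` tag option. The sketch below is a general illustration of that technique, not necessarily the exact struct used in the sharefile backend.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Item decodes "size" from a quoted JSON string into an int64 via the
// ",string" tag option.
type Item struct {
	Name string `json:"name"`
	Size int64  `json:"size,string"`
}

func main() {
	var it Item
	// "size" arrives as a string, as described in the commit above.
	if err := json.Unmarshal([]byte(`{"name":"a.txt","size":"12345"}`), &it); err != nil {
		panic(err)
	}
	fmt.Println(it.Name, it.Size)
}
```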
Nick Craig-Wood
23469c9c7c ftp: add --ftp-disable-msld option to ignore MLSD for really old servers
This is useful for servers which advertise MLSD (eg some versions of
Serv-U) but don't support it properly.

See: https://forum.rclone.org/t/double-folder-names-on-target-destination-paths-ftp/18822
See: https://github.com/jlaffaye/ftp/pull/196
2020-11-13 11:25:34 +00:00
buengese
636fb5344a drive: implement CleanUp workaround for team drives - fixes #2418 2020-11-13 03:30:28 +01:00
buengese
bc4282e49e compress: added experimental compression remote - implements #2098, #1356, #675
This remote implements transparent compression using gzip. It uses JSON for storing metadata.

Co-authored-by: id01 <gaviniboom@gmail.com>
2020-11-13 02:31:59 +01:00
Manish Gupta
95d0410baa local: continue listing files/folders when a circular symlink is detected
Before this change a circular symlink would cause rclone to error out from the listings.

After this change rclone will skip a circular symlink and carry on the listing,
producing an error at the end.

Fixes #4743
2020-11-12 11:32:55 +00:00
Nick Craig-Wood
f7efce594b config: add context.Context #3257 #4685
This adds config to the Config callback in the backends and the related
config functions.
2020-11-09 18:05:54 +00:00
Nick Craig-Wood
1fb6ad700f accounting: add context.Context #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
8b96933e58 fs: Add context to fs.Features.Fill & fs.Features.Mask #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
d846210978 fs: Add context to NewFs #3257 #4685
This adds a context.Context parameter to NewFs and related calls.

This is necessary as part of reading config from the context -
backends need to be able to read the global config.
2020-11-09 18:05:54 +00:00
Nick Craig-Wood
bedf6e90d2 onedrive: warn on gateway timeout errors
It seems that when doing chunked uploads to onedrive, if the chunks
take more than 3 minutes or so to upload then they may time out with
error 504 Gateway Timeout.

This change produces an error (just once) suggesting lowering
`--onedrive-chunk-size` or decreasing `--transfers`.

This is easy to replicate with:

    rclone copy -Pvv --bwlimit 0.05M 20M onedrive:20M

See: https://forum.rclone.org/t/default-onedrive-chunk-size-does-not-work/20010/
2020-11-02 16:53:35 +00:00
Nick Craig-Wood
1973fc1ecc azureblob: update lib from v0.10.0 to v0.11.0 and fix API breakage
See: https://github.com/Azure/azure-storage-blob-go/issues/226
2020-10-29 13:34:39 +00:00
Josh Soref
0a6196716c docs: style: avoid double-nesting parens
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
a15f50254a docs: grammar: if, then
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
5d4f77a022 docs: grammar: Oxford comma
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
a089de0964 docs: grammar: uncountable: links
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
3068ae8447 docs: grammar: count agreement: files
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
67ff153b0c docs: grammar: article: a-file
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
3e1cb8302a docs: spelling: etc.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
e4a87f772f docs: spelling: e.g.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
d4f38d45a5 docs: spelling: high-speed
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
bbe7eb35f1 docs: spelling: server-side
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
edwardxml
87e54f2dde ftp: update wording for flags
Minor wording change to the help for the explicit and implicit FTPS flags to make them more consistent with each other. Adds an 's' to 'request' because only one 'client' is mentioned.
2020-10-28 15:45:52 +00:00
Ivan Andreev
be6a888e50 chunker: skip long local hashing, hash in-transit (fixes #4021)
PR 4614
2020-10-26 20:18:07 +03:00
Ivan Andreev
dad8447423 mailru: avoid prehashing of large local files
PR 4617
2020-10-26 20:16:52 +03:00
Ivan Andreev
65ff109065 mailru: accept special folders eg camera-upload
Fixes #4025
PR 4690
2020-10-26 20:04:31 +03:00
Nick Craig-Wood
cf0bdad5de union: create root directories if none exist
This fixes the TestUnion: integration test if the /tmp/union[123] dirs
don't exist.
2020-10-25 18:10:49 +00:00
albertony
ffdd0719e7 jottacloud: avoid double url escaping of device/mountpoint - fixes #4697 2020-10-20 17:43:49 +02:00
Anagh Kumar Baranwal
5b09599a23 drive: Added flag --drive-stop-on-download-limit to stop transfers when the download limit is exceeded
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-10-14 15:51:01 +01:00
Josh Soref
d0888edc0a Spelling fixes
Fix spelling of: above, already, anonymous, associated,
authentication, bandwidth, because, between, blocks, calculate,
candidates, cautious, changelog, cleaner, clipboard, command,
completely, concurrently, considered, constructs, corrupt, current,
daemon, dependencies, deprecated, directory, dispatcher, download,
eligible, ellipsis, encrypter, endpoint, entrieslist, essentially,
existing writers, existing, expires, filesystem, flushing, frequently,
hierarchy, however, implementation, implements, inaccurate,
individually, insensitive, longer, maximum, metadata, modified,
multipart, namedirfirst, nextcloud, obscured, opened, optional,
owncloud, pacific, passphrase, password, permanently, persimmon,
positive, potato, protocol, quota, receiving, recommends, referring,
requires, revisited, satisfied, satisfies, satisfy, semver,
serialized, session, storage, strategies, stringlist, successful,
supported, surprise, temporarily, temporary, transactions, unneeded,
update, uploads, wrapped

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-14 15:21:31 +01:00
Anagh Kumar Baranwal
fc5b14b620 s3: Added --s3-disable-http2 to disable http/2
Fixes #4673

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-10-13 17:11:22 +01:00
Stephen Harris
bbddadbd04 sftp: remember entered password in AskPass mode
As reported in

  https://github.com/rclone/rclone/issues/4660#issuecomment-705502792

After switching to a password callback function, if the ssh connection
aborts and needs to be reconnected then the user is re-prompted for their
password.  Instead we now remember the password they entered and just give
that back.  We do lose the ability for them to correct mistakes, but that's
the situation from before switching to callbacks.  We keep the benefits
of not asking for passwords until the SSH connection succeeds (right
known_hosts entry, for example).

This required a small refactor of how `f := &Fs{}` was built, so we can
store the saved password in the Fs object
2020-10-13 16:53:11 +01:00
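A sketch of the idea in this commit: the SSH password callback prompts the user only once and replays the remembered value when the connection has to be rebuilt. `askUser` is a hypothetical stand-in for rclone's prompt, and the config below is illustrative only.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// cachedPassword asks for the password on first use and returns the saved
// value on every reconnection after that.
func cachedPassword(askUser func() (string, error)) ssh.AuthMethod {
	var saved string
	return ssh.PasswordCallback(func() (string, error) {
		if saved != "" {
			return saved, nil
		}
		pw, err := askUser()
		if err != nil {
			return "", err
		}
		saved = pw
		return saved, nil
	})
}

func main() {
	auth := cachedPassword(func() (string, error) { return "secret", nil })
	cfg := &ssh.ClientConfig{
		User:            "demo",
		Auth:            []ssh.AuthMethod{auth},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for real use
	}
	fmt.Println(cfg.User)
}
```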
Nick Craig-Wood
7428e47ebc local: fix sizes and syncing with --links option on Windows - fixes #4581
Before this change rclone returned the size from the Stat call of the
link. On Windows this always reads as 0, however on unix it reads as
the length of the text in the link. This caused errors like this when
syncing:

    Failed to copy: corrupted on transfer: sizes differ 0 vs 13

This change causes Windows platforms to read the link and use that as
the size of the link instead of 0 which fixes the problem.
2020-10-13 16:29:56 +01:00
Dan Hipschman
70f92fd6b3 crypt: small simplification, no functionality change 2020-10-12 17:20:39 +01:00
Nick Craig-Wood
0906f8dd3b onedrive: fix disk usage for sharepoint
Some onedrive sharepoints appear to return all 0s for quota

    "quota":{"deleted":0,"remaining":0,"total":0,"used":0}

This commit detects this and returns unknown for all quota parts.

See: https://forum.rclone.org/t/zero-size-volume-when-mounting-onedrive-sharepoint/19597
2020-10-09 14:11:56 +01:00
buengese
664213cedb jottacloud: remove clientSecret from config when upgrading to token based authentication - #4645 2020-10-08 11:51:17 +02:00
Stephen Harris
9e925becb6 sftp: defer asking for user passwords until the SSH connection succeeds
Issue: 4660
    https://github.com/rclone/rclone/issues/4660

Unexpected side effect: a wrong password allows for the user to retry!
2020-10-07 12:01:17 +01:00
Anagh Kumar Baranwal
e3a5bb9b48 s3: Add missing regions for AWS
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-10-06 16:54:42 +01:00
Stephen Harris
6dc28ef50a sftp: Allow user to optionally check server hosts key to add security
Based on Issue 4087
  https://github.com/rclone/rclone/issues/4087

Current behaviour is insecure.  If the user specifies this value then we
switch to validating the server hostkey and so can detect server changes
or MITM-type attacks.
2020-10-06 16:27:42 +01:00
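A minimal sketch of opting into host key validation against a known_hosts file, the behaviour the optional setting above enables. The file path, host and credentials are placeholders.

```go
package main

import (
	"log"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

func main() {
	// Build a HostKeyCallback from a known_hosts file; unknown or changed
	// server keys cause the connection to be rejected.
	hostKeyCallback, err := knownhosts.New("/home/user/.ssh/known_hosts")
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "demo",
		Auth:            []ssh.AuthMethod{ssh.Password("secret")},
		HostKeyCallback: hostKeyCallback,
	}
	client, err := ssh.Dial("tcp", "example.com:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```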
buengese
3edc9ff0b0 jottacloud: remove DirMove workaround as it's not required anymore - also fixes #4655 2020-10-05 20:13:05 +02:00
gyutw
e7fbdac8e0 fichier: increase maximum file size from 100GB to 300GB - fixes #4634 2020-09-28 20:27:17 +02:00
Nick Craig-Wood
41ec712aa9 ftp,sftp: fix docs for usernames
- factor env.CurrentUser out of backend/sftp
- Use env.CurrentUser in ftp and sftp
- fix docs to have correct username
2020-09-27 11:44:05 +01:00
Stephen Harris
17acae2b00 sftp: allow cert based auth via optional pubkey
Discussion at
  https://forum.rclone.org/t/ssh-certificate-based-authentication-does-not-work/19222

Basically we allow the user to specify their own public key cert rather
than letting the SSH client extract the pubkey from the private key.
This allows certificate based authentication to work.
2020-09-27 11:10:13 +01:00
Ivan Andreev
d8239e0194 mailru: remove deprecated protocol quirks 2020-09-26 15:38:32 +03:00
Ivan Andreev
004c3796de chunker: disable ListR to fix missing files on GDrive (workaround #3972) 2020-09-26 15:19:16 +03:00
Ivan Andreev
18c7549770 mailru: fix invalid timestamp on corrupted files (fixes #4229) 2020-09-26 15:12:30 +03:00
Nick Craig-Wood
e5190f14ce drive: implement "rclone backend copyid" command for copying files by ID
This allows files to be copied by ID from google drive. These can be
copied to any rclone remote and if the remote is a google drive then
server side copy will be attempted.

Fixes #3625
2020-09-25 17:53:51 +01:00
buengese
60cc2cba1f sftp: always convert the checksum to lower case - fixes #4518 2020-09-22 03:15:09 +02:00
Ivan Andreev
c797494d88 Merge pull request #4608 from ivandeex/pr-chunker-crypt
chunker: fix upload over crypt (fixes #4570)
2020-09-18 17:58:44 +03:00
Ivan Andreev
8928441466 mailru: fix range requests after june changes on server 2020-09-18 17:56:34 +03:00
Ivan Andreev
0e8965060f mailru: fix uploads after recent changes on server
similar fix: 5efa9958f1
2020-09-18 17:56:34 +03:00
Christopher Stewart
f3cf6fcdd7 s3: fix spelling mistake
Fix spelling mistake "patific" => "pacific"
2020-09-18 12:03:13 +01:00
Muffin King
61fe068c90 seafile: fix accessing libraries > 2GB on 32 bit systems - fixes #4588 2020-09-15 21:55:10 +02:00
Nick Craig-Wood
6a56ac1032 vfs,local: Log an ERROR if we fail to set the file to be sparse
See: https://forum.rclone.org/t/rclone-1-53-release/18880/73
2020-09-11 15:36:47 +01:00
buengese
233bed6a73 dropbox: implement IDer - fixes #2928 2020-09-08 19:04:32 +02:00
buengese
575f061629 dropbox: add support for viewing shared files and folders 2020-09-08 19:02:35 +02:00
Evan Harris
640d7d3b4e opendrive: Do not retry 400 errors
This type of error is unlikely to be an error that can be resolved by a retry,
and is triggered in #2296 by files with a timestamp before the unix epoch.
2020-09-08 17:15:35 +01:00
wjielai
22937e8982 docs: add Tencent COS to s3 provider list - fixes #4468
* add Tencent COS to s3 provider list.

Co-authored-by: wjielai <wjielai@tencent.com>
2020-09-08 16:34:25 +01:00
Tim Gallant
c3884aafd9 drive: adds special oauth help text - fixes #4555 2020-09-07 12:48:46 +01:00
themylogin
57c10babfe drive: Remove --drive-alternate-export in favor of exportLinks
Google engineer confirms that the new official API should work properly:
https://issuetracker.google.com/issues/36761333#comment8
2020-09-02 12:16:25 +01:00
Nick Craig-Wood
725ae91387 s3: reduce the default --s3-copy-cutoff to < 5GB
The maximum value for the --s3-copy-cutoff should be 5GiB as tested
with AWS S3.

However b2 have implemented this as 5GB rather than 5GiB so having the
default at 5 GiB makes the b2s3 server side copy of a large file fail by
default.

This patch sets the default to 4768 MiB which is slightly less than
5GB.

This should have very little effect on anything.

In future rclone could lower this limit more if Copy can multithread.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
2020-09-01 18:53:29 +01:00
Nick Craig-Wood
b7dd3ce608 s3: preserve metadata when doing multipart copy
Before this change the s3 multipart server side copy was not
preserving the metadata of the object. This was most noticeable
because the modtime was not preserved.

This change fetches the metadata from the object before starting the
copy and overwrites it if required.

It will also mean any other metadata is preserved.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/70
2020-09-01 18:39:30 +01:00
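A sketch of the approach this commit describes, assuming the aws-sdk-go v1 API: read the source object's metadata first, then start the multipart copy with that metadata so it is not lost. Bucket and key names are placeholders and the part-copy loop is omitted.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// Fetch the metadata from the source object before starting the copy.
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String("src-bucket"),
		Key:    aws.String("big-object"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Start the multipart copy with the source metadata attached, so user
	// metadata (e.g. the modtime) and the content type are preserved.
	_, err = svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket:      aws.String("dst-bucket"),
		Key:         aws.String("big-object"),
		Metadata:    head.Metadata,
		ContentType: head.ContentType,
	})
	if err != nil {
		log.Fatal(err)
	}
	// ... UploadPartCopy for each part, then CompleteMultipartUpload ...
}
```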
Nick Craig-Wood
70c8566cb8 fs: Pin created backends until parents are finalized
This attempts to solve the backend lifecycle problem by

- Pinning backends mentioned on the command line into the cache
  indefinitely

- Unpinning backends when the containing structure (VFS, wrapping
  backend) is destroyed

See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834
2020-09-01 18:21:03 +01:00
Nick Craig-Wood
0d066bdf46 alias,cache,chunker,crypt: make any created backends be cached to fix rc problems
Before this change, when the above backends created a new backend they
didn't put it into the backend cache.

This meant that rc commands acting on those backends did not work.

This was fixed by making sure the backends use the backend cache.

See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834
2020-09-01 18:21:03 +01:00
Nick Craig-Wood
23c826db52 union: fix writing with the all policy - fixes #4534
Before this change writing with the all policy deadlocked while
uploading.

This change fixes the problem by fixing the multi reader, closing the
pipes at the correct time with the correct error. This is factored
into a new function as it was used twice.

This patch also adds a new test which tests the all policies.
2020-09-01 18:21:03 +01:00
Nick Craig-Wood
9cc17cec9a swift: fix missing hash from object returned from upload
Before this fix we were reading the hash from the upload using the
string "ETag", however the go runtime normalises the tag into "Etag"
so we were in fact always reading an empty string.

This bug was introduced in

aeea4430d5 swift: efficiency: slim Object and reduce requests on upload

It was spotted by the integration tests.

The fix was just to use the canonical form "Etag" instead of "ETag".
2020-09-01 16:04:32 +01:00
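A small demonstration of the gotcha fixed above: Go canonicalises header names, so the key stored in an http.Header is "Etag", not "ETag", and direct map access with the non-canonical spelling finds nothing.

```go
package main

import (
	"fmt"
	"net/http"
	"net/textproto"
)

func main() {
	fmt.Println(textproto.CanonicalMIMEHeaderKey("ETag")) // prints "Etag"

	h := http.Header{}
	h.Set("ETag", `"d41d8cd98f00b204e9800998ecf8427e"`)
	fmt.Println(h["Etag"])     // present under the canonical key
	fmt.Println(h["ETag"])     // direct map access with "ETag" finds nothing
	fmt.Println(h.Get("ETag")) // Get canonicalises, so this still works
}
```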
Nick Craig-Wood
3f0d54daae crypt: fix purge bug introduced by refactor #1891
In this commit

a2afa9aadd fs: Add directory to optional Purge interface

We failed to encrypt the directory name so the Purge failed.

This was spotted by the integration tests.
2020-09-01 15:16:14 +01:00
Aaron Gokaslan
7dcbebf9bc jottacloud: rename unused variable to _ in jottacloud.go 2020-08-31 18:11:36 +01:00
Nick Craig-Wood
068cfdaa00 drive: fix "panic: send on closed channel" when recycling dir entries
In this commit:

cbf3d43561 drive: fix missing items when listing using --fast-list / ListR

We introduced a bug where under specific circumstances it could cause
a "panic: send on closed channel".

This was caused by:

- rclone engaging the workaround from the commit above
- one of the listing routines returning an error
- this caused the `in` channel to be closed to stop the readers
- however the workaround was recycling stuff into the `in` channel at the time
- hence the panic on closed channel

This fix factors out the sending to the `in` channel into `sendJob`
and calls this both from the master go routine and the list
runners. `sendJob` detects the `in` channel being closed properly and
also deals correctly with contention on the `in` channel.

Fixes #4511
2020-08-31 11:41:15 +01:00
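A minimal sketch of the pattern the fix describes: never send straight into a channel that another goroutine may close, select against a done channel instead. As long as `done` is closed before (or when) the receivers give up, blocked senders wake up here instead of panicking on a closed channel. The types are hypothetical, not rclone's.

```go
package main

import (
	"errors"
	"fmt"
)

type listJob struct{ dir string }

// sendJob either delivers the job or returns an error if the listing has been
// torn down, avoiding a send on a closed channel.
func sendJob(in chan<- listJob, done <-chan struct{}, job listJob) error {
	select {
	case <-done:
		return errors.New("listing cancelled")
	case in <- job:
		return nil
	}
}

func main() {
	in := make(chan listJob)
	done := make(chan struct{})
	close(done) // simulate the listing being torn down after an error
	fmt.Println(sendJob(in, done, listJob{dir: "a/b"}))
}
```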
Lucas Kanashiro
b30ee57cd9 backend/local/aaaa: remove this unneeded file
This file was introduced as part of f39655093 probably by
mistake. There is no reference for this file in the local
backend directory.

Fixes #4536
2020-08-30 22:35:58 +01:00
Egor Margineanu
921e384c4d s3: update IBM COS endpoints - fixes #4522 2020-08-30 17:21:11 +01:00
aus
b6d3cad70e sftp: add options for subsystem and server_command - fixes #1801 2020-08-25 21:38:13 +01:00
Nick Craig-Wood
0f7a2f0f3c fichier: Detect Flood detected: IP Locked error and sleep for 30s
This is in an attempt to make the integration tests pass.
2020-08-23 18:01:22 +01:00
Jay McEntire
45afe97e8e drive: Added --drive-starred-only to only show starred files - fixes #3928 2020-08-21 17:30:41 +01:00
Nick Craig-Wood
fee8f21ce1 pcloud: Add example hostnames to configurator and more docs - Fixes #4493
When using `rclone authorize` the hostname doesn't get set in the
config file.

This commit allows it to be set in the configurator and gives the user
a hint that it needs setting.
2020-08-21 16:14:02 +01:00
Nick Craig-Wood
801a820c54 s3: fix detection of bucket existing
This reverts part of

151f03378f s3: fix upload of single files into buckets without create permission

This erroneously assumed that a HEAD request on a non-existent object
would return "NotFound" if the bucket was found. In fact it returns
"NotFound" when the bucket isn't found also.

This will break the fix for #4297 - however that can be made to work
using the new --s3-assume-bucket-exists flag
2020-08-21 13:28:08 +01:00
Nick Craig-Wood
2bcc66c805 drive: fix duplication of Google docs on server side copy #4517
Before this change, rclone was looking for the file without the
extension to see if it existed which meant that it never did.

This change checks that the destination file exists first, before removing
the extension.
2020-08-20 20:19:33 +01:00
Nick Craig-Wood
b5ba077a2f drive: work around drive bug which didn't set modtime of copied docs
Google drive appears to no longer be copying the modification time of
google docs.

Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.

Fixes #4517
2020-08-20 20:19:33 +01:00
Nick Craig-Wood
0931b84940 pcloud: Fix rclone link for files
This was only working for files in the root directory and wasn't
looking at the encoding.

This is fixed to use NewObject which takes both things into account
and it makes the share by ID instead of by path.

This problem was spotted by the integration tests.
2020-08-20 20:09:55 +01:00
Nick Craig-Wood
85f9bd1abf union: fix tests by looking for fs.ErrorDirNotFound in Purge and About
Before this change we errored out if one upstream errored in Purge or
About.

This change checks for fs.ErrorDirNotFound and skips that backend in
this case.
2020-08-19 18:04:16 +01:00
Nick Craig-Wood
52247e9a9f local: return fs.ErrorDirNotFound from About and Purge
Before this a stat error was returned which wasn't very helpful.
2020-08-19 18:02:21 +01:00
Nick Craig-Wood
3a14b1d5a9 build: make rclone build with wasm
Needed to drop
- azureblob backend
- cache backend
- qingstor backend
- cachestats command
- ncdu command
2020-08-10 17:32:21 +01:00
Tim Gallant
30eb094f28 oauthutil: adds SharedOptions for OAuth backends
1. adds SharedOptions data structure to oauthutil
2. adds config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality

Fixes #2849
2020-08-07 16:32:01 +01:00
Nick Craig-Wood
b401a727f7 onedrive: add --onedrive-no-versions flag to remove old versions - fixes #4106 2020-08-07 15:58:30 +01:00
Nick Craig-Wood
8eb16ce89c onedrive: implement rclone cleanup #4106 2020-08-07 15:58:30 +01:00
Nick Craig-Wood
8e7eb37456 drive: implement backend command untrash
rclone backend untrash drive:directory

This was based on: https://gitlab.com/B4dM4n/drive-untrash

See: https://forum.rclone.org/t/rclone-teamdrive-undelete/18278/3
2020-08-07 11:10:37 +01:00
Nick Craig-Wood
324077fb48 swift: fix update multipart object removing all of its own parts
After uploading a multipart object, rclone deletes any unused parts.

Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing and calling Update
was deleting the parts for the current object.

This change fixes the detection and deletes all the old parts but none
of the new ones now.

Fixes #4075
2020-08-03 14:45:03 +01:00
Nick Craig-Wood
f50ab981f7 drive: stop using root_folder_id as a cache #4419
Previous to this change rclone cached the looked up root_folder_id in
the root_folder_id config variable.

This has caused a lot of confusion and a few attempts at workarounds
and ultimately was a mistake.

This reverts rclone attempting to cache anything in root_folder_id and
returns that variable to be entirely user modified.

It gives a little hint in the debug that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.

    Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup

It does not change root_folder_id itself, leaving this to the user.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
2020-08-02 11:47:07 +01:00
Nick Craig-Wood
a2afa9aadd fs: Add directory to optional Purge interface - fixes #1891
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge

Many of the backends had been prepared in advance for this so the
change was trivial for them.
2020-07-31 17:43:17 +01:00
tyhuber1
bf355c4527 local: Add --local-no-set-modtime option to prevent modtime changes
If this option is enabled, rclone will not set modtime of uploaded files and
the backend will return ModTimeNotSupported as its Precision.

Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
rclone is copying to a CIFS mount where the user rclone is
running as does not own the file uploaded. If this option is enabled,
rclone will no longer update the modtime after copying a file.

See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
2020-07-30 16:43:17 +01:00
Nick Craig-Wood
0bab9903ee drive: factor creation of the Fs so it can be re-used in team drive listing 2020-07-28 16:24:00 +01:00
Nick Craig-Wood
700deb0a81 drive: add rclone backend drives to list shared drives (teamdrives)
See: https://forum.rclone.org/t/google-drive-remotes-team-drive-list-commend/17595
2020-07-28 16:24:00 +01:00
David
8bf265c775 box: allow authentication with access token - fixes #4114 2020-07-28 11:43:44 +01:00
Nick Craig-Wood
d5f4c74697 s3: implement cleanup and backend command to list & remove multipart uploads
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.

See #4302
2020-07-28 11:37:46 +01:00
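A sketch of what `rclone cleanup` does for S3 as described above, assuming the aws-sdk-go v1 API: list pending multipart uploads and abort the ones older than the expiry. Pagination is omitted for brevity and the bucket name is a placeholder.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	const maxAge = 24 * time.Hour
	svc := s3.New(session.Must(session.NewSession()))
	bucket := aws.String("example-bucket")

	// List unfinished multipart uploads in the bucket.
	out, err := svc.ListMultipartUploads(&s3.ListMultipartUploadsInput{Bucket: bucket})
	if err != nil {
		log.Fatal(err)
	}
	for _, up := range out.Uploads {
		if up.Initiated == nil || time.Since(*up.Initiated) < maxAge {
			continue
		}
		// Abort uploads older than the expiry.
		fmt.Printf("aborting %s (%s)\n", *up.Key, *up.UploadId)
		if _, err := svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
			Bucket:   bucket,
			Key:      up.Key,
			UploadId: up.UploadId,
		}); err != nil {
			log.Println("abort failed:", err)
		}
	}
}
```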
Nick Craig-Wood
2288a5c617 s3: implement profile and shared_credentials_file options
Before this change it was impossible to use two different profiles at
the same time - these config vars enable that.

See: https://forum.rclone.org/t/s3-source-destination-named-profile/17417
2020-07-28 11:32:32 +01:00
Nick Craig-Wood
957311f479 b2: fix transfers when using download_url
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:

    failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)

After this fix we only download by ID if download_url is not set

See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
2020-07-28 11:30:01 +01:00
Nick Craig-Wood
f406dbbb4d s3: add --s3-no-check-bucket for minimising rclone transactions and perms
Fixes #4449
2020-07-27 17:49:40 +01:00
Nick Craig-Wood
101f82c6b3 drive: drop "Disabling ListR" messages down to debug
This was causing unnecessary anguish for users since these messages are
harmless and really only interesting for debugging.

See: https://forum.rclone.org/t/rclone-gdrive-error/18098
2020-07-25 16:50:55 +01:00
Nick Craig-Wood
d35673efc6 webdav: fix directory creation with 4shared - fixes #4428
When we run MKCOL on 4shared on a directory that already exists, this
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.

The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.

However using 409 makes rclone's usual strategy for making directories
fail and return the 409 error.

This patch tries the MKCOL and if it returns an unrecognised error
code, then calls PROPFIND on the directory to discover whether the
directory really exists or not.

This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
2020-07-24 17:26:42 +01:00
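A minimal sketch of the fallback this patch describes, using plain net/http: try MKCOL, and if the server answers with an unexpected status (such as 4shared's 409), probe the collection with a Depth: 0 PROPFIND to find out whether it already exists. The URL is a placeholder and auth handling is omitted.

```go
package main

import (
	"fmt"
	"net/http"
)

// mkdir attempts MKCOL and falls back to a PROPFIND probe on an unrecognised
// error code to decide whether the directory already exists.
func mkdir(client *http.Client, url string) error {
	req, _ := http.NewRequest("MKCOL", url, nil) // error ignored for brevity
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusCreated, http.StatusOK, http.StatusNoContent:
		return nil
	}
	// Unrecognised error: does the directory exist already?
	probe, _ := http.NewRequest("PROPFIND", url, nil)
	probe.Header.Set("Depth", "0")
	presp, err := client.Do(probe)
	if err != nil {
		return err
	}
	presp.Body.Close()
	if presp.StatusCode == http.StatusMultiStatus || presp.StatusCode == http.StatusOK {
		return nil // it exists, so treat the MKCOL error as "already exists"
	}
	return fmt.Errorf("MKCOL failed: %s", resp.Status)
}

func main() {
	fmt.Println(mkdir(http.DefaultClient, "https://example.com/dav/newdir/"))
}
```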
Nick Craig-Wood
8f9d5af26d cache: remove mount tests as they aren't being run and cause maintenance issues
Before this change the cache backend contained its own routines for
mounting and testing on that mount.

These tests are never run on the CI and cause a maintenance burden.

This commit removes the tests.
2020-07-24 11:57:49 +01:00
Nick Craig-Wood
0272a7f405 mount: change interface of mount commands to take mount options
This is in preparation of being able to pass mount options to the rc
command "mount/mount"
2020-07-24 10:48:51 +01:00
Nick Craig-Wood
2871268505 mount: change interface of mount commands to take VFS
This is in preparation of being able to pass options to the rc command
"mount/mount"
2020-07-23 12:30:41 +01:00
Nick Craig-Wood
d2efb4b29b ftp: add support for --dump bodies and --dump auth
See: https://forum.rclone.org/t/rclone-copy-gives-error-connection-reset-by-peer-using-ftp/17934/27
2020-07-21 16:26:31 +01:00
Nick Craig-Wood
80d2f38192 s3: fix bucket Region auto detection when Region unset in config #2915
Previous to this fix if Region was not set and Endpoint was not set
then we set the endpoint to "https://s3.amazonaws.com/".

This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.

Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".

This fix removes that check.
2020-07-10 17:16:59 +01:00
Nick Craig-Wood
0792f4722c swift: fix purge not deleting directory markers
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.

This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
2020-07-10 15:16:11 +01:00
Nick Craig-Wood
db37360a1d swift: fix dangling large objects breaking the listing
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.

This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
2020-07-10 11:03:08 +01:00
Nick Craig-Wood
d4b2709fb0 pcloud: fix oauth on European region "eapi.pcloud.com"
Pcloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus

    GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
    GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1

This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.

Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
2020-07-03 20:38:42 +01:00
Nick Craig-Wood
e6fdc3a932 drive: make dangling shortcuts appear in listings
Previous to this a dangling shortcut would error the directory
listing.

This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
2020-07-02 22:12:44 +01:00
Nick Craig-Wood
50e36fb482 onedrive: Fix reverting to Copy when Move would have worked
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.

This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.

This is fixed by canonicalizing the drive IDs before comparing them.
2020-07-02 10:55:36 +01:00
Kai Lüke
54f2587c1e gcs: add support for anonymous access
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access and trying to write will lead to errors like this:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
2020-07-01 20:54:49 +01:00
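A sketch of anonymous, read-only access to a public bucket, the behaviour the new "anonymous" option enables; rclone wires this into its own HTTP client rather than using this exact code, and the bucket and object names are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// WithoutAuthentication skips the Application Default Credentials lookup.
	client, err := storage.NewClient(ctx, option.WithoutAuthentication())
	if err != nil {
		log.Fatal(err)
	}
	r, err := client.Bucket("public-example-bucket").Object("file.txt").NewReader(ctx)
	if err != nil {
		log.Fatal(err) // writes would fail with a 401, as noted above
	}
	defer r.Close()
	n, _ := io.Copy(io.Discard, r)
	fmt.Println("read", n, "bytes anonymously")
}
```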
Nick Craig-Wood
fefcbf60fa sftp: use the absolute path instead of the relative path
Before this change rclone used the relative path from the current
working directory.

It appears that WS FTP doesn't like this and the openssh sftp tool
also uses absolute paths which is a good reason for switching to
absolute paths.

This change reads the current working directory at startup and bases
all file requests from there.

See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
2020-06-30 16:07:23 +01:00
Nick Craig-Wood
20f4fda3c9 local: fix race conditions updating and reading Object metadata 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
7622506fe2 local: factor UNCPath into lib/file 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
c820576329 fs: define SlowModTime and SlowHash features in the relevant backends 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
2a3b377d34 azureblob: don't compile on < go1.13 after dependency update 2020-06-29 14:45:39 +01:00
Nick Craig-Wood
61ff7306ae crypt: add --crypt-server-side-across-configs flag
This can be used for changing filename encryption mode without
re-uploading data.

See: https://forum.rclone.org/t/revert-filename-encryption-method/17454/
2020-06-27 11:40:15 +01:00
Nick Craig-Wood
0bcf4769fe local: make --local-no-updated provide a consistent view of the objects
Before this change the --local-no-updated flag would not error if the
files changed in size during the transfer. The file could still be
read beyond the size advertised though which caused problems with
certain backends.

After this change we attempt to provide a consistent view of the file
once it has been opened.

Once the file has had stat() called on it for the first time we

- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file

This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.

See: https://forum.rclone.org/t/transport-connection-broken/16494/21
2020-06-27 10:00:43 +01:00
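A minimal sketch of the "consistent view" idea above: stat the file once when it is opened and never read past that size, so a file that is still growing is transferred at the length first seen. The path is a placeholder and this is an illustration rather than rclone's implementation.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("/var/log/syslog") // placeholder path to a growing file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Stat once; all further reads and checksums are capped at this size.
	fi, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	size := fi.Size()

	n, err := io.Copy(io.Discard, io.LimitReader(f, size))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("transferred %d of %d bytes\n", n, size)
}
```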
David
9058ec32e1 s3: Use regional s3 us-east-1 endpoint 2020-06-26 16:25:52 +01:00
Nick Craig-Wood
61e4b4db42 drive: Allow the use of --drive-impersonate with the root_folder_id "appDataFolder"
In this commit

5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id

We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.

This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10
2020-06-25 21:43:11 +01:00
Nick Craig-Wood
fd7c63bc78 s3: add backend restore command to restore objects from GLACIER
See: https://forum.rclone.org/t/rclone-settier-fails-with-scaleway-entitytoolarge/17384
2020-06-25 21:33:23 +01:00
Nick Craig-Wood
49a7d08a40 qingstor: cancel in progress multipart uploads on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
2c10ce64aa onedrive: rework cancel of multipart uploads on rclone exit #4300
This now uses the atexit.OnError framework rather than a home grown one.
2020-06-25 15:22:53 +01:00
Nick Craig-Wood
a41a294e1d box: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
47b17dc1bb b2: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
5f75444ef6 s3: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 12:55:56 +01:00
Nick Craig-Wood
2121c0fa23 dircache: factor DirMove code out of backends into dircache
Before this change there was lots of duplicated code in all the
dircache using backends to support DirMove.

This change factors this code into the dircache library.
2020-06-25 09:41:36 +01:00
Nick Craig-Wood
a8652e2252 dircache: simplify interface, fix corner cases and apply to backends
Dircache was changed to:

- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards

Backends were changed to:

- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root

This fixes several corner cases, for example removing a non-existent
directory if FindRoot hasn't been called.
2020-06-25 09:41:36 +01:00
Nick Craig-Wood
81151523af drive: fix shortcut tests 2020-06-24 15:52:02 +01:00
Nick Craig-Wood
0dba7b8a46 swift: speed up deletes by not retrying segment container deletes
Before this fix rclone would continually try to delete non-empty
segment containers which made deleting lots of files very slow.

This fix makes rclone just try the delete once and then carry on which
was the original intent of the code before the retry logic got put in.
2020-06-24 10:01:24 +01:00
buengese
e247811db5 jottacloud: remove debug Printf accidentally left in 2020-06-23 13:16:23 +02:00
buengese
ce767bc3cf pcloud: implement PublicLink 2020-06-21 17:22:56 +02:00
Nick Craig-Wood
a55d882b7b webdav: Fix free/used display for rclone about/df for certain backends - fixes #4348
Before this change if the server sent us xml like this

```
<D:propstat>
<D:prop>
<g0:quota-available-bytes/>
<g0:quota-used-bytes/>
</D:prop>
<D:status>HTTP/1.1 404 Not Found</D:status>
</D:propstat>
```

Rclone would read the empty XML items as containing 0

After this fix we make sure that we have a value before using it.
2020-06-20 15:15:15 +01:00
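A small sketch of the fix described above: treat an empty quota-available-bytes or quota-used-bytes value as "no value" instead of 0, only parsing it when something is actually present. This is an illustration, not the backend's exact parsing code.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseQuota returns the parsed value and whether one was actually present.
func parseQuota(text string) (int64, bool) {
	if text == "" {
		return 0, false // element was present but empty (the 404 propstat case)
	}
	v, err := strconv.ParseInt(text, 10, 64)
	if err != nil {
		return 0, false
	}
	return v, true
}

func main() {
	if v, ok := parseQuota(""); !ok {
		fmt.Println("quota unknown, skipping:", v)
	}
	if v, ok := parseQuota("1099511627776"); ok {
		fmt.Println("quota available:", v)
	}
}
```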
Nick Craig-Wood
5c5ad62208 drive: fix --drive-impersonate with cached root_folder_id
Before this fix rclone v1.51 and 1.52 would incorrectly use the cached
root_folder_id when the --drive-impersonate flag was in use. This
meant that rclone could be looking up the wrong directory ID with
unpredictable results - usually all files apparently being missing.

This fix makes rclone look up the root_folder_id always when using
--drive-impersonate. It does this by clearing the root_folder_id and
making a NOTICE message that it is ignoring the cached value.

It also stops rclone caching the root_folder_id when using
--drive-impersonate.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
2020-06-20 15:01:37 +01:00
buengese
b6b8958fb4 box: implement CleanUp - fixes #4326 2020-06-18 23:39:59 +02:00
Nick Craig-Wood
d8eea0e397 build: run gofmt -s to simplify the code: suggested by Go Report Card 2020-06-18 18:45:39 +01:00
Nick Craig-Wood
df9c930581 dropbox: fix public link by removing expires parameter
Adding the expires parameter gives settings_error/not_authorized/.. errors.

The expires setting isn't in the documentation so this commit removes
it for now.
2020-06-18 18:40:33 +01:00
Nick Craig-Wood
85bcacac90 s3: Cap expiry duration to 1 Week and return error when sharing dir 2020-06-18 17:50:50 +01:00
Nick Craig-Wood
5e6f4ab281 drive: fix creating a directory inside a shortcut
See: https://forum.rclone.org/t/cant-create-new-directory-on-google-drive-remote/17208
2020-06-17 11:32:28 +01:00
buengese
2c4f7b61c1 jottacloud: switch to new api root - fixes #4295
- also implement a very ugly workaround for the DirMove failures
2020-06-16 15:44:34 +02:00
Heiko Bornholdt
17d5a72416 ftp: add explicit tls support
Add support for explicit FTP over TLS.

Fixes #4100
2020-06-16 09:13:50 +01:00
Nick Craig-Wood
b58bb03e95 test: Don't run unreliable tests on CI #4171 2020-06-15 21:34:37 +01:00
Vincent Feltz
f4d7e41f24 s3: add Scaleway provider - fixes #4338 2020-06-13 11:55:37 +01:00
Zac Rubin
f9306218f8 sftp: Fix SSH key PEM loading
For SSH authentication, `key_pem` should both override `key_file`
and not require other SSH authentication methods to be set.

Prior to this fix, rclone would attempt to use an ssh-agent
when `key_pem` was the only SSH authentication method set.

Fixes #4240
2020-06-12 22:46:33 +01:00
Nick Craig-Wood
848c5b78e1 drive: fix not being able to delete a directory with a trashed shortcut
When we resolve the shortcut we now propagate the trashed status of
the shortcut into the resolved item which fixes the issue.
2020-06-12 15:10:35 +01:00
buengese
84d5df3c84 jottacloud: bring back legacy authentication for use with whitelabel versions - fixes #4299 2020-06-12 12:08:27 +02:00
Nick Craig-Wood
7e48ee8758 cache: fix dedupe on caches wrapping drives - fixes #4320
This implements the MergeDirs optional method.
2020-06-10 21:52:52 +01:00
Nick Craig-Wood
2ea15a72bc s3: fix --header-upload - Fixes #4303
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error

    403 Forbidden: There were headers present in the request which were not signed

After this fix we set the headers in the object upload request itself
as the s3 SDK expects.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".

This now works for both multipart and single part uploads.

See also #59
2020-06-10 12:28:48 +01:00
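A minimal usage sketch (remote name and paths are placeholders); the flag can be repeated, one supported header per flag:

    rclone copy /path/to/files s3remote:bucket/prefix \
        --header-upload "Cache-Control: max-age=31536000" \
        --header-upload "X-Amz-Meta-Project: demo"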
Cenk Alti
16422a6b78 putio: fix panic on Object.Open #4315 2020-06-10 12:16:09 +01:00
Caleb Case
40fe97e946 backend/tardigrade: Set UserAgent to rclone
This provides two things:

* It gives Storj insight into which uplink clients are using the
  network.
* It facilitates rclone participating in the Tardigrade Open Source
  Partner Program https://tardigrade.io/partner/
2020-06-09 14:20:28 +01:00
Kamil Trzciński
7458d37d2a
s3: add max_upload_parts support - fixes #4159
* s3: add `max_upload_parts` support

This allows configuring the maximum number of chunks used to upload a file:

- Support Scaleway which has a limit of 1k chunks currently
- Reduce cost on S3, where each request costs money, at the expense of memory used

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2020-06-08 18:22:34 +01:00
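A hypothetical rclone.conf entry capping multipart uploads for a provider with a low part limit (remote name and endpoint are assumptions):

    [scw]
    type = s3
    provider = Scaleway
    endpoint = s3.fr-par.scw.cloud
    # Scaleway currently limits multipart uploads to 1000 parts
    max_upload_parts = 1000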
Roman Kredentser
c0521791db s3: implement link sharing with PublicLink 2020-06-05 14:51:05 +01:00
Roman Kredentser
55ad1354b6 link: Add --expire and --unlink flags
This adds expire and unlink fields to the PublicLink interface.

This fixes up the affected backends and removes unlink parameters
where they are present.
2020-06-05 14:51:05 +01:00
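Usage sketches for the new flags (remote and path are placeholders):

    # create a public link that expires after one day
    rclone link --expire 1d remote:path/to/file.txt
    # remove existing public links to the file
    rclone link --unlink remote:path/to/file.txt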
Nick Craig-Wood
fb61ed8506 b2: Implement server side copy for files > 5GB - fixes #3991
This factors copy out of SetModTime and Copy so it can be called from
both places.

This also reworks all the multipart uploading to use sync.Errgroup and
memory pooling like the other backends. This makes it more memory
efficient and handle errors better.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/10
2020-06-05 13:27:53 +01:00
Nick Craig-Wood
973e3d6a7b backends: make sure backends expand ~ and environment vars in file names they use
See: https://forum.rclone.org/t/relative-path-in-rclone-config-service-account-json/16693
2020-06-03 17:39:08 +01:00
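For example, a drive remote can now reference its credentials relative to the home directory (remote name and path are illustrative; an environment variable reference such as ${HOME}/secrets/sa.json is assumed to expand the same way):

    [gdrive]
    type = drive
    # expanded to e.g. /home/user/secrets/sa.json
    service_account_file = ~/secrets/sa.json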
Nick Craig-Wood
151f03378f s3: fix upload of single files into buckets without create permission
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave an AccessDenied: Access
Denied error when it tried to create the bucket.

This was masked until e2bf91452a was
fixed.

This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.

Fixes #4297
2020-06-02 14:33:21 +01:00
Nick Craig-Wood
cbf3d43561 drive: fix missing items when listing using --fast-list / ListR
This is caused by a bug in Google drive where, in some circumstances
querying for "(A in parents) or (B in parents)" returns nothing
whereas querying for "A in parents" and "B in parents" separately
works fine.

This has been reported here:

https://issuetracker.google.com/issues/149522397

This workaround detects this condition by seeing if a listing for more
than one directory at once returns nothing.

If it does then it retries each one individually.

This can potentially have a false positive if the user has multiple
empty directories which are queried at once. The consequence of this
will be that ListR is disabled for a while until the directories are
found to be actually empty in which case it will be re-enabled.

Fixes #3114 and Fixes #4289
2020-05-31 11:44:15 +01:00
Nick Craig-Wood
74b8cbfb84 docs: set unsafe HTML parsing to false and fix raw HTML insertion
This means that markdown files can't contain <thing> any more.
2020-05-27 17:31:09 +01:00
Nick Craig-Wood
78ca08ba8a pcloud: fix initial config "Auth state doesn't match" message #4210
pCloud should be passing back the state parameter that rclone passed
in on config but it seems to have got lost somewhere.

This sets a work-around for the pCloud backend allowing an empty state
parameter.

See: https://forum.rclone.org/t/cannot-connect-to-pcloud/16592
See: https://forum.rclone.org/t/cannot-create-pcloud-config-file-on-osx/16583
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
49ba4eeb86 oauthutil: tidy interface to Config to add Options struct
The interface was getting to the point where a new function was needed
for every Config variant. Adding an Options struct fixes this.
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
c08617c70f box: Calculate Free amount in About call 2020-05-25 16:47:34 +01:00
Martin Michlmayr
041b201abd doc: fix typos throughout docs and code 2020-05-25 11:23:58 +01:00
Nick Craig-Wood
9db8ecbc32 box: implement About to read size used - fixes #4264 2020-05-23 18:46:44 +01:00
Martin Michlmayr
a36ef8582f doc: use consistent capitalization 2020-05-20 15:54:51 +01:00
Martin Michlmayr
f34a40a709 swift: fix cosmetic issue in error message 2020-05-20 15:54:51 +01:00
Martin Michlmayr
4aee962233 doc: fix typos throughout docs and code 2020-05-20 15:54:51 +01:00
Fred
5f71d186b2 seafile: implement 2FA 2020-05-20 15:46:35 +01:00
Nick Craig-Wood
cf5d0f5c1f Revert "drive: server side copy docs use default description if empty"
This reverts commit 9e4b68a364.

This does not work as intended - it only changes docs files, and
making it change drive files would take an extra roundtrip.

I think the semantics of server side copy are now correct - additional
features should be added with a new flag.

See #4230
2020-05-19 16:48:02 +01:00
Nick Craig-Wood
bdafbad61e cache: fix tests writing to empty path
This meant the tests were writing to the current directory instead of
a temporary directory.
2020-05-19 16:01:35 +01:00
Brandon McNama
19ff7c9302 cache: Fix Server Side Copy with Temp Upload
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.

Fixes #3206
2020-05-19 12:17:40 +01:00
Martin Michlmayr
fb169a8b54
doc: fix typos throughout docs 2020-05-19 12:02:44 +01:00
calisro
bcbfad1482
sftp: added --sftp-pem-key to support inline key files 2020-05-19 11:55:38 +01:00
Nick Craig-Wood
610f40f700 local: implement --local-no-sparse flag for disabling sparse files #2469
This also introduces a one time warning for sparse files and updates
the docs to warn about them.
2020-05-19 10:16:43 +01:00
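A usage sketch (paths are placeholders) disabling sparse files when the local disk is the destination:

    rclone copy remote:backup /mnt/usb/backup --local-no-sparse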
Brandon Philips
633f50cd3e
googlephotos: create feature/favorites directory - Fixes #4189
Enable access to “Favorite” images on the Google Photos backend.

This adds a “feature/favorites” folder in the Google Photos backend
and uses the Feature Filter API:

https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#Filters
2020-05-18 17:55:16 +01:00
Nick Craig-Wood
e4f1e19127 sftp: fix post transfer copies failing with 0 size when using set_modtime=false
Before this change we exited the SetModTime call early, which meant we
skipped reading the info about the file.

This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.

See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362
2020-05-14 17:30:01 +01:00
Nick Craig-Wood
4a1b644bfb azureblob: implement streaming of unknown sized files
See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286/3
2020-05-14 11:56:15 +01:00
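This enables piping data of unknown length straight into blob storage, for example (remote and container names are placeholders):

    tar czf - /var/www | rclone rcat azblob:backups/www.tar.gz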
Nick Craig-Wood
8c9c86c3d6 putio: fix parsing of remotes with leading and trailing /
See: https://forum.rclone.org/t/unable-to-copy-from-remote-but-mount-works/16351/
2020-05-14 11:52:43 +01:00
Nick Craig-Wood
8a58e0235d s3: don't leak memory or tokens in edge cases for multipart upload 2020-05-14 07:48:18 +01:00
Nick Craig-Wood
52b7337d28 crypt: change backend encode/decode to output a plain list
This commit changes the rclone backend encode crypt: and decode
commands to output a plain list of encoded or decoded file
names.

This makes the command much more useful for command line scripting.
2020-05-13 18:11:45 +01:00
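A usage sketch, assuming a crypt remote named secret:; each output line is the encrypted (or decrypted) form of the corresponding argument, so it pipes cleanly into scripts (the decode argument is a placeholder):

    rclone backend encode secret: file1.txt dir/file2.txt
    rclone backend decode secret: <encrypted-name>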
Max Sum
33d9310c49
union: enable ListR when upstreams contain local
Enable fast list functions for the union backend when:

- at least one of the upstreams supports fast list, and
- the upstreams consist only of backends that support fast list and the local backend.

Fixes #3000
2020-05-13 13:10:35 +01:00
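A config sketch that keeps ListR enabled under these rules (remote names and paths are assumptions): the s3 upstream supports fast list and the only upstream without it is the local one:

    [combined]
    type = union
    upstreams = /srv/data s3remote:bucket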
Nick Craig-Wood
9e4b68a364 drive: server side copy docs use default description if empty
When server side copying Google docs files we attempt to preserve the
description.

This patch makes it so that we use the default description if the
original description was empty.

See: 6fdd7149c1 (commitcomment-38008638)
2020-05-13 12:31:37 +01:00
Nick Craig-Wood
d342f9f942 azureblob: fix permission error on SAS URL limited to container
Before this change, for some operations, eg rcat or copyto (of a file)
rclone would attempt to create the container when using a SAS URL
limited to a container.

After this change we assume the container does not need creating when
using a container SAS URL.

See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286
2020-05-13 09:11:51 +01:00
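A hypothetical container-scoped SAS config; with this change, rcat or copyto into it no longer attempts container creation (account, container and token are placeholders):

    [azcontainer]
    type = azureblob
    sas_url = https://myaccount.blob.core.windows.net/mycontainer?sv=2019-12-12&sig=...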
Nick Craig-Wood
8ddb3fbb2e drive: fix using list recursive on shortcuts to directories 2020-05-12 17:08:05 +01:00
Nick Craig-Wood
b91e01fd22 drive: strip trailing slashes in shortcut command #4098
This also fixes a typo in the name of the function, and allows making
shortcuts from the root directory, which is useful in cross drive
shortcut creation.

This also adds a basic suite of tests for creating, listing and
removing shortcuts.
2020-05-12 17:08:05 +01:00
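A usage sketch, assuming the argument order is source item then destination shortcut (paths are placeholders):

    rclone backend shortcut drive: existing/dir shortcuts/dir-link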
Caleb Case
0ce662faad Tardigrade Backend 2020-05-12 15:56:50 +00:00
Max Sum
54b16bd054 union: implement ListR 2020-05-10 17:57:03 +00:00
Max Sum
f21e97001b union: fix server-side copy 2020-05-10 17:56:18 +00:00
Nick Craig-Wood
bb65974e2f drive: implement backend shortcut command for creating shortcuts #4098 2020-05-09 15:16:15 +01:00
Nick Craig-Wood
bc0f487369 drive: look for dirs as well as files on NewObject
This means that we can return ErrorNotAFile when there is an object
with the same name as a directory rather than potentially creating a
duplicate name.
2020-05-09 15:16:15 +01:00
Fred
c754e89906 seafile: New backend for seafile server 2020-05-06 17:33:22 +00:00
Nick Craig-Wood
afde340c9e gcs: fix --header-upload - #59
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them.

After this fix we set the headers in the object upload request itself.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
2020-05-06 17:34:23 +01:00
Anagh Kumar Baranwal
a86196a156 drive: Added command to change service_account_file and chunk_size
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Anagh Kumar Baranwal
856c2b565f crypt: Added decode/encode commands to replicate functionality of cryptdecode
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Nick Craig-Wood
14cab0fff0 local: fix "file not found" errors on post transfer Hash calculation
Before this change the local backend was returning file not found
errors for post transfer hashes for files which were moved. This was
caused by the routine which checks whether the object has changed.

After this change we ignore file not found errors while checking to
see if the object has changed. If the hash has to be computed then a
file not found error will be thrown when it is opened, otherwise the
cached hash will be returned.
2020-05-04 12:17:46 +01:00
Nick Craig-Wood
f2b1fedc4f drive: follow shortcuts by default, skip with --drive-skip-shortcuts
Before this change rclone would skip all shortcuts with a message

    Ignoring unknown document type "application/vnd.google-apps.shortcut"

After this change rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.

The --drive-skip-shortcuts flag can be used to skip shortcuts.
2020-05-02 18:28:38 +01:00
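A usage sketch restoring the old behaviour for a single listing (remote and path are placeholders):

    rclone lsl drive:some/folder --drive-skip-shortcuts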
Nick Craig-Wood
b52a39a84e drive: fix merge breakage
In 2f5a2d3c48 an incorrect merge caused compilation to fail
2020-05-01 13:02:32 +01:00
Nick Craig-Wood
2f5a2d3c48 drive: Don't return nil Object with nil error from newObject* functions.
Before this change the newObject* functions could return object=nil
with err=nil. The results of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.

After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. The one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean skip the
directory item.

This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
2020-04-30 17:11:36 +01:00
Nick Craig-Wood
74d9dabdff b2: force the case of the SHA1 to lowercase - fixes #4162
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
2020-04-29 17:08:21 +01:00
Nick Craig-Wood
90d738b561 cache: implement rclone backend stats command 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
e2916f3a55 local: implement backend command "noop" for testing purposes 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
37a53570d4 azureblob: implement memory pooling to control memory use
This commit implements memory pooling to control excessive memory use
as was implemented in the s3 backend.
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
ee7219aa20 azureblob: add --azureblob-disable-checksum flag 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
b1d8da484b azureblob: retry InvalidBlobOrBlock error as it may indicate block concurrency problems
According to Microsoft support this error can be caused by

> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.

See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
4e869e03f7 s3: improve docs for --s3-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
52c9647b06 b2: improve docs for --b2-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
551a829eba googlephotos: don't put an image in error message - fixes #4144
For a certain class of broken or missing images Google Photos puts an
image in the error message.

Before this fix we blindly chucked it into the error message.

After this fix we replace it with some sensible text.
2020-04-28 16:51:47 +01:00
Adam Stroud
8e91f83174 googlecloudstorage: Add ARCHIVE storage class to help 2020-04-27 11:40:21 +01:00
buengese
7f776c64f0 fichier: implement custom pacer to deal with the new rate limiting 2020-04-26 20:38:56 +02:00
David
0c0ed2fe04 box: Remove unnecessary iat from jws claims 2020-04-23 17:52:14 +01:00
Nick Craig-Wood
ab6ed256e5 putio: add support for --header-upload and --header-download #59 2020-04-23 15:55:52 +01:00
Nick Craig-Wood
7c98ecd3ab putio: make downloading files use the rclone http Client
This fixes `--download-header` and these transactions being missed from
`--dump bodies` or `--tpslimit`
2020-04-23 15:48:30 +01:00
Nick Craig-Wood
b502a74cff gcs: add support for --header-upload and --header-download #59 2020-04-23 11:41:57 +01:00
Nick Craig-Wood
8e9c25063a swift: add support for --header-upload and --header-download #59 2020-04-23 11:34:36 +01:00
Tim Gallant
c390fc8100 onedrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
14f6ce1e77 premiumizeme: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
385542e2f9 sharefile: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
fc946d0c44 fichier: pass options to rest.Opts for uploadFile 2020-04-23 11:07:21 +01:00
Tim Gallant
854c84d0ca pcloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
90bd0eb44c webdav: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
3130f870bb sugarsync: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
51b617f601 yandex: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
011ca244b2 jottacloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
9ea1361044 googlephotos: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
776966e22c opendrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
01cb256b84 box: pass options to rest.Opts for uploadPart 2020-04-23 11:07:21 +01:00
Tim Gallant
0b0163dde2 box: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
38123c70eb b2: pass options to rest.Opts for Update 2020-04-23 11:07:21 +01:00
Tim Gallant
5cb7229a16 s3: add support for HTTPOption 2020-04-23 11:07:21 +01:00
Nick Craig-Wood
f8039deb7c s3: fix detection of BucketAlreadyOwnedByYou and BucketAlreadyExists error
This was being silently ignored until this commit

e2bf91452a s3: report errors on bucket creation (mkdir) correctly
2020-04-22 18:14:03 +01:00
Sunil Patra
39319b4858 @Sunil-P
box: Added support for interchangeable root folder for Box backend - #3422
2020-04-22 17:00:13 +01:00
Sunil Patra
4af5c9aed7 pCloud: Added support for interchangeable root folder for pCloud backend - #3957 2020-04-22 16:58:01 +01:00
David Bramwell
8a3c4c6a7b
box: add token renew function for jwt auth - Fixes #4901 2020-04-22 16:53:03 +01:00
Nick Craig-Wood
1648c1a0f3 crypt: calculate hashes for uploads from local disk
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed, which is too resource intensive.

However this causes backends which need the hash before
uploading (eg s3/b2 when uploading chunked files) not to have a hash
of the file. This causes cryptcheck to complain about missing hashes
on large files uploaded via s3/b2.

This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than calculating the hash.

See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes: #2809
2020-04-22 11:33:48 +01:00