Commit Graph

1593 Commits

Lesmiscore
67240bd541
sftp: fix directory creation races
Before this change, if mkdir failed the error was returned to the
user.

After this change, if the error indicates that the directory
already exists then it is not returned to the user.

This fixes a race condition when two rclone threads try to
create the same directory.
2022-09-14 16:45:35 +01:00
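
A minimal sketch of the pattern described above, assuming the github.com/pkg/sftp client used by the sftp backend (the helper name is hypothetical; this is not rclone's actual code):

    package example

    import "github.com/pkg/sftp"

    // mkdirIgnoreExisting creates dir but treats "already exists" as success,
    // so two concurrent callers racing to create the same directory both succeed.
    func mkdirIgnoreExisting(c *sftp.Client, dir string) error {
        err := c.Mkdir(dir)
        if err == nil {
            return nil
        }
        // Another thread may have created the directory first: if it now
        // exists as a directory, swallow the error.
        if fi, statErr := c.Stat(dir); statErr == nil && fi.IsDir() {
            return nil
        }
        return err
    }
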
Øyvind Heddeland Instefjord
91f8894285 ftp: Add force_list_hidden option
Forces the use of the `LIST -a` command when listing a directory,
which should list all hidden folders and files.
2022-09-14 12:10:58 +01:00
Nick Craig-Wood
bd787e8f45 filter: Fix incorrect filtering with UseFilter context flag and wrapping backends
In this commit

8d1fff9a82 local: obey file filters in listing to fix errors on excluded files

We started using filters in the local backend so the user could short
circuit troublesome files/directories at a low level.

However this caused a number of integration tests to fail. This turned
out to be in backends wrapping the local backend. For example the
combine backend test failed because it changes the paths passed to the
local backend so they no longer match the paths in the current filter.

To fix this, a new feature flag `FilterAware` was added and the
UseFilter context flag is only passed to backends which support it. As
the wrapping backends don't support the flag, this fixes the problems
in the integration tests.

In future the wrapping backends could modify the active filters to
match the path modifications and then they could set the FilterAware
flag.

See #6376
2022-09-05 16:19:50 +01:00
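
A rough sketch of the gating described above, with entirely hypothetical names (the real flag lives in rclone's feature and filter machinery): the UseFilter context value is only set when the target backend advertises that it is filter aware.

    package example

    import "context"

    type ctxKey string

    const useFilterKey ctxKey = "useFilter"

    // useFilterContext returns a context carrying the UseFilter flag only when
    // the backend advertises filter awareness; wrapping backends that don't set
    // filterAware keep the plain context and see unfiltered listings.
    func useFilterContext(ctx context.Context, filterAware bool) context.Context {
        if !filterAware {
            return ctx
        }
        return context.WithValue(ctx, useFilterKey, true)
    }
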
Nick Craig-Wood
d08ed7d1e9 ftp: add notes on how to avoid deadlocks with concurrency - fixes #6370 2022-09-05 12:11:06 +01:00
albertony
35349657cd docs/sftp: document use of chunk_size option in sftp remote paired with serve sftp
Related to 0008cb4934
2022-08-31 00:04:04 +02:00
Josh Soref
ce3b65e6dc all: fix spelling across the project
* abcdefghijklmnopqrstuvwxyz
* accounting
* additional
* allowed
* almost
* already
* appropriately
* arise
* bandwidth
* behave
* bidirectional
* brackets
* cached
* characters
* cloud
* committing
* concatenating
* configured
* constructs
* current
* cutoff
* deferred
* different
* directory
* disposition
* dropbox
* either way
* error
* excess
* experiments
* explicitly
* externally
* files
* github
* gzipped
* hierarchies
* huffman
* hyphen
* implicitly
* independent
* insensitive
* integrity
* libraries
* literally
* metadata
* mimics
* missing
* modification
* multipart
* multiple
* nightmare
* nonexistent
* number
* obscure
* ourselves
* overridden
* potatoes
* preexisting
* priority
* received
* remote
* replacement
* represents
* reproducibility
* response
* satisfies
* sensitive
* separately
* separator
* specifying
* string
* successful
* synchronization
* syncing
* šenfeld
* take
* temporarily
* testcontents
* that
* the
* themselves
* throttling
* timeout
* transaction
* transferred
* unnecessary
* using
* webbrowser
* which
* with
* workspace

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-08-30 11:16:26 +02:00
YFdyh000
b5818454f7
onedrive: cleanup brand name 2022-08-30 10:23:29 +02:00
albertony
555def2da7 build: add package comments to silence revive linter 2022-08-28 13:43:51 +02:00
João Henrique Franco
85eb9776bd
crypt: fix typo in comment
strign -> string
2022-08-22 10:43:54 +02:00
Nick Craig-Wood
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
Fixes #6376
2022-08-11 12:23:06 +01:00
Nick Craig-Wood
1ad22b8881 gcs: add --gcs-endpoint flag and config parameter
See: https://forum.rclone.org/t/how-to-modify-google-cloud-storage-endpoint-uri/32342
2022-08-09 17:33:21 +01:00
Nick Craig-Wood
0501773db1 azureblob,b2,s3: fix chunksize calculations producing too many parts
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size, to calculate the chunk sizes.

This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.

This fix changes the calculator to take the size explicitly.
2022-08-09 12:57:38 +01:00
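
The idea behind the fix can be sketched as follows (illustrative names, positive inputs assumed): the calculator must be given the size of the object actually being uploaded so that the chunk size grows until the upload fits within the part limit.

    package example

    // calculateChunkSize doubles the default chunk size until size bytes fit
    // within maxParts parts. Passing the new object's size (not the previous
    // object's) is what the fix above is about. Illustrative sketch only;
    // assumes positive inputs.
    func calculateChunkSize(size, defaultChunkSize, maxParts int64) int64 {
        chunkSize := defaultChunkSize
        for size/chunkSize >= maxParts {
            chunkSize *= 2
        }
        return chunkSize
    }
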
Nick Craig-Wood
d347ac0154 local: disable xattr support if the filesystems indicates it is not supported
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.

This patch makes rclone detect the not supported errors and disable
xattrs from then on. It prints one ERROR level message about this.

See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
2022-08-09 09:27:56 +01:00
Nick Craig-Wood
ebe86c6cec s3: add --s3-decompress flag to download gzip-encoded files
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.

If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.

If --s3-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.

See #2658
2022-08-05 16:45:23 +01:00
Nick Craig-Wood
4b981100db s3: refactor to use generated code instead of reflection to copy structs 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
4344a3e2ea s3: implement --s3-version-at flag - Fixes #1776 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
1542a979f9 s3: refactor f.list() to take an options struct as it had too many parameters 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
81d242473a s3: implement Purge to purge versions and backend cleanup-hidden 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
0ae171416f s3: implement --s3-versions flag - See #1776 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
a59fa2977d s3: factor different listing versions into separate objects 2022-08-05 16:42:30 +01:00
Nick Craig-Wood
7243918069 s3: implement backend versioning command to get/set bucket versioning 2022-08-05 16:42:30 +01:00
Nick Craig-Wood
6fd9e3d717 build: reformat comments to pass go1.19 vet
See: https://go.dev/doc/go1.19#go-doc
2022-08-05 16:35:41 +01:00
Nick Craig-Wood
918bd6d3c3 dropbox: fix ChangeNotify "unable to decrypt" errors
Before this fix, the dropbox backend wasn't decoding the file names
received in changenotify events into rclone standard format.

This meant that changenotify events for filenames which had encoded
characters were failing to be decrypted properly if wrapped in crypt.

See: https://forum.rclone.org/t/rclone-vfs-cache-says-file-name-too-long/31535
2022-08-04 10:26:25 +01:00
Nick Craig-Wood
821e084f28 combine: fix errors with backends shutting down while in use
Before this patch backends could be shut down when they fell out of
the cache while still in use by combine. This was particularly
noticeable with the dropbox backend, which gave this error when
uploading files after the backend was shut down:

    Failed to copy: upload failed: batcher is shutting down

This patch gets the combine remote to pin them until it is finished.

See: https://forum.rclone.org/t/rclone-combine-upload-failed-batcher-is-shutting-down/32168
2022-08-04 10:13:41 +01:00
Nick Craig-Wood
3a8e52de74 dropbox: fix infinite loop on uploading a corrupted file
Before this change, if rclone attempted to upload a file which read
more bytes than the size it declared then the uploader would enter an
infinite loop.

See: https://forum.rclone.org/t/transfer-percentages-100-again/32109
2022-07-29 17:40:05 +01:00
albertony
72227a0151 jottacloud: do not store username in config when using standard auth
Previously, with standard auth, the username would be stored in config - but only after
entering the non-standard device/mountpoint sequence during config (a feature introduced
with #5926). Regardless of that, rclone always requests the username from the api at
startup (NewFS).

In #6270 (commit 9dbed02329) this was changed to always
store username in config (consistency), and then also use it to avoid the repeated
customer info request in NewFs (performance). But, as reported in #6309, it did not work
with legacy auth, where user enters username manually, if user entered an email address
instead of the internal username required for api requests. This change was therefore
recently reverted.

The current commit takes another step back to not store the username in config during
the non-standard device/mountpoint config sequence (consistency). The username will
now only be stored in config when using legacy auth, where it is an input parameter.
2022-07-25 18:23:09 +01:00
Nick Craig-Wood
9f40cb114a Revert "jottacloud: always store username in config and use it to avoid initial api request"
This reverts commit 9dbed02329.

See: #6309
2022-07-25 18:23:09 +01:00
Lesmiscore
2f461f13e3 internetarchive: handle hash symbol in the middle of filename 2022-07-22 13:08:42 +01:00
albertony
f0396070eb sftp: fix issue with WS_FTP by working around failing RealPath 2022-07-20 18:07:50 +01:00
Steve Kowalik
9b76434ad5
drive: make --drive-stop-on-upload-limit obey quota exceeded error
Extend the shouldRetry function by also checking for the quotaExceeded
reason, and since this function appeared to be untested, add a test case
for the existing errors and this new one.

Fixes #615
2022-07-20 10:37:34 +01:00
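
The extension described above might look roughly like this, using the googleapi error type from the drive SDK (an illustrative sketch; rclone's shouldRetry checks more conditions):

    package example

    import "google.golang.org/api/googleapi"

    // isQuotaExceeded reports whether err is a Google API error carrying the
    // quotaExceeded reason, so --drive-stop-on-upload-limit can stop retrying.
    func isQuotaExceeded(err error) bool {
        gerr, ok := err.(*googleapi.Error)
        if !ok {
            return false
        }
        for _, item := range gerr.Errors {
            if item.Reason == "quotaExceeded" {
                return true
            }
        }
        return false
    }
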
Nick Craig-Wood
440d0cd179 s3: fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
In

22abd785eb s3: implement reading and writing of metadata #111

The reading of object information was refactored to use the
s3.HeadObjectOutput structure.

Unfortunately the code branch with `--s3-no-head` was not tested
otherwise this panic would have been discovered.

This shows that this path is not integration tested, so this adds a
new integration test.

Fixes #6322
2022-07-18 23:38:50 +01:00
Yen Hu
03d0f331f7
onedrive: rename Onedrive(cn) 21Vianet to Vnet Group
The old site has shown a redirect page to the new one since 2021-04-21.
https://www.21vianet.com
The official site has also been renamed to Vnet Group.
https://www.vnet.com/en/about
2022-07-17 17:07:23 +01:00
Lesmiscore
049674aeab backend/internetarchive: ignore checksums for files using the different method 2022-07-17 14:02:40 +01:00
Nick Craig-Wood
50f053cada dropbox: fix hang on quit with --dropbox-batch-mode off
This problem was caused by the fact that we are now much more diligent
about calling Shutdown, and the dropbox backend's Shutdown method hung
when the batch mode was "off".

See: https://forum.rclone.org/t/dropbox-lsjson-in-1-59-stuck-on-commiting-upload/31853
2022-07-17 12:51:44 +01:00
r-ricci
67fd60275a
union: fix panic due to misalignment of struct field in 32 bit architectures
`FS.cacheExpiry` is accessed through sync/atomic.
According to the documentation, "On ARM, 386, and 32-bit MIPS, it is
the caller's responsibility to arrange for 64-bit alignment of 64-bit
words accessed atomically. The first word in a variable or in an
allocated struct, array, or slice can be relied upon to be 64-bit
aligned."
Before commit 1d2fe0d856 this field was
aligned, but then a new field was added to the structure, causing the
test suite to panic on linux/386.
No other field is used with sync/atomic, so `cacheExpiry` can just be
placed at the beginning of the struct to ensure it is always aligned.
2022-07-11 18:34:06 +01:00
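
A minimal sketch of the layout rule described above (field names illustrative): the 64-bit field accessed with sync/atomic goes first in the struct so it is guaranteed to be 64-bit aligned on 32-bit platforms.

    package example

    import "sync/atomic"

    // Fs sketches the rule: cacheExpiry is accessed atomically, so it must be
    // the first word of the struct to be 64-bit aligned on ARM, 386 and
    // 32-bit MIPS.
    type Fs struct {
        cacheExpiry int64 // accessed via sync/atomic - keep first
        name        string
        // ... other fields follow
    }

    func (f *Fs) expire(t int64) {
        atomic.StoreInt64(&f.cacheExpiry, t)
    }
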
Nick Craig-Wood
b310490fa5 union: fix multiple files being uploaded when roots don't exist
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-11 18:19:36 +01:00
Nick Craig-Wood
0ee0812a2b union: fix duplicated files when using directories with leading /
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-11 18:19:36 +01:00
Nick Craig-Wood
9c6cfc1ff0 combine: throw error if duplicate directory name is specified
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-10 15:40:30 +01:00
Nick Craig-Wood
f753d7cd42 combine: fix docs showing remote= instead of upstreams=
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-10 15:34:48 +01:00
Nick Craig-Wood
1c4ee2feee gcs: add --gcs-decompress flag to download gzip-encoded files
By default these will be downloaded compressed.

This changes the default of the previous commit

2781f8e2f1 gcs: Fix download of "Content-Encoding: gzip" compressed objects

But it will fit in better with the metadata framework when copying
gzip-encoded objects from backend to backend.
2022-07-09 17:31:12 +01:00
Lesmiscore (Naoya Ozaki)
42dfadfa1b
internetarchive: add support for Metadata 2022-07-08 23:47:50 +01:00
Ovidiu Victor Tatar
b4d847cadd new backend: hidrive - fixes #1069 2022-07-08 18:24:54 +01:00
Lorenzo Maiorfi
b5efffee9d
azureblob: allow remote emulator (azurite) - fixes #6290 2022-07-06 11:54:04 +01:00
albertony
e5bf6a813c staticcheck: google api New is deprecated: please use NewService instead 2022-07-04 11:24:59 +02:00
albertony
986bb17656 staticcheck: awserr.BatchError is deprecated: Replaced with BatchedErrors 2022-07-04 11:24:59 +02:00
albertony
3e9c5eca3b yandex: handle api error on server-side move 2022-07-04 11:24:59 +02:00
albertony
a1fd60ec2b staticcheck: empty branch 2022-07-04 11:24:59 +02:00
albertony
7822df565e staticcheck: unused func 2022-07-04 11:24:59 +02:00
albertony
3435bf7f34 staticcheck: no value of type int64 is greater than math.MaxInt64 2022-07-04 11:24:59 +02:00
albertony
0772cae314 staticcheck: use result of type assertion to simplify cases 2022-07-04 11:24:59 +02:00
Nick Craig-Wood
06182a3443 s3: actually compress the payload for content-type gzip test 2022-07-04 09:42:49 +01:00
albertony
9dbed02329 jottacloud: always store username in config and use it to avoid initial api request
The existing version did save the username in config, but only when entering the custom
device/mountpoint sequence in config. Regardless of that, it always looked up the
username at startup with an api request.

This commit improves it so that the username will always be stored in config,
and when using standard authentication it picks it from the login token instead of
requesting it from the remote api, and also in fs constructor it picks it from config
instead of requesting it from remote api (again).
2022-07-03 12:56:25 +02:00
Nick Craig-Wood
7e7a8a95e9 hasher: support metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
ed87ae51c0 union: support metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
bf4a16ae30 crypt: support metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
c198700812 compress: support metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
8c483daf85 combine: support metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
ba5760ff38 chunker: mark as not supporting metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
cd1735bb10 cache: mark as not supporting metadata 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
866c873daa backend: allow wrapping backend tests to run in make quicktest 2022-06-29 17:30:37 +01:00
Nick Craig-Wood
c556e98f49 local: add Metadata support #111 2022-06-29 14:29:36 +01:00
Nick Craig-Wood
22abd785eb s3: implement reading and writing of metadata #111 2022-06-29 14:29:36 +01:00
Nick Craig-Wood
a692bd2cd4 s3: change metadata storage to normal map with lowercase keys 2022-06-29 14:29:36 +01:00
Nick Craig-Wood
461d041c4d fstest: remove spurious contents return from PutTestContents and friends 2022-06-29 11:18:02 +01:00
vyloy
326c43ab3f s3: add IDrive e2 to provider list 2022-06-28 09:12:36 +01:00
albertony
fdd2f8e6d2 Error strings should not be capitalized
Reported by staticcheck 2022.1.2 (v0.3.2)

See: staticcheck.io
2022-06-23 23:26:02 +02:00
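
For reference, the relevant staticcheck rule (ST1005) flags capitalized error strings; a tiny before/after example:

    package example

    import "errors"

    // Bad: staticcheck ST1005 - error strings should not be capitalized.
    var errMissingBad = errors.New("File not found")

    // Good: lower-case, no trailing punctuation.
    var errMissingGood = errors.New("file not found")
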
Abhiraj
027746ef6e
drive: moved rclone_folder_id to advanced section - fixes #3463 2022-06-23 22:08:09 +02:00
albertony
dbf1234edf docs: skip "Connection" suffix from FTP, SSH/SFTP and HTTP backend names 2022-06-21 23:43:00 +02:00
albertony
bc70a95fca docs: consistent capitalization of WebDAV, DLNA, HTTP 2022-06-21 23:43:00 +02:00
Nick Craig-Wood
e95dff2fa1 drive: add backend commands exportformats and importformats for debugging 2022-06-21 14:28:53 +01:00
buengese
21c746a56c fichier: parse api error codes and handle them accordingly 2022-06-19 15:07:33 +02:00
Nick Craig-Wood
f7c36ce0f9 s3: unwrap SDK errors to reveal underlying errors on upload
The SDK doesn't wrap errors in a Go standard way so they can't be
unwrapped and tested for - eg fatal error.

The code looks for a Serialization or RequestError and returns the
unwrapped underlying error if possible.

This fixes the fs/operations integration tests checking for fatal
errors being returned.
2022-06-17 16:52:30 +01:00
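
A sketch of the unwrapping described above using the awserr interface from aws-sdk-go (illustrative only; the real code special-cases serialization and request errors):

    package example

    import "github.com/aws/aws-sdk-go/aws/awserr"

    // unwrapSDKError returns the underlying error hidden inside an AWS SDK
    // error, if there is one, so callers can test for fatal errors.
    func unwrapSDKError(err error) error {
        if awsErr, ok := err.(awserr.Error); ok {
            if orig := awsErr.OrigErr(); orig != nil {
                return orig
            }
        }
        return err
    }
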
Scott Grimes
295006f662 opendrive: resolve lag and truncate bugs - fixes #5936
Co-authored-by: buengese <buengese@protonmail.com>
2022-06-17 16:48:03 +01:00
Nick Craig-Wood
c85fbebce6 s3: simplify PutObject code to use the Request.SetStreamingBody method
In this commit

e5974ac4b0 s3: use PutObject from the aws SDK to upload single part objects

rclone was made to upload objects to s3 using PUT requests rather than
using signed uploads.

However this change missed the fact that there is a supported way to
do this in the SDK using the SetStreamingBody method on the Request.

This therefore reverts a lot of the previous commit to do with making
an unsigned connection and other complication and uses the SDK
facility.
2022-06-16 23:26:19 +01:00
Nick Craig-Wood
5697dbc80f local: fix parsing of --local-nounc flag 2022-06-16 22:13:50 +01:00
Nick Craig-Wood
5e4caa69ce local: make Hash function cancelable via context 2022-06-16 22:13:50 +01:00
Nick Craig-Wood
fa48b880c2 s3: retry RequestTimeout errors
See: https://forum.rclone.org/t/s3-failed-upload-large-files-bad-request-400/27695
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
60d87185e1 sftp: add --sftp-set-env option to set environment variables
Fixes #6094
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
78120d40d9 sftp: add --sftp-concurrency to improve high latency transfers
See: https://forum.rclone.org/t/increasing-sftp-transfer-speed/29928
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
95e0934755 sftp: add --sftp-chunk-size to control packets sizes for high latency links
See: https://forum.rclone.org/t/increasing-sftp-transfer-speed/29928
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
1651429041 union: add min_free_space option for lfs/eplfs policies - fixes #6071 2022-06-16 22:13:50 +01:00
Nick Craig-Wood
29e37749b3 union: fix eplus policy to select correct entry for existing files #6071 2022-06-16 22:13:50 +01:00
Nick Craig-Wood
1e1af46a12 union: fix get free space for remotes which don't support it #6071
Before this fix GetFreeSpace returned math.MaxInt64 for remotes which
don't support reading free space. However this value is used in various
comparison routines as a "too large" sentinel, meaning that remotes
reporting math.MaxInt64 free space were never being selected.

This fixes GetFreeSpace to return math.MaxInt64 - 1 so they can be selected.

It also fixes GetUsedSpace the same way however as the default for not
supported was 0 this was very unlikely to have ever caused a problem.
2022-06-16 22:13:50 +01:00
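
The reasoning above can be sketched as follows, with hypothetical names: an upstream with unknown free space reports math.MaxInt64 - 1 bytes so that comparison-based policies can still pick it.

    package example

    import "math"

    // freeSpace returns the free space to use in policy comparisons. Unknown
    // free space is reported as math.MaxInt64 - 1 rather than math.MaxInt64 so
    // that "most free space" style comparisons can still select the upstream.
    // Hypothetical sketch of the fix described above.
    func freeSpace(known bool, free int64) int64 {
        if !known {
            return math.MaxInt64 - 1
        }
        return free
    }
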
Nick Craig-Wood
1d2fe0d856 union: enable passing of options to upstreams and policies #6071
This factors out the options into a sub package so they can be passed
to upstreams and used in policies.
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
4d72abf389 dropbox: fix nil pointer exception on dropbox impersonate user not found
Fixes #6139
2022-06-16 22:13:50 +01:00
Nick Craig-Wood
411013dbdc drive: add --drive-resource-key for accessing link-shared files 2022-06-16 22:13:50 +01:00
Nick Craig-Wood
e87e331f4c drive: make --drive-shared-with-me work with shared drives
Fixes #6247
2022-06-16 22:13:50 +01:00
Maciej Radzikowski
2e91287b2e
docs/s3: add note about chunk size decreasing progress accuracy 2022-06-16 22:29:36 +02:00
buengese
32f913ffbd pcloud: fix cleanup - fixes #3853 2022-06-16 15:45:42 +02:00
Sven Gerber
50c2e37aac
onedrive: add access scopes option
By default, rclone always requests read and write permissions, no matter
what settings you configure in the AAD application. This option allows you
to explicitly request read-only permissions.

Migrated the read-only option to the access scopes option and set the
disable_site_permission option to hidden.
2022-06-14 10:21:23 +01:00
m00594701
02b4638a22 backend: add Huawei OBS to s3 provider list 2022-06-14 09:21:01 +01:00
albertony
ec117593f1 Fix lint issues reported by staticcheck
Used staticcheck 2022.1.2 (v0.3.2)

See: staticcheck.io
2022-06-13 21:13:50 +02:00
buengese
74bd7f3381 pcloud: fix about with no free space left 2022-06-13 20:40:15 +02:00
albertony
5006ede266 mailru: use variable type int instead of time.Duration for keeping number of seconds
Fixes problem reported by staticcheck lint tool (v0.3.2):
Poorly chosen name for variable of type time.Duration (ST1011).
2022-06-13 17:42:24 +01:00
buengese
3a20929db4 uptobox: fix root path handling - fixes #5903 2022-06-12 22:36:46 +02:00
buengese
cee79f27ee zoho: add Japan and China regions 2022-06-12 15:37:30 +02:00
albertony
5db9a2f831 sftp: use vendor-specific VFS statistics extension for about if available
See #5763
2022-06-08 21:14:53 +02:00
albertony
b4091f282a sftp: add support for about and hashsum on windows server
Windows shells like cmd and powershell need to use different quoting/escaping
of strings and paths than the unix shell, and also absolute paths must be fixed
by removing the leading slash that the POSIX formatted paths have
(e.g. /C:/Users does not work in these shells, it must be converted to C:/Users).

Tries to autodetect shell type (cmd, powershell, unix) on first use.

Implemented default builtin powershell functions for hashsum and about when remote
shell is powershell.

See #5763

Fixes #5758
2022-06-08 21:14:53 +02:00
Nick Craig-Wood
bb6edb3c39 build: update dependencies
Also:

- azureblob: fix compile after API change in upstream library
2022-06-08 18:29:42 +01:00
Noah Hsu
ef089dd867
webdav: add SharePoint in other specific regions support 2022-06-08 17:24:35 +01:00
albertony
e3d44612c1 jottacloud: error strings should not be capitalized 2022-06-08 17:56:37 +02:00
albertony
b2388f1294 jottacloud: refactor endpoint paths 2022-06-08 17:56:37 +02:00
albertony
a571c1fb46 jottacloud: refactor naming of different api urls 2022-06-08 17:56:37 +02:00
albertony
01340acad2 jottacloud: add support for upload to custom device and mountpoint
See #5926
2022-06-08 17:56:37 +02:00
albertony
700ca23a71 config: add utility function for backend config with list and custom input 2022-06-08 17:56:37 +02:00
Nick Craig-Wood
4b358ff43b combine: backend to combine multiple remotes in one directory tree
Fixes #5600
2022-06-08 14:57:25 +01:00
Nick Craig-Wood
e58d75e4d7 drive: make backend config -o config add a combined AllDrives remote
This adjusts

    rclone backend drives -o config drive:

So that it also emits a config section called `AllDrives` which uses
the combine backend to make a backend which combines all the shared
drives into one.

It also makes sure that all the shared drive names are valid rclone
config names, deduplicating if necessary.

Fixes #4506
2022-06-08 14:57:25 +01:00
Nick Craig-Wood
26db80c270 ftp: revert to upstream github.com/jlaffaye/ftp from our fork
...now all of our patches have been merged #5810
2022-06-08 11:58:32 +01:00
Jason Zheng
a9c49c50a0 ftp: add support for disable_utf8 option - fixes #6209 2022-06-01 19:09:37 +01:00
Nick Craig-Wood
2781f8e2f1 gcs: Fix download of "Content-Encoding: gzip" compressed objects
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
as the go runtime automatically decompressed the object on download.

This change erases the length and hash on compressed objects so they
can be downloaded successfully, at the cost of not being able to check
the length or the hash of the downloaded object.

This also adds the --gcs-download-compressed flag to allow the
compressed files to be downloaded as-is providing compressed objects
with intact size and hash information.

Fixes #2658
2022-05-31 12:10:21 +01:00
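
A sketch of the approach shared by this commit and the earlier s3 change (names assumed): when an object stored with Content-Encoding: gzip will be transparently decompressed on download, drop the stored size and hash so they are not checked against the decompressed data.

    package example

    // adjustForGzip clears the size and MD5 of a gzip-encoded object when it
    // will be decompressed on download, since the stored values describe the
    // compressed data. With the download-compressed option the object is
    // fetched as-is and the original values are kept. Illustrative sketch only.
    func adjustForGzip(size int64, md5 string, contentEncoding string, downloadCompressed bool) (int64, string) {
        if contentEncoding == "gzip" && !downloadCompressed {
            return -1, "" // size unknown, hash no longer applicable
        }
        return size, md5
    }
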
Nick Craig-Wood
3d55f69338 dropbox: add logs to show when poll interval limits are exceeded 2022-05-31 12:09:50 +01:00
m8rge
5d6a6dd6c0 dropbox: migrate from deprecated api
Change UploadSessionFinishBatch usage to UploadSessionFinishBatchV2. Change in sdk was made in https://github.com/dropbox/dropbox-sdk-go-unofficial/pull/106
2022-05-30 17:24:18 +01:00
Rob Pickerill
6d342a3c5b azureblob: case insensitive access tier 2022-05-24 09:19:08 +01:00
Hugo Laloge
c138367df6 onedrive: Implement --poll-interval for onedrive
Implement ChangeNotifier for onedrive.
Use drive delta queries to listen for modifications.
2022-05-23 11:30:43 +01:00
Alex JOST
a34276e9b3 s3: Add Warsaw location for Scaleway
Add new location in Warsaw (Poland) to endpoints for Scaleway.

More Information:
https://blog.scaleway.com/scaleway-is-now-in-warsaw/
https://www.scaleway.com/en/docs/storage/object/how-to/create-a-bucket/
2022-05-19 14:06:16 +01:00
Nick Craig-Wood
c2baacc0a4 union: fix uploading files to union of all bucket based remotes
Before this fix, if uploading to a union consisting of all bucket
based remotes (eg s3), uploads failed with:

    Failed to copy: object not found

This was because the union backend was relying on parent directories
being created to work out which files to upload. If all the upstreams
were bucket based backends which can't hold empty directories, no
directories were created and the upload failed.

This fixes the problem by returning the upstreams used when creating
the directory for the upload, rather than searching for them again
after they've been created.

This will also make the union backend a little more efficient.

Fixes #6170
2022-05-19 13:23:41 +01:00
Nick Craig-Wood
fcec4bedbe drive: fix 404 errors on copy/server side copy objects from public folder
Before this change, copying objects from a public folder shared with a
resource key failed with a 404 error.

This is because rclone wasn't supplying the resource Key where it
should have been.

After this change rclone adds the resource Key when trying to download
or server side copy an object.

There may be more places rclone needs to supply the resource key as
this is barely documented in the API documentation.

See: https://forum.rclone.org/t/copying-files-from-a-publicly-accessible-google-drive-folder-added-as-a-shortcut-getting-error-404-server-side-copies-are-disabled/30811
2022-05-18 18:12:19 +01:00
Nick Craig-Wood
813a5e0931 s3: Remove bucket ACL configuration for Cloudflare R2
Bucket ACLs are not supported by Cloudflare R2. All buckets are
private and must be shared using a Cloudflare Worker.
2022-05-17 15:57:09 +01:00
Eng Zer Jun
4f0ddb60e7 refactor: replace strings.Replace with strings.ReplaceAll
strings.ReplaceAll(s, old, new) is a wrapper function for
strings.Replace(s, old, new, -1). But strings.ReplaceAll is more
readable and removes the hardcoded -1.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-05-17 11:08:37 +01:00
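
The two forms are equivalent, for example:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Both calls print "a_b_c"; ReplaceAll drops the hardcoded -1.
        fmt.Println(strings.Replace("a-b-c", "-", "_", -1))
        fmt.Println(strings.ReplaceAll("a-b-c", "-", "_"))
    }
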
albertony
8697f0bd26 about: improved error message 2022-05-13 12:08:10 +01:00
Derek Battams
8e5e230b81 b2: use chunksize lib to determine chunksize dynamically
Fixes #4643
2022-05-13 09:25:48 +01:00
Derek Battams
c0985e93b7 azureblob: use chunksize lib to determine chunksize dynamically 2022-05-13 09:25:48 +01:00
Derek Battams
fb4f7555c7 s3: use chunksize lib to determine chunksize dynamically 2022-05-13 09:25:48 +01:00
Erik van Velzen
9e4854955c storj: fix put
The "relative" argument was missing when Put'ing a file. This
sets an incorrect object entry in the cache, leading to the file being
unreadable when using mount functionality.

Fixes #6151
2022-05-12 20:43:54 +01:00
Vincent Murphy
319ac225e4
s3: backend restore command to skip non-GLACIER objects 2022-05-12 20:42:37 +01:00
albertony
a9d3283d97 jottacloud: fix listing output of remote with special characters
This fixes the failing integration test: TestIntegration/FsMkdir/FsPutFiles/FsIsFile
2022-05-12 20:41:07 +01:00
Nick Craig-Wood
6f91198b57 s3: Support Cloudflare R2 - fixes #5642 2022-05-12 08:49:20 +01:00
Nick Craig-Wood
e5974ac4b0 s3: use PutObject from the aws SDK to upload single part objects
Before this change rclone used presigned requests to upload single
part objects. This was because of a limitation in the SDK which didn't
allow non seekable io.Readers to be passed in.

This is incompatible with some S3 backends, and rclone wasn't adding
the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` header which was
incompatible with other S3 backends.

The SDK now allows for this so rclone can use PutObject directly.

This sets the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` flag on the PUT
request. However rclone will add a `Content-Md5` header if at all
possible so the body data is still protected.

Note that the old behaviour can still be configured if required with
the `use_presigned_request` config parameter.

Fixes #5422
2022-05-12 08:49:20 +01:00
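
A minimal sketch of a single-part upload via PutObject with a Content-Md5 header, as described above (bucket and key are placeholders; this is not rclone's code and it omits the unsigned-payload plumbing):

    package main

    import (
        "bytes"
        "crypto/md5"
        "encoding/base64"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        data := []byte("hello world")
        sum := md5.Sum(data)

        svc := s3.New(session.Must(session.NewSession()))
        _, err := svc.PutObject(&s3.PutObjectInput{
            Bucket: aws.String("example-bucket"), // placeholder
            Key:    aws.String("example-key"),    // placeholder
            Body:   bytes.NewReader(data),
            // Content-MD5 still protects the body even when the payload
            // itself is unsigned.
            ContentMD5: aws.String(base64.StdEncoding.EncodeToString(sum[:])),
        })
        if err != nil {
            log.Fatal(err)
        }
    }
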
Lesmiscore
1d6d41fb91
backend/internetarchive: fix uploads can take very long time
* fill in empty values for non-wait mode
* add tracking metadata to observe file change
* completely remove getHashes
* remove unreliable update tests

Closes #6150
2022-05-07 21:54:14 +01:00
ehsantdy
a446106041
s3: update Arvancloud default values and correct docs 2022-05-02 16:04:01 +01:00
SwazRGB
4cebade95d b2: Add --b2-version-at flag to show file versions at a specified time
Uses b2_list_file_versions to retrieve all file versions, and returns
the one that was active at the specified time

This is especially useful in combination with other backup tools, such
as restic, which may use rclone as a backend.
2022-04-28 16:29:13 +01:00
ehsantdy
e34c543660
s3: Add ArvanCloud AOS to provider list 2022-04-28 10:42:30 +01:00
Lesmiscore
598364ad0f backend/internetarchive: add support for Internet Archive
This adds support for Internet Archive (archive.org) Items.
2022-04-28 10:25:38 +01:00
albertony
4829527dac jottacloud: refactor timestamp handling
Jottacloud have several different apis and endpoints using a mix of different
timestamp formats. In existing code the list operation (after the recent liststream
implementation) uses format from golang's time.RFC3339 constant. Uploads (using the
allocate api) uses format from a hard coded constant with value identical to golang's
time.RFC3339. And then we have the classic JFS time format, which is similar to RFC3339
but not identical, using a different constant. Also the naming is a bit confusing,
since the term api is used both as a generic term and also as a reference to the
newer format used in the api subdomain where the allocate endpoint is located.
This commit refactors these things a bit.
2022-04-27 13:41:39 +02:00
albertony
cc8dde402f jottacloud: refactor SetModTime function
Now using the utility function for deduplication that was newly implemented to
fix an issue with server-side copy. This function uses the original, and generic,
"jfs" api (and its "cphash" feature), instead of the newer "allocate" api dedicated
for uploads. Both apis support similar deduplication functionality that we rely on for
the SetModTime operation. One advantage of using the jfs variant is that the allocate
api is specialized for uploads, an initial request performs modtime-only changes and
deduplication if possible but if not possible it creates an incomplete file revision
and returns a special url to be used with a following request to upload missing content.
In the SetModTime function we only sent the first request, using metadata from the existing
remote file but different timestamps, which led to a modtime-only change. If, for some
reason, this should fail it would leave the incomplete revision behind. Probably not
a problem, but the jfs implementation used with this commit is simpler and
a more "standalone" request which either succeeds or fails without expecting additional
requests.
2022-04-27 12:06:36 +02:00
albertony
2b67ad17aa jottacloud: fix issue with server-side copy when destination is in trash
A strange feature (probably bug) in the api used by the server-side copy implementation
in Jottacloud backend is that if the destination file is in trash, the copy request
succeeds but the destination will still be in trash! When this situation occurs in
rclone, the copy command will fail with "Failed to copy: object not found" because
rclone verifies that the file info in the response from the copy request is valid,
and since it is marked as deleted it is treated as invalid.

This commit works around this problem by looking for this situation in the response
from the copy operation, and sending an additional request to a built-in deduplication
endpoint that will restore the file from trash.

Fixes #6112
2022-04-27 12:06:36 +02:00
albertony
6da3522499 jottacloud: minor cleanup of upload response
The UploadResponse type included several properties that are no longer returned
by Jottacloud, and the backend implementation did not use them anyway.
2022-04-27 08:40:34 +02:00
Leroy van Logchem
75455d4000
azureblob: Calculate Chunksize/blocksize to stay below maxUploadParts 2022-04-26 17:37:40 +01:00
Nick Craig-Wood
82e24f521f webdav: don't override Referer if user sets it - fixes #6040 2022-04-26 08:58:31 +01:00
albertony
b1d43f8d41
jottacloud: fix scope in token request
The existing code in rclone set the value "offline_access+openid",
when encoded in body it will become "offline_access%2Bopenid". I think
this is wrong. Probably an artifact of "double urlencoding" mixup -
either in rclone or in the jottacloud cli tool version it was sniffed
from? It does work, though. The token received will have scopes "email
offline_access" in it, and the same is true if I change to only
sending "offline_access" as scope.

If a proper space delimited list of "offline_access openid"
is used in the request, the response also includes openid scope:
"openid email offline_access". I think this is more correct and this
patch implements this.

See: #6107
2022-04-22 12:52:00 +01:00
Nick Craig-Wood
6a759d936a storj: fix bucket creation on Move picked up by integration tests 2022-04-15 17:57:15 +01:00
Nick Gooding
25146b4306 googlecloudstorage: add --gcs-no-check-bucket to minimise transactions and perms
Adds a configuration option to the GCS backend to allow skipping the
check if a bucket exists before copying an object to it, much like
f406dbb added for S3.
2022-04-14 11:18:36 +01:00
Nick Craig-Wood
7bb8b8f4ba cache: fix bug after golang.org/x/time/rate update
Before this change the cache backend was passing -1 into
rate.NewLimiter to mean unlimited transactions per second.

In a recent update this immediately returns a rate limit error as
might be expected.

This patch uses rate.Inf as indicated by the docs to signal no limits
are required.
2022-04-04 20:35:17 +01:00
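
For illustration, the difference can be seen with golang.org/x/time/rate directly (a standalone sketch, not the cache backend's code):

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Passing a negative limit no longer means "unlimited": the updated
        // library makes Wait fail immediately, as described in the commit above.
        bad := rate.NewLimiter(rate.Limit(-1), 1)
        fmt.Println("negative limit:", bad.Wait(context.Background()))

        // rate.Inf is the documented way to express "no limit".
        unlimited := rate.NewLimiter(rate.Inf, 1)
        fmt.Println("rate.Inf:", unlimited.Wait(context.Background()))
    }
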
Nick Craig-Wood
59c242bbf6 build: update dependencies
Also:

- dropbox: fix compile after API change in upstream library
2022-04-04 20:35:17 +01:00
rafma0
be9ee1d138 putio: fix multithread download and other ranged requests
Before this change the 206 responses from putio Range requests were being
returned as errors.

This change checks for 200 and 206 in the GET response now.
2022-04-04 11:15:55 +01:00
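
The check described above amounts to accepting both success statuses on the GET response (a generic sketch, not putio-specific code):

    package example

    import (
        "fmt"
        "net/http"
    )

    // checkDownloadStatus accepts 200 (full content) and 206 (Partial Content,
    // returned for ranged/multithreaded downloads) and rejects anything else.
    func checkDownloadStatus(resp *http.Response) error {
        if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusPartialContent {
            return nil
        }
        return fmt.Errorf("unexpected HTTP status %d %s", resp.StatusCode, resp.Status)
    }
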
Nick Craig-Wood
603e51c43f s3: sync providers in config description with providers 2022-03-31 17:55:54 +01:00
Berkan Teber
6ea26b508a putio: handle rate limit errors
For rate limit errors, "x-ratelimit-reset" header is now respected.
2022-03-30 12:25:53 +01:00
Nick Craig-Wood
d975196cfa dropbox: fix retries of multipart uploads with incorrect_offset error
Before this fix, rclone retries chunks of multipart uploads. However
if they had been partially received dropbox would reply with an
incorrect_offset error which rclone was ignoring.

This patch parses the new offset from the error response and uses it
to adjust the data that rclone sends so it is the same as what dropbox
is expecting.

See: https://forum.rclone.org/t/dropbox-rate-limiting-for-upload/29779
2022-03-25 15:39:01 +00:00
Nick Craig-Wood
1f39b28f49 googlecloudstorage: use the s3 pacer to speed up transactions
This commit switches Google Cloud Storage from the drive pacer to the
s3 pacer. The main difference between them is that the s3 pacer does
not limit transactions in the non-error case. This is appropriate for
a cloud storage backend where you pay for each transaction.
2022-03-25 15:28:59 +00:00
GuoXingbin
c2bfda22ab
s3: Add ChinaMobile EOS to provider list
China Mobile Ecloud Elastic Object Storage (EOS) is a cloud object storage service, and is fully compatible with S3.

Fixes #6054
2022-03-24 11:57:00 +00:00