Commit Graph

2098 Commits

Author SHA1 Message Date
IoT Maestro
748c43d525 ulozto: set Content-Length header if the file size is known. 2024-03-29 09:07:44 +00:00
nielash
f62e7b5b30 memory: fix incorrect list entries when rooted at subdirectory
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.

This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
2024-03-27 11:43:26 -04:00
nielash
2b0a25a64d memory: fix deadlock in operations.Purge
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:

	err = DeleteFiles(ctx, listToChan(ctx, f, dir))

which potentially starts removing objects before the listing has completed.

This change fixes the issue by batching all the entries before calling the
callback on them.
2024-03-27 11:42:49 -04:00
nielash
fecce67ac6 memory: fix dst mutating src after server-side copy
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.

This change fixes the issue and adds a test.
2024-03-26 20:40:06 -04:00
Nick Craig-Wood
ac6ba11d22 local: add --local-time-type to use mtime/atime/btime/ctime as the time
Fixes #7484
2024-03-26 11:58:28 +00:00
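
A hedged usage sketch for the --local-time-type flag added above (path and value are illustrative; see the local backend docs for the accepted values):

    # report the birth (creation) time instead of the modification time for local files
    rclone lsl /path/to/dir --local-time-type btime
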
Nick Craig-Wood
efed6b01d2 drive: fix server side copy with metadata from my drive to shared drive
Before this change, trying to server side copy an object from a My
Drive to a shared drive using --metadata caused this error:

    Sharing restrictions cannot be set on a shared drive item., teamDrivesSharingRestrictionNotAllowed

This was because we were setting the "writers-can-share" metadata,
which isn't allowed on shared drives.
2024-03-26 11:16:22 +00:00
Nick Craig-Wood
d11fe9779e drive: stop sending notification emails when setting permissions 2024-03-26 11:11:18 +00:00
iotmaestro
4b5c10f72e
Add a new backend for uloz.to
Note that this temporarily skips uploads of files over 2.5 GB.

See https://github.com/rclone/rclone/pull/7552#issuecomment-1956316492
for details.
2024-03-26 09:46:47 +00:00
gvitali
d9601c78b1 linkbox: fix list paging and optimize synchronization.
1. The maximum number of objects per page must be no more than 1000.
It was previously set to 1024, so listing always ended on the first
page with an "object not found" error; rclone would then upload the
file again, Linkbox would store it under the name "filename(N)", and
the storage would fill up indefinitely.

2. A hyphen is added to the list of allowed characters, which makes
queries more efficient (there is no need to load all files in a
directory for an entity with a hyphen).
2024-03-24 12:05:58 +00:00
Vitaly
4258ad705e linkbox: fix working with names longer than 8-25 Unicode chars.
The LinkBox API does not allow searching by more than 25 Unicode
characters of the name, which made it impossible to work with files
and folders whose names are longer than 8 Unicode characters (once
encoded in base32).

For such long names, this fix queries all files in the directory and
checks their names one by one, thus solving the issue.

Fixes #7542
2024-03-24 12:05:58 +00:00
Pat Patterson
070cff8a65 b2: Add new cleanup and cleanup-hidden backend commands. 2024-03-23 18:07:02 +00:00
hoyho
a24aeba495 s3: validate CopyCutoff size before copy
Signed-off-by: hoyho <luohaihao@gmail.com>
2024-03-23 15:09:38 +00:00
Lewis Hook
bf494d48d6 Improve error messages when objects have been corrupted on transfer - fixes #5268 2024-03-23 12:35:35 +00:00
Nick Craig-Wood
aee8d909b3 onedrive: fix "unauthenticated: Unauthenticated" errors when downloading
Before this change we would pass the Authorization header on to the
download server. This is allowed according to the docs, but on some
onedrive servers this sometimes causes an error with the text
"unauthenticated: Unauthenticated".

This is a similar fix to

dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading

See: https://forum.rclone.org/t/cryptcheck-on-encrypted-onedrive-personal-failed-with-unauthenticated-error/44581/
2024-03-23 12:08:35 +00:00
YukiUnHappy
f68d962c86 onedrive: make server-side copy work in more scenarios 2024-03-22 17:29:38 +00:00
racerole
00fb847662
docs: remove repeated words 2024-03-13 17:12:39 +00:00
Thomas Müller
c7bfadd10a owncloud: add config owncloud_exclude_mounts which allows mounted folders to be excluded when listing remote resources 2024-03-13 17:09:10 +00:00
John-Paul Smith
ca903b9872
drive: backend query command
This command executes a list query in Google Drive’s native query
language and returns a JSON dump of matches. It’s useful for locating
files quickly in folders with a large number of files, where rclone’s
normal list command is slow due to client-side filtering.
2024-03-11 20:16:13 +00:00
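
A hedged example of the query command described above (remote name and query string are illustrative; see the drive backend docs for the exact argument syntax):

    # run a native Google Drive query and dump the matching files as JSON
    rclone backend query drive: "name contains 'report' and trashed = false"
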
nielash
9b650d3517 hasher: look for cached hash if passed hash unexpectedly blank
Before this change, Hasher did not check whether a "passed hash" (hashtype
natively supported by the wrapped backend) returned from a backend was blank,
and would sometimes return a blank hash to the caller even when a non-blank hash
was already stored in the db. This caused issues with, for example, Google
Drive, which has SHA1 / SHA256 hashes for some files but not others
(https://rclone.org/drive/#sha1-or-sha256-hashes-may-be-missing) and sometimes also
does not have hashes for very recently modified files.

After this change, Hasher will check if the received "passed hash" is
unexpectedly blank, and if so, it will continue to try other enabled methods,
such as retrieving a value from the database, or possibly regenerating it.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/9?u=nielash
2024-03-09 11:58:02 +00:00
nielash
ff0acfb568 hasher: fix error from trying to stop an already-stopped db
Before this change, Hasher would sometimes try to stop a bolt db that was
already stopped, resulting in an error. This change fixes the issue by checking
first whether the db is already stopped.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/11?u=nielash
2024-03-09 11:58:02 +00:00
nielash
1473de3f04 onedrive: add metadata support
This change adds support for metadata on OneDrive. Metadata (including
permissions) is supported for both files and directories.

OneDrive supports System Metadata (not User Metadata, as of this writing.) Much
of the metadata is read-only, and there are some differences between OneDrive
Personal and Business (see table in OneDrive backend docs for details).

Permissions are also supported, if --onedrive-metadata-permissions is set. The
accepted values for --onedrive-metadata-permissions are read, write, read,write, and
off (the default). write supports adding new permissions, updating the "role" of
existing permissions, and removing permissions. Updating and removing require
the Permission ID to be known, so it is recommended to use read,write instead of
write if you wish to update/remove permissions.

Permissions are read/written in JSON format using the same schema as the
OneDrive API, which differs slightly between OneDrive Personal and Business.
(See OneDrive backend docs for examples.)

To write permissions, pass in a "permissions" metadata key using this same
format. The --metadata-mapper tool can be very helpful for this.

When adding permissions, an email address can be provided in the User.ID or
DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an
ObjectID can be provided in User.ID. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if Link.Scope is set to "anonymous".

Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.

To update an existing permission, include both the Permission ID and the new
roles to be assigned. roles is the only property that can be changed.

To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)

Note that both reading and writing permissions require extra API calls, so if
you don't need to read or write permissions it is recommended to omit
--onedrive-metadata-permissions.

Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the mtime or btime on a Folder requires one extra API
call on OneDrive Business only.

OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.

TIP: to see the metadata and permissions for any file or folder, run:

rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read

See the OneDrive backend docs for a table of all the supported metadata
properties.
2024-03-08 14:48:54 +00:00
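
A hedged sketch of writing a permission using the "permissions" metadata key described above (the --metadata-set approach, the email address and the exact JSON are assumptions; see the OneDrive backend docs for the authoritative schema):

    # grant write access to a user by email while copying, with metadata enabled
    rclone copyto file.txt onedrive:file.txt -M --onedrive-metadata-permissions write \
      --metadata-set 'permissions=[{"grantedToIdentities":[{"user":{"id":"alice@example.com"}}],"roles":["write"]}]'
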
Nick Craig-Wood
bda4f25baa s3: support metadata setting and mapping on server side Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
9f2ce2c7fc drive: support metadata setting and mapping on server side Move,Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves or copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
6e85a39e99 local: support metadata setting and mapping on server side Move
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
339d3e8ee6 netstorage,quatrix,seafile: fix Root to return correct directory when pointing to a file
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
5750795324 protondrive: fix encoding of Root method
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
huajin tong
b1ae7df556
docs: fix some comments
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-07 12:57:15 +00:00
nielash
252562d00a combine: fix CopyDirMetadata error on upstream root
Before this change, operations.CopyDirMetadata would fail with: `internal error:
expecting directory string from combine root '' to have SetMetadata method:
optional feature not implemented` if the dst was the root directory of a combine
upstream. This is because combine was returning a *fs.Dir, which does not
satisfy the fs.SetMetadataer interface.

While it is true that combine cannot set metadata on the root of an upstream
(see also #7652), this should not be considered an error that causes sync to do
high-level retries, abort without doing deletes, etc.

This change addresses the issue by creating a new type of DirWrapper that is
allowed to fail silently, for exceptional cases such as this where certain
special directories have more limited abilities than what the Fs usually
supports.

It is possible that other similar wrapping backends (Union?) may need this same
fix.
2024-03-07 11:09:07 +00:00
Nick Craig-Wood
1693d7ad0f sftp: set DirModTimeUpdatesOnWrite to fix integration tests 2024-03-01 11:29:08 +00:00
Nick Craig-Wood
6e28edeb9a cache: fix crash in tests which assumed local could Purge 2024-02-29 17:55:36 +00:00
Nick Craig-Wood
6ff1b6c505 local: delete backend implementation of Purge to speed up purges and show stats
In this commit (2014 for v1.02) Purge was implemented for the local
backend:

1527e64ee7 local: Implement Purger interface

This appears to have been implemented just to provide a Purge method
and doesn't do anything useful.

It is in fact significantly worse than rclone's fallback purge since
it doesn't operate in parallel or update stats.

This patch removes the Purge routine, giving a consequent speed-up
and allowing stats to be shown.

See: https://forum.rclone.org/t/progress-flag-for-rclone-purge/44416
2024-02-29 15:04:51 +00:00
Nick Craig-Wood
186bb85c44 crypt: add missing error check spotted by linter 2024-02-29 14:46:50 +00:00
nielash
4c6d2c5410 crypt: improve handling of undecryptable file names - fixes #5787 fixes #6439 fixes #6437
Before this change, undecryptable file names would be skipped very quietly
(there was a log warning, but only at DEBUG level),
failing to alert users of a potentially serious issue that needs attention.

After this change, the log level is raised to NOTICE by default and a new
--crypt-strict-names flag allows raising an error, for users who may prefer not
to proceed if such an issue is detected.

See https://forum.rclone.org/t/skipping-undecryptable-file-name-should-be-an-error/27115
https://github.com/rclone/rclone/issues/5787
2024-02-29 12:11:02 +00:00
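
A hedged usage sketch for the --crypt-strict-names flag described above (remote and path are illustrative):

    # return an error instead of skipping files whose names cannot be decrypted
    rclone sync secret: /local/backup --crypt-strict-names
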
Nick Craig-Wood
e4d0055b3e drive: implement modtime and metadata setting for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
a60da2ef38 local: fix setting of btime on directories on Windows
Before this change this would give errors like this

    failed to set metadata on directory: failed to set birth (creation) time: Access is denied.

This was caused by opening the directory in the wrong mode.
2024-02-28 16:25:59 +00:00
Nick Craig-Wood
7b01564f83 local: implement modtime and metadata for directories
A consequence of this is that fs.Directory returned by the local
backend will now have a correct size (rather than -1). Some tests
depended on this and have been fixed by this commit too.
2024-02-28 16:09:04 +00:00
Nick Craig-Wood
39db8caff1 cache,chunker,combine,compress,crypt,hasher,union: implement MkdirMetadata and related Features 2024-02-28 16:09:04 +00:00
nielash
0297542f6b cache,chunker,combine,compress,crypt,hasher,union: implement DirSetModTime (if supported by wrapped remote) 2024-02-28 16:09:04 +00:00
nielash
17c0ecc72c sftp: implement DirSetModTime 2024-02-28 16:09:04 +00:00
nielash
cbcb295185 drive: implement DirSetModTime 2024-02-27 19:59:13 +00:00
nielash
67e3725205 local: implement DirSetModTime 2024-02-27 19:59:13 +00:00
nielash
9d2bd163c7 opendrive: fix moving file/folder within the same parent dir - #7591
Before this change, moving (renaming) a file or folder to a different name
within the same parent directory would fail, due to using the wrong API
operation ("/file/move_copy.json" and "/folder/move_copy.json", instead of the
separate "/file/rename.json" and "/folder/rename.json" that opendrive has for
this purpose.)

After this change, Move and DirMove check whether the move is within the same
parent dir. If so, "rename" is used. If not, "move_copy" is used, like before.
2024-02-21 18:02:19 +00:00
Anders Swanson
db8fb5ceda oracleobjectstorage: supports workload identity authentication for OKE
Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
2024-02-20 16:25:59 +00:00
Joe Cai
a1e66cc5e8 swift: Avoid unnecessary container versioning check
Container versioning check is only needed for non-empty large objects.
2024-02-20 15:52:25 +00:00
Oksana Zhykina
11c6489fd1 quatrix: add option to skip project folders 2024-02-18 07:38:19 +01:00
Gabriel Ramos
43823bc925
webdav: reduce priority of chunks upload log 2024-02-18 07:29:23 +01:00
nielash
9c6325c131 backend: rename variables to fix CI lint test failures 2024-02-12 12:49:00 -05:00
Volodymyr
2abeda5961
quatrix: fix Content-Range header
This change does not actually affect uploads; it just makes the header correct
according to the definition of Content-Range in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range#range-end
2024-02-09 16:44:45 +00:00
Nick Craig-Wood
83f61a9cfb s3: GCS provider: fix server side copy of files bigger than 5G
GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5G so any files bigger
than 5G that rclone attempts to server side copy will fail.

This patch works around the problem by adding a quirk for GCS raising
--s3-copy-cutoff to the maximum. This means that rclone will never use
multi-part copies for files in GCS. This includes files bigger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. However this seems to work with GCS.

See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
2024-02-08 14:53:30 +00:00
Nick Craig-Wood
b206496f63 b2: clarify exactly what --b2-download-auth-duration does in the docs
See: https://forum.rclone.org/t/what-does-b2-download-auth-duration-mean/44504/
2024-02-08 09:39:53 +00:00
Nick Craig-Wood
24fdecf107 ftp: fix mkdir with rsftp which is returning the wrong code
On a successful MKD, rsftp seems to return code 250 whereas we and
the RFC expect 257.

This patch makes rclone accept 250 here as well.

See: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/3
2024-02-07 22:09:56 +00:00
DanielEgbers
a0dff2dd9c
Seafile: Fix download/upload error when FILE_SERVER_ROOT is relative
A seafile server can be configured to use a relative URL as
FILE_SERVER_ROOT in order to support more than one hostname/ip. (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360 )

The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.

With this commit, it supports both absolute and relative URLs.
2024-02-05 11:48:51 +00:00
Thomas Müller
99b9062551 owncloud: add config owncloud_exclude_shares which allows shared files and folders to be excluded when listing remote resources 2024-01-31 14:47:24 +00:00
Nick Craig-Wood
03295bbc3c azureblob: fix data corruption bug #7590
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.

This turned out to be a race condition updating the block count which
caused blocks to be duplicated.

This bug was introduced in this commit in v1.64.0 and will be fixed in v1.65.2

0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056

This race seems to happen mainly if `--checksum` is used but can happen otherwise.

Unfortunately Azure blob does not check the MD5 that we send them so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:

    ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"

This adds a check to test the blocklist we upload is as we expected
which would have caught the problem had it been in place earlier.
2024-01-24 11:28:05 +00:00
nielash
7f854acb05 local: fix cleanRootPath on Windows after go1.21.4 stdlib update
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
2024-01-20 14:50:08 -05:00
Nick Craig-Wood
da244a3709 ssh: shorten wait delay for external ssh binaries now that we are using go1.20
Now we are guaranteed to have go1.20 or later we can use the WaitDelay
flag when running external ssh binaries.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
519fe98e6e azureblob: implement --azureblob-delete-snapshots
This flag controls what happens when we try to delete a blob with a
snapshot. The UI follows the azcopy tool.

See: https://forum.rclone.org/t/how-to-delete-undeleted-blobs-on-azure/43911/
2024-01-13 14:27:54 +00:00
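
A hedged usage sketch for --azureblob-delete-snapshots (the value "include" is an assumption based on the azcopy UI mentioned above; container and path are illustrative):

    # delete a blob together with its snapshots
    rclone deletefile azureblob:container/path/blob --azureblob-delete-snapshots include
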
Nikhil Ahuja
1045f54128 oracleobjectstorage: Support "backend restore" command - fixes #7371 2024-01-09 09:43:36 +00:00
Nick Craig-Wood
dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading
Before this change, sometimes when uploading files the onedrive
servers return 401 Unauthorized errors with the text "unauthenticated:
Unauthenticated".

This is because we are sending the Authorization header with the
request and it says in the docs that we shouldn't.

https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#remarks

> If you include the Authorization header when issuing the PUT call,
> it may result in an HTTP 401 Unauthorized response. Only send the
> Authorization header and bearer token when issuing the POST during
> the first step. Don't include it when you issue the PUT call.

This patch fixes the problem by doing the PUT request with an
unauthenticated client.

Fixes #7405
See: https://forum.rclone.org/t/onedrive-unauthenticated-when-trying-to-copy-sync-but-can-use-lsd/41149/
See: https://forum.rclone.org/t/onedrive-unauthenticated-issue/43792/
2024-01-07 11:14:08 +00:00
Nick Craig-Wood
1f6271fa15 s3: copy parts in parallel when doing chunked server side copy
Before this change rclone copied each chunk serially.

After this change it does --s3-upload-concurrency at once.

See: https://forum.rclone.org/t/transfer-big-files-50gb-from-s3-bucket-to-another-s3-bucket-doesnt-starts/43209
2024-01-05 15:54:52 +00:00
Nick Craig-Wood
c16c22d6e1 s3: fix crash if no UploadId in multipart upload
Before this change if the S3 API returned a multipart upload with no
UploadId then rclone would crash.

This detects the problem and attempts to retry the multipart upload
creation.

See: https://forum.rclone.org/t/panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/43425
2024-01-05 15:52:52 +00:00
Nick Craig-Wood
0e746f25a3 amazonclouddrive: remove Amazon Drive backend code and docs #7539
The Amazon Drive service was closed on 2023-12-31.

See: https://www.amazon.com/b?ie=UTF8&node=23943055011
2024-01-04 17:05:54 +00:00
Nick Craig-Wood
208e49ce4b fs: update use of math/rand to modern practice 2024-01-03 16:14:40 +00:00
Nick Craig-Wood
a3d19942bd googlephotos: fix nil pointer exception when batch failed
This was a simple error check that was missing. Interestingly the
errcheck linter did not spot this.

See: https://forum.rclone.org/t/invalid-memory-address-or-nil-pointer-dereference-error-when-copy-to-google-photos/43634/
2024-01-03 10:57:59 +00:00
nielash
3ca766b2f1 hasher: fix invalid memory address error when MaxAge == 0
When f.opt.MaxAge == 0, f.db is never set, however several methods later assume
it is set and attempt to access it, causing an invalid memory address error.
This change fixes the issue in a few spots (there may still be others I haven't
yet encountered.)
2024-01-02 18:14:01 +00:00
Oksana
8503282a5a
azure-files: fix storage base url
Documented in https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview
2023-12-18 14:15:13 +00:00
Manoj Ghosh
743ea6ac26
oracle object storage: fix object storage endpoint for custom endpoints 2023-12-15 10:13:35 +00:00
Nick Craig-Wood
c69eb84573 chunker,compress,crypt,hasher,union: fix rclone move a file over itself deleting the file
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.

Before this change it returned a root which included the file path.

Because Root() was wrong, the check that detects a file being moved
over itself failed.

This adds an integration test to check it for all backends.

See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
2023-12-10 22:29:57 +00:00
rkonfj
3f159bac16 backend: fs implements the Shutdowner interface
Since `tokenRenewer` adds a Shutdown method, we should call it to
clean up resources.

Changed backends: onedrive, box, pcloud, amazonclouddrive, hidrive,
jottacloud, sharefile, premiumizeme

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
Nick Craig-Wood
f45cee831f dropbox: fix used space on dropbox team accounts
Before this change we were not using the used space from the team
stats.

This patch uses that as the used space if available as it seems to
include the user stats in it.

See: https://forum.rclone.org/t/rclone-about-with-dropbox-reporte-size-incorrectly/43269/
2023-12-08 14:26:46 +00:00
Anthony Metzidis
9fe343b725 s3: S3 IPv6 support with option "use_dual_stack" (bool)
Setting use_dual_stack=true enables IPv6 DNS lookup for S3 endpoints.
In s3.go, this adds Options.DualstackEndpoint to support IPv6 on S3.
2023-12-08 11:11:47 +00:00
Nick Craig-Wood
f0c774156e onedrive: fix error listing: unknown object type <nil>
This error was introduced in this commit when refactoring the list
routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2023-12-02 10:49:15 +00:00
Manoj Ghosh
9061e81850 multipart copy: create bucket if it doesn't exist. 2023-11-29 15:47:56 +00:00
halms
58339845f4 smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library can now pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2023-11-29 15:39:27 +00:00
Nick Craig-Wood
4d4f3de5a5 s3: add --s3-version-deleted to show delete markers in listings when using versions.
See: https://forum.rclone.org/t/s3-object-deletion-times/42781
2023-11-29 09:44:40 +00:00
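
A hedged usage sketch for the --s3-version-deleted flag above (bucket name is illustrative):

    # list old versions and also show delete markers
    rclone lsl s3:bucket --s3-versions --s3-version-deleted
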
Nick Craig-Wood
74d5477fad onedrive: add --onedrive-delta flag to enable ListR
Before this change ListR was unconditionally enabled on onedrive.

This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.

Fixes #7362
2023-11-26 16:06:49 +00:00
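
A hedged sketch of opting back in to delta-based recursive listing as described above (remote name is illustrative):

    # enable ListR so that --fast-list is effective on onedrive
    rclone lsf -R onedrive: --onedrive-delta --fast-list
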
Nick Craig-Wood
b5857f0bf8 smb: fix modtime of multithread uploads by setting PartialUploads
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.

Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.

This problem was detected by the new TestMultithreadCopy integration
tests.

Fixes #7411
2023-11-25 18:46:48 +00:00
Nick Craig-Wood
edb5ccdd0b smb: fix about size wrong by switching to github.com/cloudsoda/go-smb2/ fork
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.

This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.

The new library passes the integration tests.

Fixes #6733
2023-11-25 18:45:41 +00:00
Abhinav Dhiman
36eb3cd660
imagekit: Added ImageKit backend 2023-11-24 18:18:01 +00:00
Nick Craig-Wood
4eed3ae99a s3: ensure we can set the upload cutoff that we use for the Rclone provider
This is a workaround to make the new multipart upload integration
tests pass.
2023-11-24 16:32:06 +00:00
Nick Craig-Wood
8f47b6746d b2: fix streaming chunked files an exact multiple of chunk size
Before this change, streaming files an exact multiple of the chunk
size would cause rclone to attempt to stream a 0 sized chunk which was
rejected by the b2 servers.

This bug was noticed by the new integration tests for chunked streaming.
2023-11-24 14:32:01 +00:00
Nick Craig-Wood
cc2a4c2e20 fstest: factor chunked streaming tests from b2 and use in all backends 2023-11-24 12:58:40 +00:00
Nick Craig-Wood
fabeb8e44e b2: fix server side chunked copy when file size was exactly --b2-copy-cutoff
Before this change the b2 servers would complain as this was only a
single part transfer.

This was noticed by the new integration tests for server side chunked copy.
2023-11-24 12:37:11 +00:00
Nick Craig-Wood
c27977d4d5 fstest: factor chunked copy tests from b2 and use them in s3 and oos 2023-11-24 12:37:11 +00:00
Nick Craig-Wood
ba11040d6b s3: detect looping when using gcs and versions
Apparently gcs doesn't return an S3 compatible result when using
versions.

In particular it doesn't return a NextKeyMarker - this means rclone
loops and fetches the same page over and over again.

This patch detects the problem and stops the infinite retries but it
doesn't fix the underlying problem.

See: https://forum.rclone.org/t/list-s3-versions-files-looping-bug/42974
See: https://issuetracker.google.com/u/0/issues/312292516
2023-11-23 09:50:28 +00:00
Nick Craig-Wood
668711e432 dropbox: fix missing encoding for rclone purge again
This commit fixed the problem but made the integration tests fail.

33376bf399 dropbox: fix missing encoding for rclone purge

This fixes the problem properly by making sure we send the encoded or
non encoded root to the right places.
2023-11-21 12:23:28 +00:00
Nick Craig-Wood
e8fcde8de1 fs: add ChunkWriterDoesntSeek feature flag and set it for b2 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
bb88b8499b box: fix performance problem reading metadata for single files
Before this change the backend used to list the directory to find the
metadata for a single file. For lots of files in a directory this
caused a serious performance problem.

This change uses the preflight check to check for a file's existence
and find its ID.

See: https://forum.rclone.org/t/psa-box-com-has-serious-performance-issues-in-directories-with-thousands-of-files/41128/10
See: https://forum.box.com/t/is-there-an-api-to-find-a-file-by-leaf-name-given-a-folder-id/997/
See: https://developer.box.com/guides/uploads/check/
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4ac5cb07ca gcs: fix 400 Bad request errors when using multi-thread copy
Before this change, on every Open, we added the userProject parameter
to the URL in the object.

This meant it grew and grew until Google returned Error 400 (Bad
Request) errors when the URL became too long.

This fixes the problem by adding the userProject parameter once.

See: https://forum.rclone.org/t/endlessly-repeating-userproject-parameter-in-get-to-google-storage-context-canceled-got-http-response-code-400/42652
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4a3e9bbabf http: implement set backend command to update running backend
See: https://forum.rclone.org/t/updating-the-url-of-http-remote-not-applied-on-mounts/42763
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
33376bf399 dropbox: fix missing encoding for rclone purge
This was causing directories with encodable characters in their names
not to be found on purge.

See: https://forum.rclone.org/t/purge-command-does-not-work-on-directories-with-files/42793
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
64ec5709fe drive: fix integration tests by enabling metadata support from the context
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.

This patch changes it so the Metadata support is enabled dynamically
if it is set in the context.

This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
47ca0c326e fs: implement --metadata-mapper to transform metadata with a user-supplied program 2023-11-18 17:49:35 +00:00
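
A hedged usage sketch for --metadata-mapper (remotes and mapper path are illustrative; the mapper program is expected to read and write metadata as JSON on stdin/stdout):

    # transform metadata via an external program while copying with metadata enabled
    rclone copy src: dst: -M --metadata-mapper /usr/local/bin/mapper.py
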
Nick Craig-Wood
54196f34e3 drive: fix error updating created time metadata on existing object
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.

This change skips btime metadata if we are updating an existing
object but allows it otherwise.
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
9fdf3d548a drive: add read/write metadata support
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work controlled by flags
    - `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
94a5de58c8 linkbox: pre-merge fixes
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism
2023-11-18 17:14:45 +00:00
viktor
a466ababd0 backend: add Linkbox backend
Add backend for linkbox.io with read and write capabilities

fixes #6960 #6629
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
ddaf01ece9 azurefiles: finish docs and implementation and add optional interfaces
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size`
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
    - About
    - Copy
    - DirMove
    - Move
    - OpenWriterAt
    - PutStream
- finish documentation
- disable build on plan9 and js

Fixes #365
Fixes #7378
2023-11-18 16:48:23 +00:00
karan
b5301e03a6 Implement Azure Files backend
Co-authored-by: moongdal <moongdal@tutanota.com>
2023-11-18 16:42:13 +00:00
Oksana Zhykina
6b60e09ff2 quatrix: overwrite files on conflict during server-side move 2023-11-16 17:14:00 +00:00
Oksana Zhykina
41a52f50df quatrix: add partial upload support 2023-11-16 17:14:00 +00:00
Nick Craig-Wood
93f35c915a serve s3: pre-merge tweaks
- Changes
    - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
    - Enable `Content-MD5` integrity checks
    - Remove locking after code audit
- Documentation
    - Factor out documentation into separate file
    - Add Quickstart to docs
    - Add Bugs section to docs
    - Add experimental tag to docs
    - Add rclone provider to s3 backend docs
- Fixes
    - Correct quirks in s3 backend
    - Change fmt.Printlns into fs.Logs
    - Make metadata storage per backend not global
    - Log on startup if anonymous access is enabled
- Coding style fixes
    - rename fs to vfs to save confusion with the rest of rclone code
    - rename db to b for *s3Backend

Fixes #7062
2023-11-16 16:59:56 +00:00
Mikubill
23abac2a59 serve s3: let rclone act as an S3 compatible server 2023-11-16 16:59:55 +00:00
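
A hedged quickstart for serve s3 as described in the two commits above (the access key/secret key format of --auth-key is an assumption; see the serve s3 docs):

    # serve remote:path over the S3 protocol with a single access key/secret pair
    rclone serve s3 remote:path --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY
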
Nick Craig-Wood
d3ba32c43e s3: add --s3-disable-multipart-uploads flag 2023-11-16 16:59:55 +00:00
Tayo-pasedaRJ
0548e61910
hdfs: added support for list of namenodes in hdfs remote config
Users can now input a comma separated list of namenodes when writing
config for hdfs remotes.

This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.

This was available before but wasn't documented and didn't use the
correct rclone interfaces.
2023-11-13 15:55:52 +00:00
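
A hedged config sketch for the comma-separated namenode list described above (hostnames and port are illustrative):

    # create an hdfs remote that can fail over between two namenodes
    rclone config create myhdfs hdfs namenode "namenode1:8020,namenode2:8020"
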
Adithya Kumar
ad83ff769b
webdav: added an rclone vendor to work with rclone serve webdav
Fixes #7160
2023-11-05 12:37:25 +00:00
Nick Craig-Wood
bf21db0ac4 b2: fix multi-thread upload with copyto going to wrong name
See: https://forum.rclone.org/t/errors-and-failure-with-big-file-upload-to-b2/42522/
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
adfb1f7c7d b2: fix error handler to remove confusing DEBUG messages
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:

    Couldn't decode error response: EOF

This is confusing as it is expected in normal operations and isn't an
error.

This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.

This also upgrades failure to read the body or failure to decode the
JSON to ERROR messages as now we are certain that we should have
something to read and decode.
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
6092fe2aaa s3: emit a debug message if anonymous credentials are in use
This can indicate the user is expecting `env_auth=true` to be the
default so we say that in the debug message.

See: https://forum.rclone.org/t/rclone-with-amazon-s3-access-point/42411
2023-10-27 16:00:47 +01:00
Nick Craig-Wood
750ed556a5 build: fix new lint errors with golangci-lint v1.55.0 2023-10-20 18:53:30 +01:00
Nick Craig-Wood
5b0f9dc4e3 local: fix copying from Windows Volume Shadows
For some files the Windows Volume Shadow Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.

Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.

This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we now can copy it
successfully.

See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
2023-10-19 16:38:10 +01:00
Ivan Yanitra
0ee6d0b4bf azureblob: add support for cold tier 2023-10-18 17:54:25 +01:00
Keigo Imai
4ac4597afb drive: add a note that --drive-scope accepts comma-separated list of scopes 2023-10-18 17:54:08 +01:00
Nick Craig-Wood
c7a2719fac sftp: implement --sftp-copy-is-hardlink to server side copy as hardlink
If the server does not support hardlinks then it falls back to normal
copy.

See: https://forum.rclone.org/t/sftp-remote-server-side-copy/41867
2023-10-16 12:08:22 +01:00
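
A hedged usage sketch for --sftp-copy-is-hardlink (paths are illustrative; the copy must stay within the same sftp remote to be server side):

    # "copy" server side by creating a hardlink, falling back to a normal copy if unsupported
    rclone copyto sftp:data/file.bin sftp:backup/file.bin --sftp-copy-is-hardlink
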
Nick Craig-Wood
5fa68e9ca5 b2: fix chunked streaming uploads
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.

After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing the first and the second parts up which
was causing corrupted uploads.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

Fixes #7367
2023-10-13 15:46:36 +01:00
Nick Craig-Wood
d8d76ff647 b2: fix server side copies greater than 4GB
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
2023-10-12 11:19:56 +01:00
Nick Craig-Wood
f56ea2bee2 s3: fix no error being returned when creating a bucket we don't own
Before this change, if you tried to create a bucket that already
existed but was owned by someone else, rclone did not return an error.

This now will return an error on providers that return the
AlreadyOwnedByYou error code or no error on bucket creation of an
existing bucket owned by you.

This introduces a new provider quirk and this has been set or cleared
for as many providers as can be tested. This can be overridden by the
--s3-use-already-exists flag.

Fixes #7351
2023-10-09 18:15:02 +01:00
Nick Craig-Wood
d6ba60c04d oracleobjectstorage: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:13:42 +01:00
Vitor Gomes
37eaa3682a s3: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:12:56 +01:00
Nick Craig-Wood
c5f6fc3283 drive: add --drive-show-all-gdocs to allow unexportable gdocs to be server side copied
Before this change, attempting to server side copy a google form would
give this error

    No export formats found for "application/vnd.google-apps.form"

Adding this flag allows the form to be server side copied but not
downloaded.

Fixes #6302
2023-10-09 16:53:03 +01:00
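
A hedged usage sketch for the --drive-show-all-gdocs flag described above (folder names are illustrative):

    # allow otherwise unexportable Google Docs types (eg Forms) to be server side copied
    rclone copy drive:Forms drive:FormsBackup --drive-show-all-gdocs
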
Nick Craig-Wood
3ab9077820 googlephotos: implement batcher for uploads - fixes #6920 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
b94806a143 dropbox: factor batcher into lib/batcher 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
b8591b230d onedrive: implement ListR method which gives --fast-list support
This implements ListR for onedrive. The API only allows doing this at
the root so it is inefficient to use it anywhere other than the root.

Fixes #7317
2023-10-02 11:12:08 +01:00
Nick Craig-Wood
ecb09badba onedrive: factor API types back into correct file 2023-10-02 10:48:06 +01:00
Nick Craig-Wood
cb43e86d16 b2: reduce default --b2-upload-concurrency to 4 to reduce memory usage
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`

However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.

The default value for this was accidentally set quite high at 16 which
means by default rclone could use up to 6.4GB of memory!

The new default sets a more reasonable (but still high) max memory of 1.6GB.
2023-10-01 12:30:26 +01:00
Nick Craig-Wood
5c48102ede b2: fix locking window when getting multipart upload URL
Before this change, the lock was held while the upload URL was being
fetched from the server.

This meant that any other threads were blocked from getting upload
URLs unnecessarily.

It also increased the potential for deadlock.
2023-10-01 12:30:26 +01:00
albertony
19ad39fa1c jottacloud: add support for reading and writing metadata
Most useful is the addition of the file created timestamp, but also a timestamp for
when the file was uploaded.

Currently supporting a rather minimalistic set of metadata items, see PR #6359 for
some thoughts about possible extensions.
2023-09-29 13:19:57 +02:00
Nick Craig-Wood
b296f37801 s3: fix slice bounds out of range error when listing
In this commit:

5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers

We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:

    panic: runtime error: slice bounds out of range [56:55]

The fix is to do the modification of the remote after removing the
prefix.

See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
2023-09-25 11:52:23 +01:00
rinsuki
8fd66daab6 drive: add support for SHA-1 and SHA-256 checksums 2023-09-24 17:38:30 +01:00
Nick Craig-Wood
9e80d48b03 s3: add docs on how to add a new provider 2023-09-23 14:36:48 +01:00
Nick Craig-Wood
eb3082a1eb s3: add Linode provider 2023-09-23 14:34:00 +01:00
Nick Craig-Wood
77ea22ac5b s3: Factor providers list out and auto generate textual version 2023-09-23 14:34:00 +01:00
Dimitri Papadopoulos Orfanos
3d473eb54e
docs: fix typos found by codespell in docs and code comments 2023-09-23 12:20:01 +01:00
Nick Craig-Wood
50b4a2398e onedrive: fix the configurator to allow /teams/ID in the config
See: https://forum.rclone.org/t/sharepoint-to-google/41548/
2023-09-22 15:54:22 +01:00
Nick Craig-Wood
6072d314e1 b2: fix multipart upload: corrupted on transfer: sizes differ XXX vs 0
Before this change the b2 backend wasn't writing the metadata to the
object properly after a multipart upload.

The symptom of this was that sometimes it would give the error:

    corrupted on transfer: sizes differ XXX vs 0

This was fixed by returning the metadata in the chunk writer and setting it in Update.

See: https://forum.rclone.org/t/multipart-upload-to-b2-sometimes-failing-with-corrupted-on-transfer-sizes-differ/41829
2023-09-18 20:41:31 +01:00
Nick Craig-Wood
9277ca1e54 b2: implement --b2-lifecycle to control lifecycle when creating buckets 2023-09-16 17:01:43 +01:00
Nick Craig-Wood
d6722607cb b2: implement "rclone backend lifecycle" to read and set bucket lifecycles 2023-09-16 16:44:28 +01:00
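
Hedged usage sketches for the two lifecycle features above (bucket name is illustrative and the meaning of the --b2-lifecycle value as days is an assumption; see the b2 backend docs):

    # apply a lifecycle rule when creating a bucket
    rclone mkdir b2:bucket --b2-lifecycle 30

    # read (and, with additional options, set) the lifecycle rules of a bucket
    rclone backend lifecycle b2:bucket
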
Nick Craig-Wood
4ef30db209 b2: fix listing all buckets when not needed
Before this change the b2 backend listed all the buckets to turn a
single bucket name into an ID.

However, on July 26, 2018 a parameter was added to the list buckets
API which makes listing all the buckets unnecessary.

This code sets the bucketName parameter so that only the results for
the desired bucket are returned.
2023-09-16 16:04:50 +01:00
Nick Craig-Wood
55c12c9a2d azureblob: fix "fatal error: concurrent map writes"
Before this change, the metadata map could be accessed from multiple
goroutines at once, sometimes causing this error.

This fix adds a global mutex for adjusting the metadata map to make
all accesses safe.

See: https://forum.rclone.org/t/azure-blob-storage-with-vfs-cache-concurrent-map-writes-exception/41686
2023-09-16 11:33:03 +01:00
David Sze
3e63f2c249 box: add more logging for polling 2023-09-15 10:24:43 +01:00
David Sze
5118ab9609 box: filter more EventIDs when polling 2023-09-15 10:24:43 +01:00
Kaloyan Raev
af260921c0 storj: update storj.io/uplink to v1.12.0
The improved upload logic is active by default in uplink v1.12.0, so the
`testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)` is not
required anymore.

See https://github.com/rclone/rclone/pull/7198
2023-09-14 14:01:35 +01:00
Nick Craig-Wood
a5a61f4874 protondrive: make cached keys rclone style and not show with rclone config redacted 2023-09-11 15:57:08 +01:00
Nick Craig-Wood
d4d530bd8e drive: add --drive-fast-list-bug-fix to control ListR bug workaround
See: https://forum.rclone.org/t/how-to-list-empty-directories-recursively/40995/12
2023-09-09 17:46:03 +01:00
Nick Craig-Wood
f4b011e4e4 s3: add rclone backend restore-status command
This command shows the restore status of objects being retrieved from GLACIER.

See: https://forum.rclone.org/t/aws-s3-glacier-monitor-restore-status-command-for-glacier-restoring-process/41373/7
2023-09-09 17:44:36 +01:00
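
A hedged usage sketch for the restore-status command described above (bucket and path are illustrative):

    # show the GLACIER restore status of objects under a path
    rclone backend restore-status s3:bucket/path
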
Chun-Hung Tseng
ed755bf04f
protondrive: implement two-password mode (#7279) 2023-09-08 22:54:46 +02:00
Nick Craig-Wood
7453b7d5f3 hdfs: fix retry "replication in progress" errors when uploading
Before this change uploaded files could return the error "replication
in progress".

This error is harmless though and means the Close should be retried
which is what this patch does.
2023-09-08 15:35:50 +01:00
Nick Craig-Wood
c9350149d8 hdfs: fix uploading to the wrong object on Update with overridden remote name
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the hdfs backend and this patch fixes the
problem.
2023-09-08 15:35:50 +01:00
Oksana
628ff8e524
quatrix: add backend to support Quatrix
Co-authored-by: Volodymyr Kit <v.kit@maytech.net>
2023-09-08 14:31:29 +01:00
Chun-Hung Tseng
578c75cb1e
protondrive: fix signature verification logic by accounting for legacy signing scheme (#7278) 2023-09-08 16:00:34 +08:00
zjx20
f5ee16e201 local: make rmdir return an error if the path is not a dir 2023-09-07 14:30:08 +01:00
Nick Craig-Wood
2bcbed30bd s3: implement backend set command to update running config 2023-09-07 12:26:48 +01:00
Chun-Hung Tseng
5026a9171d
protondrive: improves 2fa and draft error messages (#7280) 2023-09-07 01:50:28 +08:00
Nick Craig-Wood
b750c50bfd zoho: remove Range requests workarounds to fix integration tests
Zoho are now responding to Range requests properly. The remnants of
our old workaround was breaking the integration tests so this removes
them.
2023-09-05 18:21:15 +01:00
Nick Craig-Wood
db37b3ef9e opendrive: fix List on a just deleted and remade directory
Sometimes opendrive reports "403 Folder is already deleted" on
directories which should exist.

This might be a bug in opendrive or in rclone; however, we work around
it here sufficiently to get the tests passing.
2023-09-05 17:59:03 +01:00
Nick Craig-Wood
3ea1c5c4d2 compress: fix ChangeNotify
ChangeNotify has been broken on the compress backend for a long time!

Before this change it was wrapping the file names received rather than
unwrapping them to discover the original names.

It is likely ChangeNotify was working adequately though for users as
the VFS just uses the directories rather than the file names.
2023-09-05 17:22:36 +01:00
Nick Craig-Wood
bd23ea028e azureblob: fix purging with directory markers 2023-09-05 17:07:44 +01:00
Nick Craig-Wood
4fbe0652c9 compress: fix integration tests by adding missing OpenChunkWriter exclude 2023-09-04 19:26:14 +01:00
Nick Craig-Wood
47665dad07 cache: fix integration tests by adding missing OpenChunkWriter exclude 2023-09-04 19:26:14 +01:00
Nick Craig-Wood
6afd7088d3 box: add --box-impersonate to impersonate a user ID - fixes #7267 2023-09-04 12:09:54 +01:00
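
A hedged usage sketch for --box-impersonate (the user ID is illustrative):

    # act on behalf of another Box user by their user ID
    rclone lsd box: --box-impersonate 123456
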
Nick Craig-Wood
b33140ddeb union: add :writeback to act as a simple cache
This adds a :writeback tag to upstreams. If set on a single upstream
then it writes back objects not found into that upstream.

Fixes #6934
2023-09-04 12:03:26 +01:00
Nick Craig-Wood
b1c0ae5e7d azureblob: fix creation of directory markers
This also fixes the integration tests which is why we didn't notice this before!
2023-09-03 18:09:31 +01:00
Nick Craig-Wood
be17f1523a b2: fix ChunkWriter size return 2023-09-03 13:53:11 +01:00
Nick Craig-Wood
bb58040d9c s3: fix multipart streaming uploads of 0 length files 2023-09-03 12:37:20 +01:00
Nick Craig-Wood
2db0e23584 backends: change OpenChunkWriter interface to allow backend concurrency override
Before this change the concurrency used for an upload was rather
inconsistent.

- if size below `--backend-upload-cutoff` (default 200M) do single part upload.

- if size below `--multi-thread-cutoff` (default 256M) or using streaming
  uploads (eg `rclone rcat`) do multipart upload using
  `--backend-upload-concurrency` to set the concurrency used by the uploader.

- otherwise do multipart upload using `--multi-thread-streams` to set the
  concurrency.

This change makes `--backend-upload-concurrency` the default for the
concurrency used. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.

This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.

See: #7056
2023-09-03 11:47:05 +01:00
Alishan Ladhani
7821cb884d
b2: fix rclone link when object path contains special characters
Before this change, b2 would return an error when opening a link
generated by `rclone link`. The following error occurs when the object
path contains an ampersand that is not percent encoded:

{
  "code": "bad_request",
  "message": "Bad character in percent-encoded string: 38 (0x26)",
  "status": 400
}
2023-09-02 18:31:14 +01:00
NoLooseEnds
7487d34c33
jotta: added Telia Sky whitelabel (Norway)
Duplicated the Telia Cloud (Sweden) provider, changed the URLs, and added
teliase and teliano (instead of just telia) to differentiate.

See: #5153 #5016
2023-09-01 14:55:32 +01:00
Bjørn Smith
21008b4cd5
docs: sftp: add note regarding format of server_command
Elaborate exactly how server_command should be used in the configuration file
2023-09-01 10:57:42 +01:00
Nick Craig-Wood
d12a92eac9 box: fix unhelpful decoding of error messages into decimal numbers
Before this change the box backend could make errors like

    Error "not_found" (404): On-Behalf-Of User not found ([123 34 105 110
    118 97 108 105 100 95 117 115 101 114 95 105 100 34 58 123 34 105 100
    34 58 34 48 48 48 48 48 48 48 48 48 48 48 34 125 125])

This fixes it to produce this instead

    Error "not_found" (404): On-Behalf-Of User not found ({"invalid_user_id":{"id":"00000000000"}})
2023-08-31 23:03:27 +01:00
David Sze
a603efeaf4 box: add polling support 2023-08-30 09:25:00 +01:00
Nick Craig-Wood
a83fec756b build: fix lint errors when re-enabling revive var-naming 2023-08-29 13:03:49 +01:00
Nick Craig-Wood
e953598987 build: fix lint errors when re-enabling revive exported & package-comments 2023-08-29 13:03:13 +01:00
Nick Craig-Wood
b95bda1e92 s3: fix purging of root directory with --s3-directory-markers - fixes #7247 2023-08-25 17:39:16 +01:00
Nick Craig-Wood
f992742404 s3: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
f2467d07aa oracleobjectstorage: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
d69cdb79f7 b2: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Manoj Ghosh
25703ad20e oracleobjectstorage: implement OpenChunkWriter and multi-thread uploads #7056 2023-08-24 12:39:28 +01:00
Nick Craig-Wood
ab803d1278 b2: implement OpenChunkWriter and multi-thread uploads #7056
This implements the OpenChunkWriter interface for b2 which
enables multi-thread uploads.

This makes the memory controls of the b2 backend inoperative; they are
replaced with the global ones.

    --b2-memory-pool-flush-time
    --b2-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056
This implements the OpenChunkWriter interface for azureblob which
enables multi-thread uploads.

This makes the memory controls of the azureblob backend inoperative; they are
replaced with the global ones.

    --azureblob-memory-pool-flush-time
    --azureblob-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
4c76fac594 s3: factor generic multipart upload into lib/multipart #7056
This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones.

    --s3-memory-pool-flush-time
    --s3-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.

Fixes #7141
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
0d0bcdac31 fs: add context.Ctx to ChunkWriter methods
WriteChunk in particular needs a different context from that which
OpenChunkWriter was used with so add it to all the methods.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
e6fde67491 s3: fix retry logic, logging and error reporting for chunk upload
- move retries into correct place into lowest level functions
- fix logging and error reporting
2023-08-24 12:39:27 +01:00
Roberto Ricci
123a030441 smb: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
28ceb323ee sftp: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
c624dd5c3a seafile: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
a56c11753a local: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
4341d472aa filefabric: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
b6e7148daf box: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
45458f2cdb union: use atomic types 2023-08-22 12:52:13 +01:00
Nick Craig-Wood
de147b6e54 sftp: fix --sftp-ssh looking for ssh agent - fixes #7235
Before this change if pass was empty this would attempt to connect to
the ssh agent even when using --sftp-ssh.

This patch prevents that.
2023-08-21 17:43:02 +01:00
Nick Craig-Wood
11de137660 sftp: fix spurious warning when using --sftp-ssh
When using --sftp-ssh we were warning about user/host/port even when
they were at their defaults.

See: #7235
2023-08-21 17:43:02 +01:00
Nick Craig-Wood
c979cde002 ftp: fix 425 "TLS session of data connection not resumed" errors
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-use the same TLS connection as the control
connection. This is a good thing for security.

The message "TLS session of data connection not resumed" means that it
was not done.

The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections so the resumed TLS data
connection could come from any of the control connections.

This patch makes each TLS connection have its own session cache which
should fix the problem.

This also reverts the ftp library to the upstream version which now
contains all of our patches.

Fixes #7234
2023-08-18 14:44:13 +01:00
Vitor Gomes
6dd736fbdc s3: refactor MultipartUpload to use OpenChunkWriter and ChunkWriter #7056 2023-08-12 17:55:01 +01:00
Vitor Gomes
f36ca0cd25 features: add new interfaces OpenChunkWriter and ChunkWriter #7056 2023-08-12 17:55:01 +01:00
alexia
20c9e0cab6 fichier: fix error code parsing
This fixes the following error I encountered:

```
2023/08/09 16:18:49 DEBUG : failed parsing fichier error: strconv.Atoi: parsing "#374": invalid syntax
2023/08/09 16:18:49 DEBUG : pacer: low level retry 1/10 (error HTTP error 403 (403 Forbidden) returned body: "{\"status\":\"KO\",\"message\":\"Flood detected: IP Locked #374\"}")
```
2023-08-11 00:47:01 +09:00
Chun-Hung Tseng
3c58e0efe0
protondrive: update the information regarding the advanced setting enable_caching (#7202) 2023-08-09 16:01:19 +02:00
Nihaal Sangha
40de89df73
drive: fix typo in docs 2023-08-05 12:51:51 +01:00
Manoj Ghosh
27f5297e8d oracleobjectstorage: Use rclone's rate limiter in multipart transfers 2023-08-05 12:09:17 +09:00
Kaloyan Raev
d63fcc6e44 storj: performance improvement for large file uploads
storj.io/uplink v1.11.0 comes with improved logic for uploading large
files, where file segments are uploaded concurrently instead of
serially. This makes it possible to fully utilize the network
connection during the entire upload process.

This change enables the new upload logic.
2023-08-04 17:40:03 +09:00
Julian Lepinski
9f96c0d4ea
swift: fix HEADing 0-length objects when --swift-no-large-objects set
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.

Swift returns zero length for dynamic large object files when they're
included in a file listing, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.

When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.

NB: It is worth noting that this will cause rclone to incorrectly
report zero length for any dynamic large objects encountered with the
--swift-no-large-objects flag set.
2023-08-03 08:38:39 +01:00