Commit Graph

8114 Commits

Author SHA1 Message Date
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team, who regard sscanf as a mistake.

This change only uses sscan for integers and uses the correct routine
for everything else.

This also implements parsing of time.Duration.

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
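
A hedged illustration of the approach described above, not rclone's actual configstruct code: booleans go through strconv.ParseBool and durations through time.ParseDuration, with fmt-style scanning kept only for integers. The setFromString helper and its package are invented for this sketch.

    package configstructsketch

    import (
        "fmt"
        "strconv"
        "time"
    )

    // setFromString parses a config string into a typed value. Only integers go
    // through fmt.Sscanln; bools and durations use their dedicated parsers, which
    // return an error on invalid input instead of misparsing it.
    func setFromString(out any, s string) error {
        switch v := out.(type) {
        case *bool:
            b, err := strconv.ParseBool(s) // errors on e.g. "maybe"
            if err != nil {
                return err
            }
            *v = b
        case *time.Duration:
            d, err := time.ParseDuration(s) // accepts e.g. "1h30m"
            if err != nil {
                return err
            }
            *v = d
        case *int:
            _, err := fmt.Sscanln(s, v) // integers scan fine
            return err
        default:
            return fmt.Errorf("unsupported type %T", out)
        }
        return nil
    }
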
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
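
A minimal sketch of the pattern described, assuming the pool holds values implementing io.Closer (this is not the sftp backend's actual code): Close errors are deliberately discarded while draining.

    package sftpsketch

    import "io"

    // drainPool closes every connection in the pool, ignoring the errors:
    // once the pool is being thrown away they carry no useful information.
    func drainPool(conns []io.Closer) {
        for _, c := range conns {
            _ = c.Close() // error intentionally ignored while draining
        }
    }
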
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime, so it is a waste of memory to store it as a 24-byte
time.Time in long-lived data structures. Even though the golang sftp package
uses time.Time as its external interface, we can re-encode the value back to
the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
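
A rough sketch of the memory saving described, with a hypothetical fileInfo record standing in for the backend's long-lived structures: the modification time is stored as uint32 unix seconds and converted back to time.Time only at the interface boundary.

    package sftpsketch

    import "time"

    // fileInfo is a hypothetical long-lived record holding mtime as uint32
    // unix seconds (the SFTP wire format) rather than a 24-byte time.Time.
    type fileInfo struct {
        modTime uint32
    }

    // setModTime re-encodes a time.Time from the sftp package into uint32 unix time.
    func (f *fileInfo) setModTime(t time.Time) {
        f.modTime = uint32(t.Unix())
    }

    // ModTime converts the stored seconds back to time.Time for callers.
    func (f *fileInfo) ModTime() time.Time {
        return time.Unix(int64(f.modTime), 0)
    }
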
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
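
The order of operations described above, sketched with invented types and method stubs (readMetaData is named in the commit; everything else here is hypothetical): make sure the object ID is populated before renaming.

    package pikpaksketch

    import "context"

    // Object is a stand-in for the backend's object type; only the id field matters here.
    type Object struct{ id string }

    func (o *Object) readMetaData(ctx context.Context) error            { return nil } // stub: would fill o.id
    func (o *Object) rename(ctx context.Context, newName string) error  { return nil } // stub

    // renameObject preemptively fetches metadata when the ID is empty, so the
    // rename call never runs against an object with no ID.
    func renameObject(ctx context.Context, o *Object, newName string) error {
        if o.id == "" {
            if err := o.readMetaData(ctx); err != nil {
                return err
            }
        }
        return o.rename(ctx, newName)
    }
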
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page contains at least this percentage of the requested limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 11:14:26 +01:00
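
How the two settings might combine into a single listing decision, sketched as a hypothetical helper rather than the backend's code, where limit is the requested page size and got is the number of items the server actually returned:

    package swiftsketch

    // shouldFetchNextPage decides whether to request another page of a
    // container listing despite receiving fewer items than asked for.
    func shouldFetchNextPage(got, limit int, fetchUntilEmptyPage bool, partialPageFetchThreshold int) bool {
        if got == 0 {
            return false // an empty page really does mean the end
        }
        if fetchUntilEmptyPage {
            return true // keep fetching until a page comes back empty
        }
        if partialPageFetchThreshold > 0 {
            // fetch again only if this page holds at least the threshold
            // percentage of the limit, e.g. 90 means "90% full or more"
            return got*100 >= limit*partialPageFetchThreshold
        }
        return got >= limit // default: a short page means the listing is complete
    }

With the threshold form, a genuinely short final page stops the listing immediately, whereas `fetch_until_empty_page` always costs one extra request for the trailing empty page.
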
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix, when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed, rclone would call
o.setMetaData() with a nil info, which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
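
The control flow of the fix, sketched with stub types (fetchAndUpdateMetadata and setMetaData are named in the commit; the types and signatures here are assumptions for illustration):

    package onedrivesketch

    import "context"

    type Metadata struct{}
    type Object struct{}

    func (o *Object) fetchAndUpdateMetadata(ctx context.Context) (*Metadata, error) { return &Metadata{}, nil } // stub
    func (o *Object) setMetaData(info *Metadata) error                              { return nil }              // stub

    // finishUpload returns the metadata error explicitly rather than carrying on;
    // before the fix execution continued and setMetaData was handed a nil info.
    func (o *Object) finishUpload(ctx context.Context) error {
        info, err := o.fetchAndUpdateMetadata(ctx)
        if err != nil {
            return err
        }
        return o.setMetaData(info)
    }
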
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error, but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
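
The general pattern behind this fix, as an illustrative sketch rather than the vfs code itself: if a mutex held by the caller is temporarily released around a call, re-acquire it in a defer so that a panic inside the call unwinds with the lock held and the caller's own deferred Unlock doesn't hit an unlocked mutex.

    package vfssketch

    import "sync"

    // callWithoutLock assumes mu is held on entry. It releases mu around fn and
    // re-locks it even if fn panics, so the original panic propagates intact
    // instead of being masked by "sync: unlock of unlocked mutex".
    func callWithoutLock(mu *sync.Mutex, fn func()) {
        mu.Unlock()
        defer mu.Lock() // runs during panic unwinding too
        fn()
    }
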
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
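
A generic illustration of waiting for an asynchronous server-side task before moving on (the getStatus callback and everything else here are invented for the sketch, not the backend's API):

    package pikpaksketch

    import (
        "context"
        "time"
    )

    // waitTask polls getStatus until the task reports completion, an error
    // occurs, or the context is cancelled.
    func waitTask(ctx context.Context, getStatus func(context.Context) (bool, error)) error {
        for {
            done, err := getStatus(ctx)
            if err != nil {
                return err
            }
            if done {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(time.Second): // poll again after a short delay
            }
        }
    }
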
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo does something with this keyword, though
this isn't documented in the frontmatter documentation.

The fix was to remove the `groups:` keyword from the frontmatter since
it was never used by Hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
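
The existence check described above boils down to treating both status codes as "not found"; a minimal sketch, not the backend's actual code:

    package s3sketch

    import "net/http"

    // objectNotFound reports whether a HEAD response should be interpreted as
    // "object does not exist". A delete marker addressed by versionId returns
    // 405 Method Not Allowed, so it is treated the same as 404 Not Found.
    func objectNotFound(statusCode int) bool {
        switch statusCode {
        case http.StatusNotFound, http.StatusMethodNotAllowed: // 404, 405
            return true
        }
        return false
    }
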
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch was going wrong as it thought it had been passed an object.

This should not happen normally, but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed, and every now and again we got this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed,
which was rate limited by Zoho.

However Zoho can't have duplicate files anyway, so this fix just
removes that check and the PutUnchecked method, which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
Nick Craig-Wood
d068e0b1a9 operations: fix hashing problem in integration tests
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like

    Failed to make new multi hasher: requested set 000001ff contains unknown hash types

This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.

This was added in this commit

d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails

to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supports. It brought
this time down to 20s.

This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine) but this is run only on the integration
test machine and not on all the CI.
2024-06-12 11:11:54 +01:00
Nick Craig-Wood
a341065b8d Add Bill Fraser to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
0c29a1fe31 Add Florian Klink to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
1a40300b5f Add Michał Dzienisiewicz to contributors 2024-06-12 11:11:54 +01:00