Version v1.67.0

Nick Craig-Wood 2024-06-14 16:04:51 +01:00
parent 8470bdf810
commit 93e8a976ef
85 changed files with 46473 additions and 36551 deletions

24678 MANUAL.html generated

File diff suppressed because it is too large

1458 MANUAL.md generated

File diff suppressed because it is too large

23815 MANUAL.txt generated

File diff suppressed because it is too large

View File

@ -239,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

View File

@ -3776,7 +3776,7 @@ file named "foo ' \.txt":
The result is a JSON array of matches, for example:
[
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
@ -3792,7 +3792,7 @@ The result is a JSON array of matches, for example:
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
]`,
}}
// Command the backend to run a named command

View File

@ -9,8 +9,7 @@
"description": "rclone - rsync for cloud storage: google drive, s3, gcs, azure, dropbox, box...",
"canonifyurls": false,
"disableKinds": [
"taxonomy",
"taxonomyTerm"
"taxonomy"
],
"ignoreFiles": [
"~$",

View File

@ -118,7 +118,7 @@ Here are the Advanced options specific to alias (Alias for an existing remote).
#### --alias-description
Description of the remote
Description of the remote.
Properties:

View File

@ -851,7 +851,7 @@ Properties:
#### --azureblob-description
Description of the remote
Description of the remote.
Properties:

View File

@ -689,7 +689,7 @@ Properties:
#### --azurefiles-description
Description of the remote
Description of the remote.
Properties:

View File

@ -647,7 +647,7 @@ Properties:
#### --b2-description
Description of the remote
Description of the remote.
Properties:

View File

@ -475,7 +475,7 @@ Properties:
#### --box-description
Description of the remote
Description of the remote.
Properties:

View File

@ -666,7 +666,7 @@ Properties:
#### --cache-description
Description of the remote
Description of the remote.
Properties:

View File

@ -5,6 +5,159 @@ description: "Rclone Changelog"
# Changelog
## v1.67.0 - 2024-06-14
[See commits](https://github.com/rclone/rclone/compare/v1.66.0...v1.67.0)
* New backends
* [uloz.to](/ulozto/) (iotmaestro)
* New S3 providers
* [Magalu Object Storage](/s3/#magalu) (Bruno Fernandes)
* New commands
* [gitannex](/commands/rclone_gitannex/): Enables git-annex to store and retrieve content from an rclone remote (Dan McArdle)
* New Features
* accounting: Add deleted files total size to status summary line (Kyle Reynolds)
* build
* Fix `CVE-2023-45288` by upgrading `golang.org/x/net` (Nick Craig-Wood)
* Fix `CVE-2024-35255` by upgrading `github.com/Azure/azure-sdk-for-go/sdk/azidentity` to 1.6.0 (dependabot)
* Convert source files with CRLF to LF (albertony)
* Update all dependencies (Nick Craig-Wood)
* doc updates (albertony, Alex Garel, Dave Nicolson, Dominik Joe Pantůček, Eric Wolf, Erisa A, Evan Harris, Evan McBeth, Gachoud Philippe, hidewrong, jakzoe, jumbi77, kapitainsky, Kyle Reynolds, Lewis Hook, Nick Craig-Wood, overallteach, pawsey-kbuckley, Pieter van Oostrum, psychopatt, racerole, static-moonlight, Warrentheo, yudrywet, yumeiyin)
* ncdu: Do not quit on Esc to aid usability (Katia Esposito)
* rcserver: Set `ModTime` for dirs and files served by `--rc-serve` (Nikita Shoshin)
* Bug Fixes
* bisync: Add integration tests against all backends and fix many many problems (nielash)
* config: Fix default value for `description` (Nick Craig-Wood)
* copy: Fix `nil` pointer dereference when corrupted on transfer with `nil` dst (nielash)
* fs
* Improve JSON Unmarshalling for `Duration` types (Kyle Reynolds)
* Close the CPU profile on exit (guangwu)
* Replace `/bin/bash` with `/usr/bin/env bash` (Florian Klink)
* oauthutil: Clear client secret if client ID is set (Michael Terry)
* operations
* Rework `rcat` so that it doesn't call the `--metadata-mapper` twice (Nick Craig-Wood)
* Ensure `SrcFsType` is set correctly when using `--metadata-mapper` (Nick Craig-Wood)
* Fix "optional feature not implemented" error with a crypted sftp bug (Nick Craig-Wood)
* Fix very long file names when using copy with `--partial` (Nick Craig-Wood)
* Fix retries downloading too much data with certain backends (Nick Craig-Wood)
* Fix move when dst is nil and fdst is case-insensitive (nielash)
* Fix lsjson `--encrypted` when using `--crypt-XXX` parameters (Nick Craig-Wood)
* Fix missing metadata for multipart transfers to local disk (Nick Craig-Wood)
* Fix incorrect modtime on some multipart transfers (Nick Craig-Wood)
* Fix hashing problem in integration tests (Nick Craig-Wood)
* rc
* Fix stats groups being ignored in `operations/check` (Nick Craig-Wood)
* Fix incorrect `Content-Type` in HTTP API (Kyle Reynolds)
* serve s3
* Fix `Last-Modified` header format (Butanediol)
* Fix in-memory metadata storing wrong modtime (nielash)
* Fix XML of error message (Nick Craig-Wood)
* serve webdav: Fix webdav with `--baseurl` under Windows (Nick Craig-Wood)
* serve dlna: Make `BrowseMetadata` more compliant (albertony)
* serve http: Add `Content-Length` header when HTML directory is served (Sunny)
* sync
* Don't sync directories if they haven't been modified (Nick Craig-Wood)
* Don't test reading metadata if we can't write it (Nick Craig-Wood)
* Fix case normalisation (problem on s3) (Nick Craig-Wood)
* Fix management of empty directories to make it more accurate (Nick Craig-Wood)
* Fix creation of empty directories when `--create-empty-src-dirs=false` (Nick Craig-Wood)
* Fix directory modification times not being set (Nick Craig-Wood)
* Fix "failed to update directory timestamp or metadata: directory not found" (Nick Craig-Wood)
* Fix expecting SFTP to have MkdirMetadata method: optional feature not implemented (Nick Craig-Wood)
* test info: Improve cleanup of temp files (Kyle Reynolds)
* touch: Fix using `-R` on certain backends (Nick Craig-Wood)
* Mount
* Add `--direct-io` flag to force uncached access (Nick Craig-Wood)
* VFS
* Fix download loop when file size shrunk (Nick Craig-Wood)
* Fix renaming a directory (nielash)
* Local
* Add `--local-time-type` to use `mtime`/`atime`/`btime`/`ctime` as the time (Nick Craig-Wood)
* Allow `SeBackupPrivilege` and/or `SeRestorePrivilege` to work on Windows (Charles Hamilton)
* Azure Blob
* Fix encoding issue with dir path comparison (nielash)
* B2
* Add new [cleanup](/b2/#cleanup) and [cleanup-hidden](/b2/#cleanup-hidden) backend commands. (Pat Patterson)
* Update B2 URLs to new home (Nick Craig-Wood)
* Chunker
* Fix startup when root points to composite multi-chunk file without metadata (nielash)
* Fix case-insensitive comparison on local without metadata (nielash)
* Fix "finalizer already set" error (nielash)
* Drive
* Add [backend query](/drive/#query) command for general purpose querying of files (John-Paul Smith)
* Stop sending notification emails when setting permissions (Nick Craig-Wood)
* Fix server side copy with metadata from my drive to shared drive (Nick Craig-Wood)
* Set all metadata permissions and return error summary instead of stopping on the first error (Nick Craig-Wood)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Fix description being overwritten on server side moves (Nick Craig-Wood)
* Allow setting metadata to fail if `failok` flag is set (Nick Craig-Wood)
* Fix panic when using `--metadata-mapper` on large google doc files (Nick Craig-Wood)
* Dropbox
* Add `--dropbox-root-namespace` to override the root namespace (Bill Fraser)
* Google Cloud Storage
* Fix encoding issue with dir path comparison (nielash)
* Hdfs
* Fix f.String() not including subpath (nielash)
* Http
* Add `--http-no-escape` to not escape URL metacharacters in path names (Kyle Reynolds)
* Jottacloud
* Set metadata on server side copy and move (albertony)
* Linkbox
* Fix working with names longer than 8-25 Unicode chars (Vitaly)
* Fix list paging and optimize synchronization (gvitali)
* Mailru
* Attempt to fix throttling by increasing min sleep to 100ms (Nick Craig-Wood)
* Memory
* Fix dst mutating src after server-side copy (nielash)
* Fix deadlock in operations.Purge (nielash)
* Fix incorrect list entries when rooted at subdirectory (nielash)
* Onedrive
* Add `--onedrive-hard-delete` to permanently delete files (Nick Craig-Wood)
* Make server-side copy work in more scenarios (YukiUnHappy)
* Fix "unauthenticated: Unauthenticated" errors when downloading (Nick Craig-Wood)
* Fix `--metadata-mapper` being called twice if writing permissions (nielash)
* Set all metadata permissions and return error summary instead of stopping on the first error (nielash)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Skip writing permissions with 'owner' role (nielash)
* Fix references to deprecated permissions properties (nielash)
* Add support for group permissions (nielash)
* Allow setting permissions to fail if `failok` flag is set (Nick Craig-Wood)
* Pikpak
* Make getFile() usage more efficient to avoid the download limit (wiserain)
* Improve upload reliability and resolve potential file conflicts (wiserain)
* Implement configurable chunk size for multipart upload (wiserain)
* Protondrive
* Don't auth with an empty access token (Michał Dzienisiewicz)
* Qingstor
* Disable integration tests as test account suspended (Nick Craig-Wood)
* Quatrix
* Fix f.String() not including subpath (nielash)
* S3
* Add new AWS region `il-central-1` Tel Aviv (yoelvini)
* Update Scaleway's configuration options (Alexandre Lavigne)
* Ceph: work around bucket creation quirks so rclone no longer tries to create an existing bucket (Thomas Schneider)
* Fix encoding issue with dir path comparison (nielash)
* Fix 405 error on HEAD for delete marker with versionId (nielash)
* Validate `--s3-copy-cutoff` size before copy (hoyho)
* SFTP
* Add `--sftp-connections` to limit the maximum number of connections (Tomasz Melcer)
* Storj
* Update `storj.io/uplink` to latest release (JT Olio)
* Update bio on request (Nick Craig-Wood)
* Swift
* Implement `--swift-use-segments-container` to allow >5G files on Blomp (Nick Craig-Wood)
* Union
* Fix deleting dirs when all remotes can't have empty dirs (Nick Craig-Wood)
* WebDAV
* Fix setting modification times erasing checksums on owncloud and nextcloud (nielash)
* owncloud: Add `--webdav-owncloud-exclude-mounts` which allows excluding mounted folders when listing remote resources (Thomas Müller)
* Zoho
* Fix throttling problem when uploading files (Nick Craig-Wood)
* Use cursor listing for improved performance (Nick Craig-Wood)
* Retry reading info after upload if size wasn't returned (Nick Craig-Wood)
* Remove simple file names complication which is no longer needed (Nick Craig-Wood)
* Sleep for 60 seconds if rate limit error received (Nick Craig-Wood)
## v1.66.0 - 2024-03-10
[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0)

View File

@ -479,7 +479,7 @@ Properties:
#### --chunker-description
Description of the remote
Description of the remote.
Properties:

View File

@ -160,7 +160,7 @@ Here are the Advanced options specific to combine (Combine several remotes into
#### --combine-description
Description of the remote
Description of the remote.
Properties:

View File

@ -257,6 +257,7 @@ rclone [flags]
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
@ -384,6 +385,7 @@ rclone [flags]
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-description string Description of the remote
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-escape Do not escape URL metacharacters in path names
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
@ -432,7 +434,7 @@ rclone [flags]
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-password string Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
@ -449,6 +451,7 @@ rclone [flags]
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-time-type mtime|atime|btime|ctime Set what kind of time is returned (default mtime)
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
@ -530,6 +533,7 @@ rclone [flags]
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hard-delete Permanently delete files on removal
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
@ -587,6 +591,7 @@ rclone [flags]
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
@ -597,6 +602,7 @@ rclone [flags]
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--premiumizeme-auth-url string Auth server URL
@ -665,6 +671,7 @@ rclone [flags]
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
@ -745,6 +752,7 @@ rclone [flags]
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-connections int Maximum number of SFTP simultaneous connections, 0 for unlimited
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
@ -840,7 +848,7 @@ rclone [flags]
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-chunk-size SizeSuffix Above this size files will be chunked (default 5Gi)
--swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
@ -856,6 +864,7 @@ rclone [flags]
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-use-segments-container Tristate Choose destination for large object segments (default unset)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--syslog Use Syslog for logging
@ -867,6 +876,13 @@ rclone [flags]
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
--ulozto-app-token string The application token identifying the app. An app API key can be either found in the API
--ulozto-description string Description of the remote
--ulozto-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--ulozto-list-page-size int The size of a single page for list commands. 1-500 (default 500)
--ulozto-password string The password for the user (obscured)
--ulozto-root-folder-slug string If set, rclone will use this folder as the root folder for all operations. For example,
--ulozto-username string The username of the principal to operate as
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
@ -883,7 +899,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.67.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
@ -892,6 +908,7 @@ rclone [flags]
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-owncloud-exclude-mounts Exclude ownCloud mounted storages
--webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
@ -937,6 +954,7 @@ rclone [flags]
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone gitannex](/commands/rclone_gitannex/) - Speaks with git-annex over stdin/stdout.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.

View File

@ -14,21 +14,32 @@ Output bash completion script for rclone.
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, e.g.
By default, when run without any arguments,
sudo rclone genautocomplete bash
rclone genautocomplete bash
Logout and login again to use the autocompletion scripts, or source
them directly
the generated script will be written to
. /etc/bash_completion
/etc/bash_completion.d/rclone
If you supply a command line argument the script will be written
there.
and so rclone will probably need to be run as root, or with sudo.
If you supply a path to a file as the command line argument, then
the generated script will be written to that file, in which case
you should not need root privileges.
If output_file is "-", then the output will be written to stdout.
If you have installed the script into the default location, you
can logout and login again to use the autocompletion script.
Alternatively, you can source the script directly
. /path/to/my_bash_completion_scripts/rclone
and the autocompletion functionality will be added to your
current shell.
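As an illustration only (a hedged sketch; the file path below is just an example), you can write the script to a file you own and source it without root privileges:

    rclone completion bash ~/.rclone-completion.bash
    . ~/.rclone-completion.bash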
```
rclone completion bash [output_file] [flags]

View File

@ -0,0 +1,102 @@
---
title: "rclone gitannex"
description: "Speaks with git-annex over stdin/stdout."
slug: rclone_gitannex
url: /commands/rclone_gitannex/
versionIntroduced: v1.67.0
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/gitannex/ and as part of making a release run "make commanddocs"
---
# rclone gitannex
Speaks with git-annex over stdin/stdout.
## Synopsis
Rclone's `gitannex` subcommand enables [git-annex] to store and retrieve content
from an rclone remote. It is meant to be run by git-annex, not directly by
users.
[git-annex]: https://git-annex.branchable.com/
Installation on Linux
---------------------
1. Skip this step if your version of git-annex is [10.20240430] or newer.
Otherwise, you must create a symlink somewhere on your PATH with a particular
name. This symlink helps git-annex tell rclone it wants to run the "gitannex"
subcommand.
```sh
# Create the helper symlink in "$HOME/bin".
ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
# Verify the new symlink is on your PATH.
which git-annex-remote-rclone-builtin
```
[10.20240430]: https://git-annex.branchable.com/news/version_10.20240430/
2. Add a new remote to your git-annex repo. This new remote will connect
git-annex with the `rclone gitannex` subcommand.
Start by asking git-annex to describe the remote's available configuration
parameters.
```sh
# If you skipped step 1:
git annex initremote MyRemote type=rclone --whatelse
# If you created a symlink in step 1:
git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
```
> **NOTE**: If you're porting an existing [git-annex-remote-rclone] remote to
> use `rclone gitannex`, you can probably reuse the configuration parameters
> verbatim without renaming them. Check parameter synonyms with `--whatelse`
> as shown above.
>
> [git-annex-remote-rclone]: https://github.com/git-annex-remote-rclone/git-annex-remote-rclone
The following example creates a new git-annex remote named "MyRemote" that
will use the rclone remote named "SomeRcloneRemote". That rclone remote must
be one configured in your rclone.conf file, which can be located with `rclone
config file`.
```sh
git annex initremote MyRemote \
type=external \
externaltype=rclone-builtin \
encryption=none \
rcloneremotename=SomeRcloneRemote \
rcloneprefix=git-annex-content \
rclonelayout=nodir
```
3. Before you trust this command with your precious data, be sure to **test the
remote**. This command is very new and has not been tested on many rclone
backends. Caveat emptor!
```sh
git annex testremote MyRemote
```
Happy annexing!
```
rclone gitannex [flags]
```
## Options
```
-h, --help help for gitannex
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

View File

@ -37,7 +37,7 @@ The output is an array of Items, where each Item looks like this
"Tier" : "hot",
}
If `--hash` is not specified the Hashes property won't be emitted. The
If `--hash` is not specified, the Hashes property will be omitted. The
types of hash can be specified with the `--hash-type` parameter (which
may be repeated). If `--hash-type` is set then it implies `--hash`.
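For example, here is a hedged sketch of requesting two hash types at once (hash names vary by backend; `rclone hashsum` run without arguments lists the names supported by your build):

    rclone lsjson --hash-type md5 --hash-type sha1 remote:path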
@ -49,7 +49,7 @@ If `--no-mimetype` is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
request (e.g. s3, swift).
If `--encrypted` is not specified the Encrypted won't be emitted.
If `--encrypted` is not specified, the Encrypted property will be omitted.
If `--dirs-only` is not specified files in addition to directories are
returned

View File

@ -13,9 +13,8 @@ Mount the remote as file system on a mountpoint.
## Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
Rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
@ -830,6 +829,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone mount remote:path /path/to/mountpoint [flags]
```
@ -850,6 +850,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)

View File

@ -46,7 +46,8 @@ press '?' to toggle the help on and off. The supported keys are:
^L refresh screen (fix screen corruption)
r recalculate file sizes
? to toggle help on and off
q/ESC/^c to quit
ESC to close the menu box
q/^c to quit
Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackets at end of line.

View File

@ -14,9 +14,8 @@ Mount the remote as file system on a mountpoint.
## Synopsis
rclone nfsmount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
@ -831,6 +830,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone nfsmount remote:path /path/to/mountpoint [flags]
```
@ -852,6 +852,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)

View File

@ -13,7 +13,6 @@ Run rclone listening to remote control commands only.
## Synopsis
This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
@ -67,7 +66,7 @@ of that with the CA certificate. `--rc-key` should be the PEM encoded
private key and `--rc-client-ca` should be the PEM encoded client
certificate authority certificate.
--rc-min-tls-version is minimum TLS version that is acceptable. Valid
`--rc-min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@ -135,6 +134,7 @@ Use `--rc-realm` to set the authentication realm.
Use `--rc-salt` to change the password hashing salt from the default.
```
rclone rcd <path to files to serve>* [flags]
```
@ -170,6 +170,7 @@ Flags to control the Remote Control API.
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template

View File

@ -24,7 +24,6 @@ based on media formats or file extensions. Additionally, there is no
media transcoding support. This means that some players might show
files that they are not able to play back correctly.
## Server options
Use `--addr` to specify which IP address and port the server should
@ -391,6 +390,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve dlna remote:path [flags]
```

View File

@ -406,6 +406,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve docker [flags]
```
@ -427,6 +428,7 @@ rclone serve docker [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--forget-state Skip restoring previous state
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)

View File

@ -13,7 +13,6 @@ Serve remote:path over FTP.
## Synopsis
Run a basic FTP server to serve a remote over FTP protocol.
This can be viewed with a FTP client or you can make a remote of
type FTP to read and write it.
@ -469,6 +468,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve ftp remote:path [flags]
```

View File

@ -68,7 +68,7 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@ -570,6 +570,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve http remote:path [flags]
```

View File

@ -399,6 +399,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve nfs remote:path [flags]
```

View File

@ -137,7 +137,7 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@ -169,6 +169,7 @@ Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
```
rclone serve restic remote:path [flags]
```

View File

@ -53,7 +53,27 @@ like this:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```
This will be compatible with an rclone remote which is defined like this:
For example, to use a simple folder in the filesystem, run the server
with a command like this:
```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
```
The `rclone.conf` for the server could look like this:
```
[local]
type = local
```
The `local` configuration is optional though. If you run the server with a
`remote:path` like `/path/to/folder` (without the `local:` prefix and without an
`rclone.conf` file), rclone will fall back to a default configuration, which
will be visible as a warning in the logs. But it will run nonetheless.
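For instance, as a hedged sketch mirroring the example above, the same folder can be served directly by path, relying on that default configuration:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY /path/to/folder
```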
This will be compatible with an rclone (client) remote configuration which
is defined like this:
```
[serves3]
@ -173,7 +193,7 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@ -531,6 +551,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve s3 remote:path [flags]
```

View File

@ -500,6 +500,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve sftp remote:path [flags]
```

View File

@ -30,6 +30,7 @@ supported hash on the backend or you can use a named hash such as
to see the full list.
## Access WebDAV on Windows
A WebDAV shared folder can be mapped as a drive on Windows; however, the default settings prevent it.
Windows will fail to connect to the server using insecure Basic authentication.
It will not even display any login dialog. Windows requires SSL / HTTPS connection to be used with Basic.
@ -45,6 +46,7 @@ If required, increase the FileSizeLimitInBytes to a higher value.
Navigate to the Services interface, then restart the WebClient service.
## Access Office applications on WebDAV
Navigate to the following registry key HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
Create a new DWORD BasicAuthLevel with value 2.
0 - Basic authentication disabled
@ -53,7 +55,6 @@ Create a new DWORD BasicAuthLevel with value 2.
https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
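As a hedged, illustrative alternative to editing the registry by hand (assuming Office 16.0; adjust the version segment to match your installation), the same DWORD can be created from a command prompt:

    reg add "HKCU\Software\Microsoft\Office\16.0\Common\Internet" /v BasicAuthLevel /t REG_DWORD /d 2 /f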
## Server options
Use `--addr` to specify which IP address and port the server should
@ -97,7 +98,7 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@ -599,6 +600,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve webdav remote:path [flags]
```

View File

@ -34,6 +34,7 @@ rclone test info [remote:path]+ [flags]
--check-normalization Check UTF-8 Normalization
--check-streaming Check uploads with indeterminate file size
-h, --help help for info
--keep-test-files Keep test files after execution
--upload-wait Duration Wait after writing a file (default 0s)
--write-json string Write results to file
```

View File

@ -160,7 +160,7 @@ Properties:
#### --compress-description
Description of the remote
Description of the remote.
Properties:

View File

@ -634,7 +634,7 @@ Properties:
#### --crypt-description
Description of the remote
Description of the remote.
Properties:

View File

@ -1291,6 +1291,8 @@ Properties:
- Read the value only
- "write"
- Write the value only
- "failok"
- If writing fails log errors only, don't fail the transfer
- "read,write"
- Read and Write the value.
@ -1319,6 +1321,8 @@ Properties:
- Read the value only
- "write"
- Write the value only
- "failok"
- If writing fails log errors only, don't fail the transfer
- "read,write"
- Read and Write the value.
@ -1354,6 +1358,8 @@ Properties:
- Read the value only
- "write"
- Write the value only
- "failok"
- If writing fails log errors only, don't fail the transfer
- "read,write"
- Read and Write the value.
@ -1390,7 +1396,7 @@ Properties:
#### --drive-description
Description of the remote
Description of the remote.
Properties:
@ -1420,7 +1426,7 @@ Here are the possible system metadata items for the drive backend.
| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
| starred | Whether the user has starred the file. | boolean | false | N |
| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives. | boolean | false | N |
See the [metadata](/docs/#metadata) docs for more info.
@ -1623,6 +1629,51 @@ Dump the import formats for debug purposes
rclone backend importformats remote: [options] [<arguments>+]
### query
List files using Google Drive query language
rclone backend query remote: [options] [<arguments>+]
This command lists files based on a query
Usage:
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]
{{< rem autogenerated options stop >}}
## Limitations

View File

@ -336,6 +336,9 @@ Note that we don't unmount the shared folder afterwards so the
--dropbox-shared-folders can be omitted after the first use of a particular
shared folder.
See also --dropbox-root-namespace for an alternative way to work with shared
folders.
Properties:
- Config: shared_folders
@ -367,6 +370,17 @@ Properties:
- Type: Encoding
- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
#### --dropbox-root-namespace
Specify a different Dropbox namespace ID to use as the root for all paths.
Properties:
- Config: root_namespace
- Env Var: RCLONE_DROPBOX_ROOT_NAMESPACE
- Type: string
- Required: false
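For example (a hedged sketch; the namespace ID below is a placeholder), the option can be supplied on the command line:

    rclone lsd dropbox: --dropbox-root-namespace 1234567890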
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@ -455,7 +469,7 @@ Properties:
#### --dropbox-description
Description of the remote
Description of the remote.
Properties:

View File

@ -197,7 +197,7 @@ Properties:
#### --fichier-description
Description of the remote
Description of the remote.
Properties:

View File

@ -276,7 +276,7 @@ Properties:
#### --filefabric-description
Description of the remote
Description of the remote.
Properties:

View File

@ -114,7 +114,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.67.0")
```
@ -500,6 +500,7 @@ Backend only flags. These can be set in the config file also.
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
@ -605,6 +606,7 @@ Backend only flags. These can be set in the config file also.
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-description string Description of the remote
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-escape Do not escape URL metacharacters in path names
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
@ -640,7 +642,7 @@ Backend only flags. These can be set in the config file also.
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-password string Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
@ -656,6 +658,7 @@ Backend only flags. These can be set in the config file also.
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-time-type mtime|atime|btime|ctime Set what kind of time is returned (default mtime)
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
@ -698,6 +701,7 @@ Backend only flags. These can be set in the config file also.
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hard-delete Permanently delete files on removal
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
@ -752,6 +756,7 @@ Backend only flags. These can be set in the config file also.
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
@ -762,6 +767,7 @@ Backend only flags. These can be set in the config file also.
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--premiumizeme-auth-url string Auth server URL
@ -875,6 +881,7 @@ Backend only flags. These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-connections int Maximum number of SFTP simultaneous connections, 0 for unlimited
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
@ -959,7 +966,7 @@ Backend only flags. These can be set in the config file also.
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-chunk-size SizeSuffix Above this size files will be chunked (default 5Gi)
--swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
@ -975,8 +982,16 @@ Backend only flags. These can be set in the config file also.
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-use-segments-container Tristate Choose destination for large object segments (default unset)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--ulozto-app-token string The application token identifying the app. An app API key can be either found in the API
--ulozto-description string Description of the remote
--ulozto-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--ulozto-list-page-size int The size of a single page for list commands. 1-500 (default 500)
--ulozto-password string The password for the user (obscured)
--ulozto-root-folder-slug string If set, rclone will use this folder as the root folder for all operations. For example,
--ulozto-username string The username of the principal to operate as
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
@ -994,6 +1009,7 @@ Backend only flags. These can be set in the config file also.
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-owncloud-exclude-mounts Exclude ownCloud mounted storages
--webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)

View File

@ -455,7 +455,7 @@ Properties:
#### --ftp-description
Description of the remote
Description of the remote.
Properties:

View File

@ -701,7 +701,7 @@ Properties:
#### --gcs-description
Description of the remote
Description of the remote.
Properties:

View File

@ -463,7 +463,7 @@ Properties:
#### --gphotos-description
Description of the remote
Description of the remote.
Properties:

View File

@ -226,7 +226,7 @@ Properties:
#### --hasher-description
Description of the remote
Description of the remote.
Properties:

View File

@ -234,7 +234,7 @@ Properties:
#### --hdfs-description
Description of the remote
Description of the remote.
Properties:

View File

@ -420,7 +420,7 @@ Properties:
#### --hidrive-description
Description of the remote
Description of the remote.
Properties:

View File

@ -142,6 +142,17 @@ Properties:
- Type: string
- Required: true
#### --http-no-escape
Do not escape URL metacharacters in path names.
Properties:
- Config: no_escape
- Env Var: RCLONE_HTTP_NO_ESCAPE
- Type: bool
- Default: false
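For example (a hedged sketch using a placeholder URL), to list a site without rclone escaping URL metacharacters in its path names:

    rclone lsf --http-url https://example.com --http-no-escape :http: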
### Advanced options
Here are the Advanced options specific to http (HTTP).
@ -214,7 +225,7 @@ Properties:
#### --http-description
Description of the remote
Description of the remote.
Properties:

View File

@ -193,7 +193,7 @@ Properties:
#### --imagekit-description
Description of the remote
Description of the remote.
Properties:

View File

@ -265,7 +265,7 @@ Properties:
#### --internetarchive-description
Description of the remote
Description of the remote.
Properties:

View File

@ -449,7 +449,7 @@ Properties:
#### --jottacloud-description
Description of the remote
Description of the remote.
Properties:

View File

@ -159,7 +159,7 @@ Properties:
#### --koofr-password
Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -216,7 +216,7 @@ Properties:
#### --koofr-description
Description of the remote
Description of the remote.
Properties:

View File

@ -74,7 +74,7 @@ Here are the Advanced options specific to linkbox (Linkbox).
#### --linkbox-description
Description of the remote
Description of the remote.
Properties:

View File

@ -566,6 +566,44 @@ Properties:
- Type: bool
- Default: false
#### --local-time-type
Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
If you set this flag then rclone will return the Modified time as whatever
you set here. So if you use "rclone lsl --local-time-type ctime" then
you will see ctimes in the listing.
If the OS doesn't support returning the time_type specified then rclone
will silently replace it with the modification time which all OSes support.
- mtime is supported by all OSes
- atime is supported on all OSes except: plan9, js
- btime is only supported on: Windows, macOS, freebsd, netbsd
- ctime is supported on all OSes except: Windows, plan9, js
Note that setting the time will still set the modified time so this is
only useful for reading.
Properties:
- Config: time_type
- Env Var: RCLONE_LOCAL_TIME_TYPE
- Type: mtime|atime|btime|ctime
- Default: mtime
- Examples:
- "mtime"
- The last modification time.
- "atime"
- The last access time.
- "btime"
- The creation time.
- "ctime"
- The last status change time.
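For example (a hedged sketch), to show creation times instead of modification times in a listing:

    rclone lsl --local-time-type btime /path/to/dir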
#### --local-encoding
The encoding for the backend.
@ -581,7 +619,7 @@ Properties:
#### --local-description
Description of the remote
Description of the remote.
Properties:

View File

@ -414,7 +414,7 @@ Properties:
#### --mailru-description
Description of the remote
Description of the remote.
Properties:

View File

@ -284,7 +284,7 @@ Properties:
#### --mega-description
Description of the remote
Description of the remote.
Properties:

View File

@ -70,7 +70,7 @@ Here are the Advanced options specific to memory (In memory object storage syste
#### --memory-description
Description of the remote
Description of the remote.
Properties:

View File

@ -244,7 +244,7 @@ Properties:
#### --netstorage-description
Description of the remote
Description of the remote.
Properties:

View File

@ -458,9 +458,11 @@ Deprecated: use --server-side-across-configs instead.
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).
This will work if you are copying between two OneDrive *Personal* drives AND the files to
copy are already shared between them. Additionally, it should also function for a user who
has access permissions both between OneDrive for *business* and *SharePoint* under the *same
tenant*, and between *SharePoint* and another *SharePoint* under the *same tenant*. In other
cases, rclone will fall back to normal copy (which will be slightly slower).
Properties:
@ -503,6 +505,24 @@ Properties:
- Type: bool
- Default: false
#### --onedrive-hard-delete
Permanently delete files on removal.
Normally files will get sent to the recycle bin on deletion. Setting
this flag causes them to be permanently deleted. Use with care.
OneDrive personal accounts do not support the permanentDelete API;
it only applies to OneDrive for Business and SharePoint document libraries.
Properties:
- Config: hard_delete
- Env Var: RCLONE_ONEDRIVE_HARD_DELETE
- Type: bool
- Default: false
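For example (a hedged sketch; as noted above this only takes effect on OneDrive for Business and SharePoint document libraries), to remove a single file without sending it to the recycle bin:

    rclone deletefile --onedrive-hard-delete onedrive:path/to/file.txt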
#### --onedrive-link-scope
Set the scope of the links created by the link command.
@ -567,7 +587,7 @@ all onedrive types. If an SHA1 hash is desired then set this option
accordingly.
From July 2023 QuickXorHash will be the only available hash for
both OneDrive for Business and OneDriver Personal.
both OneDrive for Business and OneDrive Personal.
This can be set to "none" to not use any hashes.
@ -680,6 +700,8 @@ Properties:
- Write the value only
- "read,write"
- Read and Write the value.
- "failok"
- If writing fails log errors only, don't fail the transfer
#### --onedrive-encoding
@ -696,7 +718,7 @@ Properties:
#### --onedrive-description
Description of the remote
Description of the remote.
Properties:
@ -713,11 +735,11 @@ differences between OneDrive Personal and Business (see table below for
details).
Permissions are also supported, if `--onedrive-metadata-permissions` is set. The
accepted values for `--onedrive-metadata-permissions` are `read`, `write`,
`read,write`, and `off` (the default). `write` supports adding new permissions,
accepted values for `--onedrive-metadata-permissions` are "`read`", "`write`",
"`read,write`", and "`off`" (the default). "`write`" supports adding new permissions,
updating the "role" of existing permissions, and removing permissions. Updating
and removing require the Permission ID to be known, so it is recommended to use
`read,write` instead of `write` if you wish to update/remove permissions.
"`read,write`" instead of "`write`" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the
[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online),
@ -754,7 +776,7 @@ Example for OneDrive Business:
[
{
"id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
"grantedToIdentitiesV2": [
"grantedToIdentities": [
{
"user": {
"displayName": "ryan@contoso.com"
@ -775,7 +797,7 @@ Example for OneDrive Business:
},
{
"id": "5D33DD65C6932946",
"grantedToV2": {
"grantedTo": {
"user": {
"displayName": "John Doe",
"id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
@ -796,38 +818,19 @@ format. The [`--metadata-mapper`](https://rclone.org/docs/#metadata-mapper) tool
be very helpful for this.
When adding permissions, an email address can be provided in the `User.ID` or
`DisplayName` properties of `grantedTo` or `grantedToIdentities` (these are
deprecated on OneDrive Business -- instead, use `grantedToV2` and
`grantedToIdentitiesV2`, respectively). Alternatively, an ObjectID can be
provided in `User.ID`. At least one valid recipient must be provided in order to
add a permission for a user. Creating a Public Link is also supported, if
`Link.Scope` is set to `"anonymous"`.
`DisplayName` properties of `grantedTo` or `grantedToIdentities`. Alternatively,
an ObjectID can be provided in `User.ID`. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if `Link.Scope` is set to `"anonymous"`.
Example request to add a "read" permission:
Example request to add a "read" permission with `--metadata-mapper`:
```json
[
{
"id": "",
"grantedTo": {
"user": {},
"application": {},
"device": {}
},
"grantedToIdentities": [
{
"user": {
"id": "ryan@contoso.com"
},
"application": {},
"device": {}
}
],
"roles": [
"read"
]
}
]
{
"Metadata": {
"permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
}
}
```
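One way this might be wired up, shown only as a sketch (the script name, file and remote path are assumptions, and the mapper simply replays the blob above):
```sh
#!/usr/bin/env bash
# add-read-permission.sh - minimal --metadata-mapper sketch.
# rclone passes a JSON description of the object on stdin; this
# script ignores it and always returns the permissions metadata.
cat >/dev/null
cat <<'EOF'
{
  "Metadata": {
    "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
  }
}
EOF
```
It could then be invoked along these lines:
```sh
rclone copyto report.docx remote:shared/report.docx \
    --metadata \
    --onedrive-metadata-permissions write \
    --metadata-mapper ./add-read-permission.sh
```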
Note that adding a permission can fail if a conflicting permission already
@ -837,7 +840,8 @@ To update an existing permission, include both the Permission ID and the new
`roles` to be assigned. `roles` is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)
to keep (which can be empty, to remove all.) Note that the `owner` role will be
ignored, as it cannot be removed.
Note that both reading and writing permissions requires extra API calls, so if
you don't need to read or write permissions it is recommended to omit

View File

@ -164,7 +164,7 @@ Properties:
#### --opendrive-description
Description of the remote
Description of the remote.
Properties:

View File

@ -709,7 +709,7 @@ Properties:
#### --oos-description
Description of the remote
Description of the remote.
Properties:

View File

@ -290,7 +290,7 @@ Properties:
#### --pcloud-description
Description of the remote
Description of the remote.
Properties:

View File

@ -227,6 +227,54 @@ Properties:
- Type: SizeSuffix
- Default: 10Mi
#### --pikpak-chunk-size
Chunk size for multipart uploads.
Large files will be uploaded in chunks of this size.
Note that this is stored in memory and there may be up to
"--transfers" * "--pikpak-upload-concurrency" chunks stored at once
in memory.
If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with "-P" flag.
Properties:
- Config: chunk_size
- Env Var: RCLONE_PIKPAK_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5Mi
#### --pikpak-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--pikpak-upload-concurrency" chunks stored at once
in memory.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_PIKPAK_UPLOAD_CONCURRENCY
- Type: int
- Default: 5
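To make the memory trade-off concrete, a hedged example (the paths and remote name are placeholders): with 4 transfers, a 64Mi chunk size and the default concurrency of 5, up to 4 * 5 * 64 MiB = 1.25 GiB of chunk buffers may be held in memory at once.
```sh
# Bigger chunks for large files over a fast link; budget roughly
# transfers * upload_concurrency * chunk_size of RAM for buffers.
rclone copy /data/backups pikpak:backups \
    --transfers 4 \
    --pikpak-chunk-size 64Mi \
    --pikpak-upload-concurrency 5
```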
#### --pikpak-encoding
The encoding for the backend.
@ -242,7 +290,7 @@ Properties:
#### --pikpak-description
Description of the remote
Description of the remote.
Properties:

View File

@ -204,7 +204,7 @@ Properties:
#### --premiumizeme-description
Description of the remote
Description of the remote.
Properties:

View File

@ -333,7 +333,7 @@ Properties:
#### --protondrive-description
Description of the remote
Description of the remote.
Properties:

View File

@ -201,7 +201,7 @@ Properties:
#### --putio-description
Description of the remote
Description of the remote.
Properties:

View File

@ -312,7 +312,7 @@ Properties:
#### --qingstor-description
Description of the remote
Description of the remote.
Properties:

View File

@ -249,7 +249,7 @@ Properties:
#### --quatrix-description
Description of the remote
Description of the remote.
Properties:

View File

@ -672,7 +672,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@ -719,6 +719,8 @@ Properties:
- Liara Object Storage
- "Linode"
- Linode Object Storage
- "Magalu"
- Magalu Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@ -862,6 +864,9 @@ Properties:
- "sa-east-1"
- South America (Sao Paulo) Region.
- Needs location constraint sa-east-1.
- "il-central-1"
- Israel (Tel Aviv) Region.
- Needs location constraint il-central-1.
- "me-south-1"
- Middle East (Bahrain) Region.
- Needs location constraint me-south-1.
@ -947,6 +952,8 @@ Properties:
- Asia Pacific (Hong Kong) Region
- "sa-east-1"
- South America (Sao Paulo) Region
- "il-central-1"
- Israel (Tel Aviv) Region
- "me-south-1"
- Middle East (Bahrain) Region
- "af-south-1"
@ -1093,7 +1100,7 @@ Properties:
### Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@ -1907,7 +1914,7 @@ Properties:
#### --s3-description
Description of the remote
Description of the remote.
Properties:

View File

@ -391,7 +391,7 @@ Properties:
#### --seafile-description
Description of the remote
Description of the remote.
Properties:

View File

@ -895,6 +895,31 @@ Properties:
- Type: int
- Default: 64
#### --sftp-connections
Maximum number of SFTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
If you are doing a sync or copy then make sure connections is one more
than the sum of `--transfers` and `--checkers`.
If you use `--check-first` then it just needs to be one more than the
maximum of `--checkers` and `--transfers`.
So for `connections 3` you'd use `--checkers 2 --transfers 2
--check-first` or `--checkers 1 --transfers 1`.
Properties:
- Config: connections
- Env Var: RCLONE_SFTP_CONNECTIONS
- Type: int
- Default: 0
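A hedged example of the arithmetic above (the remote name is a placeholder): with `--check-first` the check and transfer phases don't overlap, so the connection limit only needs to exceed the larger of the two.
```sh
# 2 checkers + 2 transfers, but --check-first keeps the phases
# separate, so 3 connections (one spare) are sufficient.
rclone sync /srv/data sftp-remote:data \
    --sftp-connections 3 \
    --checkers 2 --transfers 2 --check-first
```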
#### --sftp-set-env
Environment variables to pass to sftp and commands
@ -1046,7 +1071,7 @@ Properties:
#### --sftp-description
Description of the remote
Description of the remote.
Properties:

View File

@ -305,7 +305,7 @@ Properties:
#### --sharefile-description
Description of the remote
Description of the remote.
Properties:

View File

@ -196,7 +196,7 @@ Properties:
#### --sia-description
Description of the remote
Description of the remote.
Properties:

View File

@ -250,7 +250,7 @@ Properties:
#### --smb-description
Description of the remote
Description of the remote.
Properties:

View File

@ -305,7 +305,7 @@ Here are the Advanced options specific to storj (Storj Decentralized Cloud Stora
#### --storj-description
Description of the remote
Description of the remote.
Properties:

View File

@ -274,7 +274,7 @@ Properties:
#### --sugarsync-description
Description of the remote
Description of the remote.
Properties:

View File

@ -510,10 +510,15 @@ Properties:
#### --swift-chunk-size
Above this size files will be chunked into a _segments container.
Above this size files will be chunked.
Above this size files will be chunked into a `_segments` container
or a `.file-segments` directory (see the `use_segments_container` option
for more info). The default for this is 5 GiB, which is also its maximum
value, so only files above this size will be chunked.
Rclone uploads chunked files as dynamic large objects (DLO).
Above this size files will be chunked into a _segments container. The
default for this is 5 GiB which is its maximum value.
Properties:
@ -526,14 +531,16 @@ Properties:
Don't chunk files during streaming upload.
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
When doing streaming uploads (e.g. using `rcat` or `mount` with
`--vfs-cache-mode off`) setting this flag will cause the swift backend
to not upload chunked files.
This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
This will limit the maximum streamed upload size to 5 GiB. This is
useful because non chunked files are easier to deal with and have an
MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.
Rclone will still chunk files bigger than `chunk_size` when doing
normal copy operations.
Properties:
@ -547,11 +554,12 @@ Properties:
Disable support for static and dynamic large objects
Swift cannot transparently store files bigger than 5 GiB. There are
two schemes for doing that, static or dynamic large objects, and the
API does not allow rclone to determine whether a file is a static or
dynamic large object without doing a HEAD on the object. Since these
need to be treated differently, this means rclone has to issue HEAD
requests for objects for example when reading checksums.
two schemes for chunking large files, static large objects (SLO) or
dynamic large objects (DLO), and the API does not allow rclone to
determine whether a file is a static or dynamic large object without
doing a HEAD on the object. Since these need to be treated
differently, this means rclone has to issue HEAD requests for objects
for example when reading checksums.
When `no_large_objects` is set, rclone will assume that there are no
static or dynamic large objects stored. This means it can stop doing
@ -562,7 +570,7 @@ Setting this option implies `no_chunk` and also that no files will be
uploaded in chunks, so files bigger than 5 GiB will just fail on
upload.
If you set this option and there *are* static or dynamic large objects,
If you set this option and there **are** static or dynamic large objects,
then this will give incorrect hashes for them. Downloads will succeed,
but other operations such as Remove and Copy will fail.
@ -574,6 +582,40 @@ Properties:
- Type: bool
- Default: false
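As a hedged illustration (the paths are placeholders), the option can be set per-invocation when you know nothing larger than 5 GiB is stored:
```sh
# Skip the extra HEAD requests when reading checksums; uploads
# larger than 5 GiB will simply fail while this is set.
rclone check /archive swift:archive --swift-no-large-objects
```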
#### --swift-use-segments-container
Choose destination for large object segments
Swift cannot transparently store files bigger than 5 GiB and rclone
will chunk files larger than `chunk_size` (default 5 GiB) in order to
upload them.
If this value is `true` the chunks will be stored in an additional
container named the same as the destination container but with
`_segments` appended. This means that there won't be any duplicated
data in the original container but having another container may not be
acceptable.
If this value is `false` the chunks will be stored in a
`.file-segments` directory in the root of the container. This
directory will be omitted when listing the container. Some
providers (eg Blomp) require this mode as creating additional
containers isn't allowed. If it is desired to see the `.file-segments`
directory in the root then this flag must be set to `true`.
If this value is `unset` (the default), then rclone will choose the value
to use. It will be `false` unless rclone detects any `auth_url`s that
it knows need it to be `true`. In this case you'll see a message in
the DEBUG log.
Properties:
- Config: use_segments_container
- Env Var: RCLONE_SWIFT_USE_SEGMENTS_CONTAINER
- Type: Tristate
- Default: unset
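A hedged sketch tying the segment options together (the file and container names are assumptions): forcing the chunks into the hidden `.file-segments` directory instead of a sibling `_segments` container.
```sh
# Keep segments inside the destination container rather than a
# separate *_segments container (required by some providers, eg Blomp).
rclone copy huge-image.qcow2 swift:backups \
    --swift-chunk-size 1Gi \
    --swift-use-segments-container=false
```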
#### --swift-encoding
The encoding for the backend.
@ -589,7 +631,7 @@ Properties:
#### --swift-description
Description of the remote
Description of the remote.
Properties:

View File

@ -4,7 +4,7 @@ description: "Rclone docs for Uloz.to"
versionIntroduced: "v1.66"
---
# {{< icon "fa fa-box-archive" >}} Uloz.to
# {{< icon "fas fa-angle-double-down" >}} Uloz.to
Paths are specified as `remote:path`
@ -239,7 +239,7 @@ Properties:
#### --ulozto-description
Description of the remote
Description of the remote.
Properties:

View File

@ -289,7 +289,7 @@ Properties:
#### --union-description
Description of the remote
Description of the remote.
Properties:

View File

@ -148,7 +148,7 @@ Properties:
#### --uptobox-description
Description of the remote
Description of the remote.
Properties:

View File

@ -283,9 +283,20 @@ Properties:
- Type: bool
- Default: false
#### --webdav-owncloud-exclude-mounts
Exclude ownCloud mounted storages
Properties:
- Config: owncloud_exclude_mounts
- Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_MOUNTS
- Type: bool
- Default: false
#### --webdav-description
Description of the remote
Description of the remote.
Properties:

View File

@ -211,7 +211,7 @@ Properties:
#### --yandex-description
Description of the remote
Description of the remote.
Properties:

View File

@ -239,7 +239,7 @@ Properties:
#### --zoho-description
Description of the remote
Description of the remote.
Properties:

View File

@ -100,7 +100,7 @@
<a class="dropdown-item" href="/smb/"><i class="fa fa-server fa-fw"></i> SMB / CIFS</a>
<a class="dropdown-item" href="/storj/"><i class="fas fa-dove fa-fw"></i> Storj</a>
<a class="dropdown-item" href="/sugarsync/"><i class="fas fa-dove fa-fw"></i> SugarSync</a>
<a class="dropdown-item" href="/uloz.to/"><i class="fa fa-box-archive fa-fw"></i> Uloz.to</a>
<a class="dropdown-item" href="/ulozto/"><i class="fas fa-angle-double-down"></i> Uloz.to</a>
<a class="dropdown-item" href="/uptobox/"><i class="fa fa-archive fa-fw"></i> Uptobox</a>
<a class="dropdown-item" href="/union/"><i class="fa fa-link fa-fw"></i> Union (merge backends)</a>
<a class="dropdown-item" href="/webdav/"><i class="fa fa-server fa-fw"></i> WebDAV</a>

4
go.sum
View File

@ -450,10 +450,6 @@ github.com/quic-go/qtls-go1-20 v0.4.1 h1:D33340mCNDAIKBqXuAvexTNMUByrYmFYVfKfDN5
github.com/quic-go/quic-go v0.40.1 h1:X3AGzUNFs0jVuO3esAGnTfvdgvL4fq655WaOi1snv1Q=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93 h1:UVArwN/wkKjMVhh2EQGC0tEc1+FqiLlvYXY5mQ2f8Wg=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93/go.mod h1:Nfe4efndBz4TibWycNE+lqyJZiMX4ycx+QKV8Ta0f/o=
github.com/rclone/gofakes3 v0.0.3-0.20240413171058-b7a9fdb78ddb h1:HJJ7XgRBfXew3EosVk45aGPJRY5wSTSpmAJqz8Kiw0w=
github.com/rclone/gofakes3 v0.0.3-0.20240413171058-b7a9fdb78ddb/go.mod h1:L0VIBE0mT6ArN/5dfHsJm3UjqCpi5B/cdN+qWDNh7ko=
github.com/rclone/gofakes3 v0.0.3-0.20240414171457-6975bf40a0a8 h1:hJNS/Xf4iQD/Et/pOVuYtJE9zbEZ01MeQM10Xa3xf+k=
github.com/rclone/gofakes3 v0.0.3-0.20240414171457-6975bf40a0a8/go.mod h1:L0VIBE0mT6ArN/5dfHsJm3UjqCpi5B/cdN+qWDNh7ko=
github.com/rclone/gofakes3 v0.0.3-0.20240422160309-90e8e825c0c3 h1:PyMsWM61oBPU3ajQfCImfxPHyESb5wC6NaZ6b2GAXTo=
github.com/rclone/gofakes3 v0.0.3-0.20240422160309-90e8e825c0c3/go.mod h1:L0VIBE0mT6ArN/5dfHsJm3UjqCpi5B/cdN+qWDNh7ko=
github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=

32093
rclone.1 generated

File diff suppressed because it is too large Load Diff