Version v1.65.0

Nick Craig-Wood 2023-11-26 15:59:12 +00:00
parent 74d5477fad
commit 82b963e372
72 changed files with 16446 additions and 11778 deletions

MANUAL.html (generated, 5974 changes): file diff suppressed because it is too large

MANUAL.md (generated, 5389 changes): file diff suppressed because it is too large

MANUAL.txt (generated, 5412 changes): file diff suppressed because it is too large

@@ -54,7 +54,7 @@ docs = [
     "internetarchive.md",
     "jottacloud.md",
     "koofr.md",
-    "linkbox.md"
+    "linkbox.md",
     "mailru.md",
     "mega.md",
     "memory.md",

@@ -303,7 +303,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_ACD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

@@ -765,7 +765,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_AZUREBLOB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
 #### --azureblob-public-access

@@ -508,7 +508,7 @@ Properties:
 - Config: upload_concurrency
 - Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
 - Type: int
-- Default: 16
+- Default: 4
 #### --b2-disable-checksum
@@ -588,6 +588,37 @@ Properties:
 - Type: bool
 - Default: false
+
+#### --b2-lifecycle
+
+Set the number of days deleted files should be kept when creating a bucket.
+
+On bucket creation, this parameter is used to create a lifecycle rule
+for the entire bucket.
+
+If lifecycle is 0 (the default) it does not create a lifecycle rule, so
+the default B2 behaviour applies. This is to create versions of files
+on delete and overwrite and to keep them indefinitely.
+
+If lifecycle is > 0 then it creates a single rule setting the number of
+days before a file that is deleted or overwritten is deleted
+permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+The minimum value for this parameter is 1 day.
+
+You can also enable hard_delete in the config, which will mean
+deletions won't cause versions but overwrites will still cause
+versions to be made.
+
+See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
+
+Properties:
+
+- Config: lifecycle
+- Env Var: RCLONE_B2_LIFECYCLE
+- Type: int
+- Default: 0
 #### --b2-encoding
 The encoding for the backend.
@@ -598,9 +629,76 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+## Backend commands
+
+Here are the commands specific to the b2 backend.
+
+Run them with
+
+    rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](/rc/#backend-command).
+
+### lifecycle
+
+Read or set the lifecycle for a bucket
+
+    rclone backend lifecycle remote: [options] [<arguments>+]
+
+This command can be used to read or set the lifecycle for a bucket.
+
+Usage Examples:
+
+To show the current lifecycle rules:
+
+    rclone backend lifecycle b2:bucket
+
+This will dump something like this showing the lifecycle rules.
+
+    [
+        {
+            "daysFromHidingToDeleting": 1,
+            "daysFromUploadingToHiding": null,
+            "fileNamePrefix": ""
+        }
+    ]
+
+If there are no lifecycle rules (the default) then it will just return [].
+
+To set the current lifecycle rules:
+
+    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+    rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+This will run and then print the new lifecycle rules as above.
+
+Rclone only lets you set lifecycles for the whole bucket with the
+fileNamePrefix = "".
+
+You can't disable versioning with B2. The best you can do is to set
+the daysFromHidingToDeleting to 1 day. You can also enable hard_delete
+in the config, which will mean deletions won't cause versions but
+overwrites will still cause versions to be made.
+
+    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+Options:
+
+- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromUploadingToHiding": A file is hidden this many days after it is uploaded.
 {{< rem autogenerated options stop >}}
 ## Limitations
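Taken together, the new `--b2-lifecycle` option and the `lifecycle` backend command documented in this diff give two ways to manage B2 retention from the command line. A minimal sketch, assuming a configured remote named `b2:` and a hypothetical bucket name `mybucket`:

```shell
# Create a bucket with a lifecycle rule: permanently delete
# hidden (deleted or overwritten) files after 30 days.
rclone mkdir b2:mybucket --b2-lifecycle 30

# Read back the lifecycle rules the bucket was created with.
rclone backend lifecycle b2:mybucket

# Tighten the rule later without recreating the bucket.
rclone backend lifecycle b2:mybucket -o daysFromHidingToDeleting=1
```

Note that `--b2-lifecycle` only takes effect at bucket creation time; for an existing bucket the backend command is the way to change the rules.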

@@ -470,7 +470,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_BOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

@@ -5,6 +5,108 @@ description: "Rclone Changelog"

 # Changelog

+## v1.65.0 - 2023-11-26
+
+[See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
+
+* New backends
+    * Azure Files (karan, moongdal, Nick Craig-Wood)
+    * ImageKit (Abhinav Dhiman)
+    * Linkbox (viktor, Nick Craig-Wood)
+* New commands
+    * `serve s3`: Let rclone act as an S3 compatible server (Mikubill, Artur Neumann, Saw-jan, Nick Craig-Wood)
+    * `nfsmount`: mount command to provide mount mechanism on macOS without FUSE (Saleh Dindar)
+    * `serve nfs`: to serve a remote for use by `nfsmount` (Saleh Dindar)
+* New Features
+    * install.sh: Clean up temp files in install script (Jacob Hands)
+    * build
+        * Update all dependencies (Nick Craig-Wood)
+        * Refactor version info and icon resource handling on windows (albertony)
+    * doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
+    * Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
+    * Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
+    * lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
+    * makefile: Use POSIX compatible install arguments (Mina Galić)
+    * operations
+        * Use less memory when doing multithread uploads (Nick Craig-Wood)
+        * Implement `--partial-suffix` to control extension of temporary file names (Volodymyr)
+    * rc
+        * Add `operations/check` to the rc API (Nick Craig-Wood)
+        * Always report an error as JSON (Nick Craig-Wood)
+        * Set `Last-Modified` header for files served by `--rc-serve` (Nikita Shoshin)
+    * size: Don't show duplicate object count when less than 1k (albertony)
+* Bug Fixes
+    * fshttp: Fix `--contimeout` being ignored (你知道未来吗)
+    * march: Fix excessive parallelism when using `--no-traverse` (Nick Craig-Wood)
+    * ncdu: Fix crash when re-entering changed directory after rescan (Nick Craig-Wood)
+    * operations
+        * Fix overwrite of destination when multi-thread transfer fails (Nick Craig-Wood)
+        * Fix invalid UTF-8 when truncating file names when not using `--inplace` (Nick Craig-Wood)
+    * serve dlna: Fix crash on graceful exit (wuxingzhong)
+* Mount
+    * Disable mount for freebsd and alias cmount as mount on that platform (Nick Craig-Wood)
+* VFS
+    * Add `--vfs-refresh` flag to read all the directories on start (Beyond Meat)
+    * Implement Name() method in WriteFileHandle and ReadFileHandle (Saleh Dindar)
+    * Add go-billy dependency and make sure vfs.Handle implements billy.File (Saleh Dindar)
+    * Error out early if can't upload 0 length file (Nick Craig-Wood)
+* Local
+    * Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+* Azure Blob
+    * Add support for cold tier (Ivan Yanitra)
+* B2
+    * Implement "rclone backend lifecycle" to read and set bucket lifecycles (Nick Craig-Wood)
+    * Implement `--b2-lifecycle` to control lifecycle when creating buckets (Nick Craig-Wood)
+    * Fix listing all buckets when not needed (Nick Craig-Wood)
+    * Fix multi-thread upload with copyto going to wrong name (Nick Craig-Wood)
+    * Fix server side chunked copy when file size was exactly `--b2-copy-cutoff` (Nick Craig-Wood)
+    * Fix streaming chunked files an exact multiple of chunk size (Nick Craig-Wood)
+* Box
+    * Filter more EventIDs when polling (David Sze)
+    * Add more logging for polling (David Sze)
+    * Fix performance problem reading metadata for single files (Nick Craig-Wood)
+* Drive
+    * Add read/write metadata support (Nick Craig-Wood)
+    * Add support for SHA-1 and SHA-256 checksums (rinsuki)
+    * Add `--drive-show-all-gdocs` to allow unexportable gdocs to be server side copied (Nick Craig-Wood)
+    * Add a note that `--drive-scope` accepts comma-separated list of scopes (Keigo Imai)
+    * Fix error updating created time metadata on existing object (Nick Craig-Wood)
+    * Fix integration tests by enabling metadata support from the context (Nick Craig-Wood)
+* Dropbox
+    * Factor batcher into lib/batcher (Nick Craig-Wood)
+    * Fix missing encoding for rclone purge (Nick Craig-Wood)
+* Google Cloud Storage
+    * Fix 400 Bad request errors when using multi-thread copy (Nick Craig-Wood)
+* Googlephotos
+    * Implement batcher for uploads (Nick Craig-Wood)
+* Hdfs
+    * Added support for list of namenodes in hdfs remote config (Tayo-pasedaRJ)
+* HTTP
+    * Implement set backend command to update running backend (Nick Craig-Wood)
+    * Enable methods used with WebDAV (Alen Šiljak)
+* Jottacloud
+    * Add support for reading and writing metadata (albertony)
+* Onedrive
+    * Implement ListR method which gives `--fast-list` support (Nick Craig-Wood)
+        * This must be enabled with the `--onedrive-delta` flag
+* Quatrix
+    * Add partial upload support (Oksana Zhykina)
+    * Overwrite files on conflict during server-side move (Oksana Zhykina)
+* S3
+    * Add Linode provider (Nick Craig-Wood)
+    * Add docs on how to add a new provider (Nick Craig-Wood)
+    * Fix no error being returned when creating a bucket we don't own (Nick Craig-Wood)
+    * Emit a debug message if anonymous credentials are in use (Nick Craig-Wood)
+    * Add `--s3-disable-multipart-uploads` flag (Nick Craig-Wood)
+    * Detect looping when using gcs and versions (Nick Craig-Wood)
+* SFTP
+    * Implement `--sftp-copy-is-hardlink` to server side copy as hardlink (Nick Craig-Wood)
+* Smb
+    * Fix incorrect `about` size by switching to `github.com/cloudsoda/go-smb2` fork (Nick Craig-Wood)
+    * Fix modtime of multithread uploads by setting PartialUploads (Nick Craig-Wood)
+* WebDAV
+    * Added an rclone vendor to work with `rclone serve webdav` (Adithya Kumar)
+
 ## v1.64.2 - 2023-10-19

 [See commits](https://github.com/rclone/rclone/compare/v1.64.1...v1.64.2)

@@ -30,7 +30,7 @@ rclone [flags]
 --acd-auth-url string   Auth server URL
 --acd-client-id string   OAuth Client Id
 --acd-client-secret string   OAuth Client Secret
---acd-encoding MultiEncoder   The encoding for the backend (default Slash,InvalidUtf8,Dot)
+--acd-encoding Encoding   The encoding for the backend (default Slash,InvalidUtf8,Dot)
 --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink (default 9Gi)
 --acd-token string   OAuth Access Token as a JSON blob
 --acd-token-url string   Token server url
@@ -38,7 +38,7 @@ rclone [flags]
 --alias-remote string   Remote or path to alias
 --ask-password   Allow prompt for password for encrypted configuration (default true)
 --auto-confirm   If enabled, do not request console confirmation
---azureblob-access-tier string   Access tier of blob: hot, cool or archive
+--azureblob-access-tier string   Access tier of blob: hot, cool, cold or archive
 --azureblob-account string   Azure Storage Account Name
 --azureblob-archive-tier-delete   Delete archive tier blobs before overwriting
 --azureblob-chunk-size SizeSuffix   Upload chunk size (default 4Mi)
@@ -49,7 +49,7 @@ rclone [flags]
 --azureblob-client-send-certificate-chain   Send the certificate chain when using certificate auth
 --azureblob-directory-markers   Upload an empty object with a trailing slash when a new directory is created
 --azureblob-disable-checksum   Don't store MD5 checksum with object metadata
---azureblob-encoding MultiEncoder   The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+--azureblob-encoding Encoding   The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
 --azureblob-endpoint string   Endpoint for the service
 --azureblob-env-auth   Read credentials from runtime (environment variables, CLI or MSI)
 --azureblob-key string   Storage Account Shared Key
@@ -69,18 +69,43 @@ rclone [flags]
 --azureblob-use-emulator   Uses local storage emulator if provided as 'true'
 --azureblob-use-msi   Use a managed service identity to authenticate (only works in Azure)
 --azureblob-username string   User name (usually an email address)
+--azurefiles-account string   Azure Storage Account Name
+--azurefiles-chunk-size SizeSuffix   Upload chunk size (default 4Mi)
+--azurefiles-client-certificate-password string   Password for the certificate file (optional) (obscured)
+--azurefiles-client-certificate-path string   Path to a PEM or PKCS12 certificate file including the private key
+--azurefiles-client-id string   The ID of the client in use
+--azurefiles-client-secret string   One of the service principal's client secrets
+--azurefiles-client-send-certificate-chain   Send the certificate chain when using certificate auth
+--azurefiles-connection-string string   Azure Files Connection String
+--azurefiles-encoding Encoding   The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+--azurefiles-endpoint string   Endpoint for the service
+--azurefiles-env-auth   Read credentials from runtime (environment variables, CLI or MSI)
+--azurefiles-key string   Storage Account Shared Key
+--azurefiles-max-stream-size SizeSuffix   Max size for streamed files (default 10Gi)
+--azurefiles-msi-client-id string   Object ID of the user-assigned MSI to use, if any
+--azurefiles-msi-mi-res-id string   Azure resource ID of the user-assigned MSI to use, if any
+--azurefiles-msi-object-id string   Object ID of the user-assigned MSI to use, if any
+--azurefiles-password string   The user's password (obscured)
+--azurefiles-sas-url string   SAS URL
+--azurefiles-service-principal-file string   Path to file containing credentials for use with a service principal
+--azurefiles-share-name string   Azure Files Share Name
+--azurefiles-tenant string   ID of the service principal's tenant. Also called its directory ID
+--azurefiles-upload-concurrency int   Concurrency for multipart uploads (default 16)
+--azurefiles-use-msi   Use a managed service identity to authenticate (only works in Azure)
+--azurefiles-username string   User name (usually an email address)
 --b2-account string   Account ID or Application Key ID
 --b2-chunk-size SizeSuffix   Upload chunk size (default 96Mi)
 --b2-copy-cutoff SizeSuffix   Cutoff for switching to multipart copy (default 4Gi)
 --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
 --b2-download-auth-duration Duration   Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
 --b2-download-url string   Custom endpoint for downloads
---b2-encoding MultiEncoder   The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+--b2-encoding Encoding   The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --b2-endpoint string   Endpoint for the service
 --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files
 --b2-key string   Application Key
+--b2-lifecycle int   Set the number of days deleted files should be kept when creating a bucket
 --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging
---b2-upload-concurrency int   Concurrency for multipart uploads (default 16)
+--b2-upload-concurrency int   Concurrency for multipart uploads (default 4)
 --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200Mi)
 --b2-version-at Time   Show file versions as they were at the specified time (default off)
 --b2-versions   Include old versions in directory listings
@@ -93,7 +118,7 @@ rclone [flags]
 --box-client-id string   OAuth Client Id
 --box-client-secret string   OAuth Client Secret
 --box-commit-retries int   Max number of times to try committing a multipart file (default 100)
---box-encoding MultiEncoder   The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+--box-encoding Encoding   The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
 --box-impersonate string   Impersonate this user ID when using a service account
 --box-list-chunk int   Size of listing chunk 1-1000 (default 1000)
 --box-owned-by string   Only show items owned by the login (email address) passed in
@@ -135,7 +160,7 @@ rclone [flags]
 --chunker-remote string   Remote to chunk/unchunk
 --client-cert string   Client SSL certificate (PEM) for mutual TLS auth
 --client-key string   Client SSL private key (PEM) for mutual TLS auth
---color string   When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
+--color AUTO|NEVER|ALWAYS   When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
 --combine-upstreams SpaceSepList   Upstreams for combining
 --compare-dest stringArray   Include additional comma separated server-side paths during comparison
 --compress-level int   GZIP compression level (-2 to 9) (default -1)
@@ -158,7 +183,7 @@ rclone [flags]
 --crypt-server-side-across-configs   Deprecated: use --server-side-across-configs instead
 --crypt-show-mapping   For all files listed show how the names encrypt
 --crypt-suffix string   If this is set it will override the default suffix of ".bin" (default ".bin")
---cutoff-mode string   Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+--cutoff-mode HARD|SOFT|CAUTIOUS   Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
 --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
 --delete-after   When synchronizing, delete files on destination after transferring (default)
 --delete-before   When synchronizing, delete files on destination before transferring
@@ -176,7 +201,7 @@ rclone [flags]
 --drive-client-secret string   OAuth Client Secret
 --drive-copy-shortcut-content   Server side copy contents of shortcuts instead of the shortcut
 --drive-disable-http2   Disable drive using http2 (default true)
---drive-encoding MultiEncoder   The encoding for the backend (default InvalidUtf8)
+--drive-encoding Encoding   The encoding for the backend (default InvalidUtf8)
 --drive-env-auth   Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
 --drive-fast-list-bug-fix   Work around a bug in Google Drive listing (default true)
@@ -185,17 +210,21 @@ rclone [flags]
 --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs
 --drive-keep-revision-forever   Keep new head revision of each file forever
 --drive-list-chunk int   Size of listing chunk 100-1000, 0 to disable (default 1000)
+--drive-metadata-labels Bits   Control whether labels should be read or written in metadata (default off)
+--drive-metadata-owner Bits   Control whether owner should be read or written in metadata (default read)
+--drive-metadata-permissions Bits   Control whether permissions should be read or written in metadata (default off)
 --drive-pacer-burst int   Number of API calls to allow without sleeping (default 100)
 --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls (default 100ms)
 --drive-resource-key string   Resource key for accessing a link-shared file
 --drive-root-folder-id string   ID of the root folder
---drive-scope string   Scope that rclone should use when requesting access from drive
+--drive-scope string   Comma separated list of scopes that rclone should use when requesting access from drive
 --drive-server-side-across-configs   Deprecated: use --server-side-across-configs instead
 --drive-service-account-credentials string   Service Account Credentials JSON blob
 --drive-service-account-file string   Service Account Credentials JSON file path
 --drive-shared-with-me   Only show files that are shared with me
+--drive-show-all-gdocs   Show all Google Docs including non-exportable ones in listings
 --drive-size-as-quota   Show sizes as storage quota usage, not actual size
---drive-skip-checksum-gphotos   Skip MD5 checksum on Google photos and videos only
+--drive-skip-checksum-gphotos   Skip checksums on Google photos and videos only
 --drive-skip-dangling-shortcuts   If set skip dangling shortcut files
 --drive-skip-gdocs   Skip google documents in all listings
 --drive-skip-shortcuts   If set skip shortcut files
@@ -219,7 +248,7 @@ rclone [flags]
 --dropbox-chunk-size SizeSuffix   Upload chunk size (< 150Mi) (default 48Mi)
 --dropbox-client-id string   OAuth Client Id
 --dropbox-client-secret string   OAuth Client Secret
---dropbox-encoding MultiEncoder   The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+--dropbox-encoding Encoding   The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
 --dropbox-impersonate string   Impersonate this user when using a business account
 --dropbox-pacer-min-sleep Duration   Minimum time to sleep between API calls (default 10ms)
 --dropbox-shared-files   Instructs rclone to work on individual shared files
@@ -228,7 +257,7 @@ rclone [flags]
 --dropbox-token-url string   Token server url
 -n, --dry-run   Do a trial run with no permanent changes
 --dscp string   Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
---dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+--dump DumpFlags   List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
 --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
 --dump-headers   Dump HTTP headers - may contain sensitive info
 --error-on-no-transfer   Sets exit code 9 if no files are transferred, useful in scripts
@@ -239,11 +268,11 @@ rclone [flags]
 --fast-list   Use recursive list if available; uses more memory but fewer transactions
 --fichier-api-key string   Your API Key, get it from https://1fichier.com/console/params.pl
 --fichier-cdn   Set if you wish to use CDN download links
---fichier-encoding MultiEncoder   The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+--fichier-encoding Encoding   The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
 --fichier-file-password string   If you want to download a shared file that is password protected, add this parameter (obscured)
 --fichier-folder-password string   If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
 --fichier-shared-folder string   If you want to download a shared folder, add this parameter
---filefabric-encoding MultiEncoder   The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+--filefabric-encoding Encoding   The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
 --filefabric-permanent-token string   Permanent Authentication Token
 --filefabric-root-folder-id string   ID of the root folder
 --filefabric-token string   Session Token
@@ -263,7 +292,7 @@ rclone [flags]
 --ftp-disable-mlsd   Disable using MLSD even if server advertises support
 --ftp-disable-tls13   Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
 --ftp-disable-utf8   Disable using UTF-8 even if server advertises support
---ftp-encoding MultiEncoder   The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+--ftp-encoding Encoding   The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
 --ftp-explicit-tls   Use Explicit FTPS (FTP over TLS)
 --ftp-force-list-hidden   Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
 --ftp-host string   FTP host to connect to
@@ -285,7 +314,7 @@ rclone [flags]
 --gcs-client-secret string   OAuth Client Secret
 --gcs-decompress   If set this will decompress gzip encoded objects
 --gcs-directory-markers   Upload an empty object with a trailing slash when a new directory is created
---gcs-encoding MultiEncoder   The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+--gcs-encoding Encoding   The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
 --gcs-endpoint string   Endpoint for the service
 --gcs-env-auth   Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string   Location for the newly created buckets
@ -298,9 +327,13 @@ rclone [flags]
--gcs-token-url string Token server url --gcs-token-url string Token server url
--gcs-user-project string User project --gcs-user-project string User project
--gphotos-auth-url string Auth server URL --gphotos-auth-url string Auth server URL
--gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -312,8 +345,8 @@ rclone [flags]
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--header stringArray Set HTTP header for all transactions
@@ -325,7 +358,7 @@ rclone [flags]
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
--hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -344,8 +377,15 @@ rclone [flags]
--ignore-checksum Skip post copy check of checksums
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
--imagekit-only-signed If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
--imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
--imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
--imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
--imagekit-versions Include old versions in directory listings
--immutable Do not modify files, fail if existing files have been modified
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -353,7 +393,7 @@ rclone [flags]
-i, --interactive Enable interactive mode
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -361,7 +401,7 @@ rclone [flags]
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -369,7 +409,7 @@ rclone [flags]
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
@@ -377,10 +417,11 @@ rclone [flags]
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -390,14 +431,14 @@ rclone [flags]
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -416,7 +457,7 @@ rclone [flags]
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--mega-debug Output more debug from Mega
--mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -429,6 +470,7 @@ rclone [flags]
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--metadata-mapper SpaceSepList Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -447,7 +489,7 @@ rclone [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip
--no-traverse Don't traverse destination file system on copy
--no-unicode-normalization Don't normalize unicode characters in filenames
--no-update-modtime Don't update destination modtime if files identical
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
--onedrive-auth-url string Auth server URL
@@ -455,9 +497,10 @@ rclone [flags]
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
@@ -478,7 +521,7 @@ rclone [flags]
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -495,15 +538,16 @@ rclone [flags]
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--password-command SpaceSepList Command for supplying password for encrypted configuration
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@@ -513,7 +557,7 @@ rclone [flags]
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -525,7 +569,7 @@ rclone [flags]
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
-P, --progress Show progress during transfer
@@ -533,7 +577,7 @@ rclone [flags]
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -542,13 +586,13 @@ rclone [flags]
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -557,7 +601,7 @@ rclone [flags]
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@@ -604,7 +648,7 @@ rclone [flags]
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
--s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -638,14 +682,16 @@ rclone [flags]
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -656,6 +702,7 @@ rclone [flags]
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -690,7 +737,7 @@ rclone [flags]
 --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
 --sharefile-client-id string OAuth Client Id
 --sharefile-client-secret string OAuth Client Secret
---sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+--sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
 --sharefile-endpoint string Endpoint for API calls
 --sharefile-root-folder-id string ID of the root folder
 --sharefile-token string OAuth Access Token as a JSON blob
@@ -698,13 +745,13 @@ rclone [flags]
 --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
 --sia-api-password string Sia Daemon API Password (obscured)
 --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
---sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+--sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
 --sia-user-agent string Siad User Agent (default "Sia-Agent")
---size-only Skip based on size only, not mod-time or checksum
+--size-only Skip based on size only, not modtime or checksum
 --skip-links Don't warn about skipped symlinks
 --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
 --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
---smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+--smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
 --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
 --smb-host string SMB server hostname to connect to
 --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -714,7 +761,7 @@ rclone [flags]
 --smb-user string SMB username (default "$USER")
 --stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
 --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
---stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+--stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
 --stats-one-line Make the stats fit on one line
 --stats-one-line-date Enable --stats-one-line and add current date/time prefix
 --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -732,7 +779,7 @@ rclone [flags]
 --sugarsync-authorization string Sugarsync authorization
 --sugarsync-authorization-expiry string Sugarsync authorization expiry
 --sugarsync-deleted-id string Sugarsync deleted folder id
---sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+--sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
 --sugarsync-hard-delete Permanently delete files if true
 --sugarsync-private-access-key string Sugarsync Private Access Key
 --sugarsync-refresh-token string Sugarsync refresh token
@@ -746,7 +793,7 @@ rclone [flags]
 --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
 --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
 --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
---swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
 --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
 --swift-env-auth Get swift credentials from environment variables in standard OpenStack form
 --swift-key string API key or password (OS_PASSWORD)
@@ -778,13 +825,13 @@ rclone [flags]
 --union-upstreams string List of space separated upstreams
 -u, --update Skip files that are newer on the destination
 --uptobox-access-token string Your access token
---uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
 --uptobox-private Set to make uploaded files private
 --use-cookies Enable session cookiejar
 --use-json-log Use json log format
 --use-mmap Use mmap allocator (see docs)
 --use-server-modtime Use server modified time instead of object metadata
---user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
 -v, --verbose count Print lots more stuff (repeat for more)
 -V, --version Print the version number
 --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
@@ -800,14 +847,14 @@ rclone [flags]
 --yandex-auth-url string Auth server URL
 --yandex-client-id string OAuth Client Id
 --yandex-client-secret string OAuth Client Secret
---yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+--yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
 --yandex-hard-delete Delete files permanently rather than putting them into the trash
 --yandex-token string OAuth Access Token as a JSON blob
 --yandex-token-url string Token server url
 --zoho-auth-url string Auth server URL
 --zoho-client-id string OAuth Client Id
 --zoho-client-secret string OAuth Client Secret
---zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+--zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
 --zoho-region string Zoho region to connect to
 --zoho-token string OAuth Access Token as a JSON blob
 --zoho-token-url string Token server url
@@ -821,7 +868,7 @@ rclone [flags]
 * [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
 * [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
 * [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
-* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
+* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the destination against a SUM file.
 * [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
 * [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
 * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.


@@ -59,11 +59,11 @@ Flags for anything which can Copy a file.
 -c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
 --compare-dest stringArray Include additional comma separated server-side paths during comparison
 --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
---cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
 --ignore-case-sync Ignore case when synchronizing
 --ignore-checksum Skip post copy check of checksums
 --ignore-existing Skip all files that exist on destination
---ignore-size Ignore size when skipping use mod-time or checksum
+--ignore-size Ignore size when skipping use modtime or checksum
 -I, --ignore-times Don't skip files that match size and time - transfer all files
 --immutable Do not modify files, fail if existing files have been modified
 --inplace Download directly to destination file instead of atomic download to temp/rename
@@ -78,11 +78,12 @@ Flags for anything which can Copy a file.
 --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
 --no-check-dest Don't check the destination, copy regardless
 --no-traverse Don't traverse destination file system on copy
---no-update-modtime Don't update destination mod-time if files identical
+--no-update-modtime Don't update destination modtime if files identical
 --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
 --refresh-times Refresh the modtime of remote files
 --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
---size-only Skip based on size only, not mod-time or checksum
+--size-only Skip based on size only, not modtime or checksum
 --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
 -u, --update Skip files that are newer on the destination
 ```


@@ -1,6 +1,6 @@
 ---
 title: "rclone checksum"
-description: "Checks the files in the source against a SUM file."
+description: "Checks the files in the destination against a SUM file."
 slug: rclone_checksum
 url: /commands/rclone_checksum/
 groups: Filter,Listing
@@ -9,17 +9,20 @@ versionIntroduced: v1.56
 ---
 # rclone checksum
-Checks the files in the source against a SUM file.
+Checks the files in the destination against a SUM file.
 ## Synopsis
-Checks that hashsums of source files match the SUM file.
+Checks that hashsums of destination files match the SUM file.
 It compares hashes (MD5, SHA1, etc) and logs a report of files which
 don't match. It doesn't alter the file system.
-If you supply the `--download` flag, it will download the data from remote
-and calculate the contents hash on the fly. This can be useful for remotes
+The sumfile is treated as the source and the dst:path is treated as
+the destination for the purposes of the output.
+If you supply the `--download` flag, it will download the data from the remote
+and calculate the content hash on the fly. This can be useful for remotes
 that don't support hashes or if you really want to check all the data.
 Note that hash values in the SUM file are treated as case insensitive.
@@ -50,7 +53,7 @@ option for more information.
 ```
-rclone checksum <hash> sumfile src:path [flags]
+rclone checksum <hash> sumfile dst:path [flags]
 ```
 ## Options
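Taken out of diff form, the updated command shape above can be sketched as a short usage example. The remote name and SUM file names here are illustrative placeholders, not values from this commit:

```shell
# Check files on the destination remote against a local MD5 SUM file.
# "MD5SUMS" and "remote:backup" are placeholder names.
rclone checksum md5 MD5SUMS remote:backup

# For remotes without native hash support, --download fetches each
# file and computes the content hash on the fly (slower but thorough).
rclone checksum sha1 SHA1SUMS remote:backup --download
```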


@@ -91,11 +91,11 @@ Flags for anything which can Copy a file.
 -c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
 --compare-dest stringArray Include additional comma separated server-side paths during comparison
 --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
---cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
 --ignore-case-sync Ignore case when synchronizing
 --ignore-checksum Skip post copy check of checksums
 --ignore-existing Skip all files that exist on destination
---ignore-size Ignore size when skipping use mod-time or checksum
+--ignore-size Ignore size when skipping use modtime or checksum
 -I, --ignore-times Don't skip files that match size and time - transfer all files
 --immutable Do not modify files, fail if existing files have been modified
 --inplace Download directly to destination file instead of atomic download to temp/rename
@@ -110,11 +110,12 @@ Flags for anything which can Copy a file.
 --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
 --no-check-dest Don't check the destination, copy regardless
 --no-traverse Don't traverse destination file system on copy
---no-update-modtime Don't update destination mod-time if files identical
+--no-update-modtime Don't update destination modtime if files identical
 --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
 --refresh-times Refresh the modtime of remote files
 --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
---size-only Skip based on size only, not mod-time or checksum
+--size-only Skip based on size only, not modtime or checksum
 --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
 -u, --update Skip files that are newer on the destination
 ```


@@ -63,11 +63,11 @@ Flags for anything which can Copy a file.
 -c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
 --compare-dest stringArray Include additional comma separated server-side paths during comparison
 --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
---cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
 --ignore-case-sync Ignore case when synchronizing
 --ignore-checksum Skip post copy check of checksums
 --ignore-existing Skip all files that exist on destination
---ignore-size Ignore size when skipping use mod-time or checksum
+--ignore-size Ignore size when skipping use modtime or checksum
 -I, --ignore-times Don't skip files that match size and time - transfer all files
 --immutable Do not modify files, fail if existing files have been modified
 --inplace Download directly to destination file instead of atomic download to temp/rename
@@ -82,11 +82,12 @@ Flags for anything which can Copy a file.
 --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
 --no-check-dest Don't check the destination, copy regardless
 --no-traverse Don't traverse destination file system on copy
---no-update-modtime Don't update destination mod-time if files identical
+--no-update-modtime Don't update destination modtime if files identical
 --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
 --refresh-times Refresh the modtime of remote files
 --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
---size-only Skip based on size only, not mod-time or checksum
+--size-only Skip based on size only, not modtime or checksum
 --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
 -u, --update Skip files that are newer on the destination
 ```


@@ -40,10 +40,6 @@ Run without a hash to see the list of all supported hashes, e.g.
 * whirlpool
 * crc32
 * sha256
-* dropbox
-* hidrive
-* mailru
-* quickxor
 Then
@@ -53,7 +49,7 @@ Note that hash names are case insensitive and values are output in lower case.
 ```
-rclone hashsum <hash> remote:path [flags]
+rclone hashsum [<hash> remote:path] [flags]
 ```
 ## Options
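The relaxed usage line `rclone hashsum [<hash> remote:path]` makes both arguments optional, so the command can now be run bare to print the supported hash list mentioned above. A brief sketch, with the remote path as a placeholder:

```shell
# With no arguments, print the list of supported hash types
# (md5, sha1, whirlpool, crc32, sha256, ...).
rclone hashsum

# Hash every file under the remote using one of the listed types.
rclone hashsum sha256 remote:path
```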


@@ -13,7 +13,6 @@ Mount the remote as file system on a mountpoint.
 ## Synopsis
 rclone mount allows Linux, FreeBSD, macOS and Windows to
 mount any of Rclone's cloud storage systems as a file system with
 FUSE.
@@ -268,11 +267,17 @@ does not suffer from the same limitations.
 ## Mounting on macOS
-Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
+Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/)
 (also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
 FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
 which "mounts" via an NFSv4 local server.
+# NFS mount
+This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) command and mounts
+it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to
+send SIGTERM signal to the rclone process using |kill| command to stop the mount.
 ### macFUSE Notes
 If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
@@ -322,6 +327,8 @@ sequentially, it can only seek when reading. This means that many
 applications won't work with their files on an rclone mount without
 `--vfs-cache-mode writes` or `--vfs-cache-mode full`.
 See the [VFS File Caching](#vfs-file-caching) section for more info.
+When using NFS mount on macOS, if you don't specify |--vfs-cache-mode|
+the mount point will be read-only.
 The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
 do not support the concept of empty directories, so empty
@@ -468,7 +475,6 @@ Mount option syntax includes a few extra options treated specially:
 - `vv...` will be transformed into appropriate `--verbose=N`
 - standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and alike
 are intended only for Automountd and ignored by rclone.
 ## VFS - Virtual File System
 This command uses the VFS layer. This adapts the cloud storage objects
@@ -850,6 +856,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
 --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-refresh Refreshes the directory cache recursively on start
 --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
 --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
 --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
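The new macOS NFS notes amount to the following usage pattern. This is a hedged sketch using only flags shown in this commit; the mountpoint path and the pid lookup are illustrative, not prescribed by the docs:

```shell
# Mount with file caching enabled; per the note above, an NFS-backed
# macOS mount is read-only unless some --vfs-cache-mode is given.
rclone mount remote: /path/to/mountpoint --vfs-cache-mode full --daemon

# A mount started with --daemon is stopped by sending SIGTERM to the
# rclone process (the pgrep pattern here is just one way to find it).
kill "$(pgrep -f 'rclone mount')"
```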


@@ -67,11 +67,11 @@ Flags for anything which can Copy a file.
  -c, --checksum                          Check for changes with size & checksum (if available, or fallback to size only).
      --compare-dest stringArray          Include additional comma separated server-side paths during comparison
      --copy-dest stringArray             Implies --compare-dest but also copies files from paths into destination
      --cutoff-mode HARD|SOFT|CAUTIOUS    Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
      --ignore-case-sync                  Ignore case when synchronizing
      --ignore-checksum                   Skip post copy check of checksums
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use modtime or checksum
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --immutable                         Do not modify files, fail if existing files have been modified
      --inplace                           Download directly to destination file instead of atomic download to temp/rename
@@ -86,11 +86,12 @@ Flags for anything which can Copy a file.
      --multi-thread-write-buffer-size SizeSuffix   In memory buffer size for writing when in multi-thread mode (default 128Ki)
      --no-check-dest                     Don't check the destination, copy regardless
      --no-traverse                       Don't traverse destination file system on copy
      --no-update-modtime                 Don't update destination modtime if files identical
      --order-by string                   Instructions on how to order the transfers, e.g. 'size,descending'
      --partial-suffix string             Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
      --refresh-times                     Refresh the modtime of remote files
      --server-side-across-configs        Allow server-side operations (e.g. copy) to work across different configs
      --size-only                         Skip based on size only, not modtime or checksum
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
  -u, --update                            Skip files that are newer on the destination
```


@@ -66,11 +66,11 @@ Flags for anything which can Copy a file.
  -c, --checksum                          Check for changes with size & checksum (if available, or fallback to size only).
      --compare-dest stringArray          Include additional comma separated server-side paths during comparison
      --copy-dest stringArray             Implies --compare-dest but also copies files from paths into destination
      --cutoff-mode HARD|SOFT|CAUTIOUS    Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
      --ignore-case-sync                  Ignore case when synchronizing
      --ignore-checksum                   Skip post copy check of checksums
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use modtime or checksum
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --immutable                         Do not modify files, fail if existing files have been modified
      --inplace                           Download directly to destination file instead of atomic download to temp/rename
@@ -85,11 +85,12 @@ Flags for anything which can Copy a file.
      --multi-thread-write-buffer-size SizeSuffix   In memory buffer size for writing when in multi-thread mode (default 128Ki)
      --no-check-dest                     Don't check the destination, copy regardless
      --no-traverse                       Don't traverse destination file system on copy
      --no-update-modtime                 Don't update destination modtime if files identical
      --order-by string                   Instructions on how to order the transfers, e.g. 'size,descending'
      --partial-suffix string             Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
      --refresh-times                     Refresh the modtime of remote files
      --server-side-across-configs        Allow server-side operations (e.g. copy) to work across different configs
      --size-only                         Skip based on size only, not modtime or checksum
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
  -u, --update                            Skip files that are newer on the destination
```


@@ -96,6 +96,17 @@ to be used within the template to server pages:
|-- .Size    | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
The server also makes the following functions available so that they can be used within the
template. These functions help extend the options for dynamic rendering of HTML. They can
be used to render HTML based on specific conditions.
| Function | Description |
| :---------- | :---------- |
| afterEpoch | Returns the time since the epoch for the given time. |
| contains | Checks whether a given substring is present or not in a given string. |
| hasPrefix | Checks whether the given string begins with the specified prefix. |
| hasSuffix | Checks whether the given string ends with the specified suffix. |
### Authentication
By default this will serve files without needing a login.


@@ -12,7 +12,6 @@ Update the rclone binary.
## Synopsis
This command downloads the latest release of rclone and replaces the
currently running binary. The download is verified with a hashsum and
cryptographically signed signature; see [the release signing


@@ -40,7 +40,9 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone serve docker](/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve nfs](/commands/rclone_serve_nfs/) - Serve the remote as an NFS mount
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve s3](/commands/rclone_serve_s3/) - Serve remote:path over s3.
* [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.


@@ -36,7 +36,6 @@ default "rclone (hostname)".
Use `--log-trace` in conjunction with `-vv` to enable additional debug
logging of all UPNP traffic.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -405,6 +404,7 @@ rclone serve dlna remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
      --vfs-refresh                            Refreshes the directory cache recursively on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)


@@ -13,7 +13,6 @@ Serve any remote on docker's volume plugin API.
## Synopsis
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.
@@ -52,7 +51,6 @@ directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line as well as set path to the
config file and cache directory or adjust logging verbosity.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -439,6 +437,7 @@ rclone serve docker [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
      --vfs-refresh                            Refreshes the directory cache recursively on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)


@@ -33,7 +33,6 @@ then using Authentication is advised - see the next section for info.
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -486,6 +485,7 @@ rclone serve ftp remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
      --vfs-refresh                            Refreshes the directory cache recursively on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)


@@ -97,6 +97,17 @@ to be used within the template to server pages:
|-- .Size    | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
The server also makes the following functions available so that they can be used within the
template. These functions help extend the options for dynamic rendering of HTML. They can
be used to render HTML based on specific conditions.
| Function | Description |
| :---------- | :---------- |
| afterEpoch | Returns the time since the epoch for the given time. |
| contains | Checks whether a given substring is present or not in a given string. |
| hasPrefix | Checks whether the given string begins with the specified prefix. |
| hasSuffix | Checks whether the given string ends with the specified suffix. |
### Authentication
By default this will serve files without needing a login.
@@ -123,7 +134,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -585,6 +595,7 @@ rclone serve http remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
      --vfs-refresh                            Refreshes the directory cache recursively on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)


@@ -0,0 +1,450 @@
---
title: "rclone serve nfs"
description: "Serve the remote as an NFS mount"
slug: rclone_serve_nfs
url: /commands/rclone_serve_nfs/
groups: Filter
versionIntroduced: v1.65
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/nfs/ and as part of making a release run "make commanddocs"
---
# rclone serve nfs
Serve the remote as an NFS mount
## Synopsis
Create an NFS server that serves the given remote over the network.
The primary purpose for this command is to enable [mount command](/commands/rclone_mount/) on recent macOS versions where
installing FUSE is very cumbersome.
Since this is running on NFSv3, no authentication method is available: any client
will be able to access the data. To limit access, you can serve NFS on a loopback address
and rely on secure tunnels (such as SSH). For this reason, by default, a random TCP port is chosen and the loopback interface is used for the listening address,
meaning that the server is only available to the local machine. If you want other machines to access the
NFS mount over the local network, you need to specify the listening address and port using the `--addr` flag.
Modifying files through NFS protocol requires VFS caching. Usually you will need to specify `--vfs-cache-mode`
in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode,
the mount will be read-only.
To serve NFS over the network, use the following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
Here we specify a fixed port so that we can reuse it in the mount command.
To mount the server under Linux/macOS, use the following command:
mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
Where `$PORT` is the same port number we used in the serve nfs command.
This feature is only available on Unix platforms.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk
filing system.
Cloud storage objects have lots of properties which aren't like disk
files - you can't extend them or write to the middle of them, so the
VFS layer has to deal with that. Because there is no one right way of
doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info
about files and directories (but not the data) in memory.
## VFS Directory Cache
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
the directory cache expires if the backend configured does not support
polling for changes. If the backend supports polling, changes will be
picked up within the polling interval.
You can send a `SIGHUP` signal to rclone for it to flush all
directory caches, regardless of how old they are. Assuming only one
rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
## VFS File Buffering
The `--buffer-size` flag determines the amount of memory,
that will be used to buffer data in advance.
Each open file will try to keep the specified amount of data in memory
at all times. The buffered data is bound to one open file and won't be
shared.
This flag is an upper limit for the used memory per open file. The
buffer will only use memory for data that is downloaded but not
yet read. If the buffer is empty, only a small amount of memory will
be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
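As a back-of-the-envelope illustration (not rclone's actual memory accounting), the worst case is simply the product of the two quantities:

```python
def max_buffer_memory(buffer_size_mib: int, open_files: int) -> int:
    """Worst-case read-ahead memory in MiB: each open file may hold up
    to --buffer-size of downloaded-but-unread data, and buffers are not
    shared between files."""
    return buffer_size_mib * open_files

# e.g. a 16 MiB buffer (rclone's default --buffer-size) with 10 open files
print(max_buffer_memory(16, 10))  # 160 MiB in the worst case
```

In practice usage is normally lower, since each buffer only holds data that has been downloaded but not yet read.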
## VFS File Caching
These flags control the VFS file caching options. File caching is
necessary to make the VFS layer appear compatible with a normal file
system. It can be disabled at the cost of some compatibility.
For example you'll need to enable VFS caching if you want to read and
write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
can be controlled with `--cache-dir` or setting the appropriate
environment variable.
The cache has 4 different modes selected by `--vfs-cache-mode`.
The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
1 hour will start evicting files from cache that haven't been accessed
for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
and will wait for 1 more hour before evicting. Specify the time with
standard notation, s, m, h, d, w.
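The eviction order described above can be sketched as a small model (illustrative only, not rclone's implementation; the file records and quotas below are made up):

```python
import time

def evict(files, max_size, max_age, now=None):
    """Simplified sketch of VFS cache eviction: age out files not
    accessed for max_age, then evict least-recently-accessed files
    until total size fits max_size, never evicting open files.

    files: dict name -> {"size": bytes, "atime": last-access epoch
    seconds, "open": bool}.  Returns the list of names to evict.
    """
    now = time.time() if now is None else now
    evicted = []
    # 1. Age-based eviction: anything not accessed for max_age goes.
    for name, f in files.items():
        if not f["open"] and now - f["atime"] > max_age:
            evicted.append(name)
    remaining = {n: f for n, f in files.items() if n not in evicted}
    total = sum(f["size"] for f in remaining.values())
    # 2. Size-based eviction: least recently accessed first,
    #    skipping open files (they cannot be evicted).
    for name, f in sorted(remaining.items(), key=lambda kv: kv[1]["atime"]):
        if total <= max_size:
            break
        if f["open"]:
            continue
        evicted.append(name)
        total -= f["size"]
    return evicted
```

Note how an open file survives even when the cache is over quota, matching the behaviour described above.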
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
This will mean some operations are not possible
* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for
write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
first.
This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing
intervals up to 1 minute.
### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has downloaded.
So if an application only reads the start of each file, then rclone
will only buffer the start of the file. These files will appear to be
their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
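As a rough sketch of the idea (illustrative only; rclone's real fingerprint format and per-backend logic differ), `--vfs-fast-fingerprint` can be modelled as dropping the attributes that are slow to read:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Object:
    size: int
    modtime: Optional[float]  # slow on e.g. s3/swift/ftp: extra API call
    hash: Optional[str]       # slow on e.g. local/sftp: reads whole file

def fingerprint(obj: Object, fast: bool) -> str:
    """Build a fingerprint from size, modtime and hash where available;
    with fast=True (--vfs-fast-fingerprint) the slow attributes are
    left out, trading accuracy for speed."""
    parts = [str(obj.size)]
    if not fast:
        if obj.modtime is not None:
            parts.append(str(obj.modtime))
        if obj.hash is not None:
            parts.append(obj.hash)
    return ",".join(parts)
```

Changing `fast` changes the fingerprint string, which is why toggling the flag can invalidate existing cache entries.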
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
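The doubling behaviour can be modelled in a few lines; this sketch reproduces the worked examples above (it is not rclone's code):

```python
def chunk_ranges(file_size, chunk_size, limit=None):
    """Yield (start, end) byte ranges requested for a sequential read.
    The chunk size doubles after each read until `limit` is reached;
    limit=None models 'off' (unlimited doubling), and a limit not
    greater than the chunk size (e.g. 0) disables doubling."""
    offset = 0
    while offset < file_size:
        end = min(offset + chunk_size, file_size)
        yield (offset, end)
        offset = end
        if limit is None:
            chunk_size *= 2
        elif limit > chunk_size:
            chunk_size = min(chunk_size * 2, limit)
```

With a chunk size of 100 and a limit of 500 (think MiB) this yields 0-100, 100-300, 300-700, 700-1200, 1200-1700, matching the example above.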
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
```
--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
```
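The waiting behaviour can be modelled as follows (an illustrative Python sketch, not rclone's Go code): a request at an unexpected offset waits up to the configured window for the in-sequence request to arrive before giving up and seeking.

```python
import threading
import time

class SequentialWindow:
    """Model of --vfs-read-wait: an out-of-order read waits up to
    `wait` seconds for the expected in-sequence offset before seeking."""

    def __init__(self, wait=0.020):
        self.wait = wait
        self.expected = 0
        self.cond = threading.Condition()

    def read(self, offset, length):
        """Return True if this read had to seek, False if in sequence."""
        with self.cond:
            deadline = time.monotonic() + self.wait
            while offset != self.expected:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break  # window expired: give up and seek
                self.cond.wait(remaining)
            seeked = offset != self.expected
            self.expected = offset + length
            self.cond.notify_all()
            return seeked
```

In-sequence reads return immediately; a read at the wrong offset only forces a seek after the window expires, which is why a larger `--vfs-read-wait` trades latency for fewer seeks.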
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
```
--transfers int  Number of file transfers to run in parallel (default 4)
```
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving:
although existing files can be opened using any case, the exact case used
to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if no file with exactly that
name is found but one differing only by case exists, rclone will
transparently fix up the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
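The fixup rule above can be sketched in Python (a simplification; the real matching happens inside the VFS layer and its details, such as Unicode case folding, may differ):

```python
def fixup_name(existing, requested):
    """Return the name to use on the remote for `requested`:
    an exact match wins; otherwise a unique case-insensitive match is
    substituted; otherwise the requested name is passed through."""
    if requested in existing:
        return requested
    matches = [n for n in existing if n.lower() == requested.lower()]
    if len(matches) == 1:
        return matches[0]
    return requested

files = {"Report.txt", "notes.md"}
fixup_name(files, "report.TXT")  # existing file, case fixed up
fixup_name(files, "Report.txt")  # exact match used as-is
fixup_name(files, "new.txt")     # new file, created with the requested case
```
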
## VFS Disk Options
This flag allows you to manually set the statistics about the filesystem.
It can be useful when those statistics cannot be read correctly automatically.
```
--vfs-disk-space-total-size  Manually set the total disk space size (example: 256G, default: -1)
```
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
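The effect of `--vfs-used-is-size` can be illustrated with a recursive scan, shown here over a local directory tree (a sketch only; rclone performs the equivalent walk over the remote):

```python
import os

def used_bytes(root):
    """Sum the sizes of all files under root, the way --vfs-used-is-size
    derives 'used' space by scanning instead of asking the backend."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total
```

This is why the flag is expensive: every `df` refresh implies one listing transaction per directory, just like `rclone size`.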
```
rclone serve nfs remote:path [flags]
```
## Options
```
--addr string IPaddress:Port or :Port to bind server to
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
```
## Filter Options
Flags for filtering directory listings.
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
View File
@ -71,8 +71,19 @@ Note that setting `disable_multipart_uploads = true` is to work around
## Bugs
When uploading multipart files `serve s3` holds all the parts in
memory (see [#7453](https://github.com/rclone/rclone/issues/7453)).
This is a limitation of the library rclone uses for serving S3 and will
hopefully be fixed at some point.
Multipart server side copies do not work (see
[#7454](https://github.com/rclone/rclone/issues/7454)). These take a
very long time and eventually fail. The default threshold for
multipart server side copies is 5G which is the maximum it can be, so
files above this size will fail to be server side copied.
For a current list of `serve s3` bugs see the [serve
s3](https://github.com/rclone/rclone/labels/serve%20s3) bug category
on GitHub.
## Limitations
View File
@ -65,7 +65,6 @@ used. Omitting "restrict" and using `--sftp-path-override` to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@ -518,6 +517,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
View File
@ -126,6 +126,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
The server also makes the following functions available so that they can be used within the
template. These functions help extend the options for dynamic rendering of HTML. They can
be used to render HTML based on specific conditions.
| Function | Description |
| :---------- | :---------- |
| afterEpoch | Returns the time since the epoch for the given time. |
| contains | Checks whether a given substring is present or not in a given string. |
| hasPrefix | Checks whether the given string begins with the specified prefix. |
| hasSuffix | Checks whether the given string ends with the specified suffix. |
### Authentication
By default this will serve files without needing a login.
@ -152,7 +163,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@ -616,6 +626,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
View File
@ -70,11 +70,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@ -89,11 +89,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
View File
@ -776,6 +776,31 @@ Properties:
- Type: bool
- Default: false
#### --drive-show-all-gdocs
Show all Google Docs including non-exportable ones in listings.
If you try a server side copy on a Google Form without this flag, you
will get this error:
No export formats found for "application/vnd.google-apps.form"
However adding this flag will allow the form to be server side copied.
Note that rclone doesn't add extensions to the Google Docs file names
in this mode.
Do **not** use this flag when trying to download Google Docs - rclone
will fail to download them.
Properties:
- Config: show_all_gdocs
- Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
- Type: bool
- Default: false
#### --drive-skip-checksum-gphotos
Skip checksums on Google photos and videos only.
@ -1238,6 +1263,98 @@ Properties:
- Type: bool
- Default: true
#### --drive-metadata-owner
Control whether owner should be read or written in metadata.
Owner is a standard part of the file metadata so is easy to read. But it
isn't always desirable to set the owner from the metadata.
Note that you can't set the owner on Shared Drives, and that setting
ownership will generate an email to the new owner (this can't be
disabled), and you can't transfer ownership to someone outside your
organization.
Properties:
- Config: metadata_owner
- Env Var: RCLONE_DRIVE_METADATA_OWNER
- Type: Bits
- Default: read
- Examples:
- "off"
- Do not read or write the value
- "read"
- Read the value only
- "write"
- Write the value only
- "read,write"
- Read and Write the value.
#### --drive-metadata-permissions
Control whether permissions should be read or written in metadata.
Reading permissions metadata from files can be done quickly, but it
isn't always desirable to set the permissions from the metadata.
Note that rclone drops any inherited permissions on Shared Drives and
any owner permission on My Drives as these are duplicated in the owner
metadata.
Properties:
- Config: metadata_permissions
- Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
- Type: Bits
- Default: off
- Examples:
- "off"
- Do not read or write the value
- "read"
- Read the value only
- "write"
- Write the value only
- "read,write"
- Read and Write the value.
#### --drive-metadata-labels
Control whether labels should be read or written in metadata.
Reading labels metadata from files takes an extra API transaction and
will slow down listings. It isn't always desirable to set the labels
from the metadata.
The format of labels is documented in the drive API documentation at
https://developers.google.com/drive/api/reference/rest/v3/Label -
rclone just provides a JSON dump of this format.
When setting labels, the label and fields must already exist - rclone
will not create them. This means that if you are transferring labels
from two different accounts you will have to create the labels in
advance and use the metadata mapper to translate the IDs between the
two accounts.
Properties:
- Config: metadata_labels
- Env Var: RCLONE_DRIVE_METADATA_LABELS
- Type: Bits
- Default: off
- Examples:
- "off"
- Do not read or write the value
- "read"
- Read the value only
- "write"
- Write the value only
- "read,write"
- Read and Write the value.
#### --drive-encoding
The encoding for the backend.
@ -1248,7 +1365,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@ -1269,6 +1386,29 @@ Properties:
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
### Metadata
User metadata is stored in the properties field of the drive object.
Here are the possible system metadata items for the drive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| btime | Time of file birth (creation) with ms accuracy. Note that this is only writable on fresh uploads - it can't be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
| content-type | The MIME type of the file. | string | text/plain | N |
| copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
| description | A short description of the file. | string | Contract for signing | N |
| folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
| labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
| mtime | Time of last modification with ms accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
| owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user@example.com | N |
| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
| starred | Whether the user has starred the file. | boolean | false | N |
| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
See the [metadata](/docs/#metadata) docs for more info.
## Backend commands
Here are the commands specific to the drive backend.
View File
@ -343,6 +343,30 @@ Properties:
- Type: bool
- Default: false
#### --dropbox-pacer-min-sleep
Minimum time to sleep between API calls.
Properties:
- Config: pacer_min_sleep
- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
- Type: Duration
- Default: 10ms
#### --dropbox-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_DROPBOX_ENCODING
- Type: Encoding
- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@ -429,30 +453,6 @@ Properties:
- Type: Duration
- Default: 10m0s
#### --dropbox-pacer-min-sleep
Minimum time to sleep between API calls.
Properties:
- Config: pacer_min_sleep
- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
- Type: Duration
- Default: 10ms
#### --dropbox-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_DROPBOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations
View File
@ -192,7 +192,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
View File
@ -271,7 +271,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
View File
@ -18,11 +18,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@ -37,11 +37,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@ -111,7 +112,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
```
@ -134,7 +135,7 @@ General configuration of rclone.
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--disable string Disable a comma separated list of features (use --disable help to see a list)
@ -163,7 +164,7 @@ Flags for developers.
```
--cpuprofile string Write cpu profile to file
--dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--memprofile string Write memory profile to file
@ -217,7 +218,7 @@ Logging and statistics.
```
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
-P, --progress Show progress during transfer
@@ -225,7 +226,7 @@ Logging and statistics.
 -q, --quiet Print as little stuff as possible
 --stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
 --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
---stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+--stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
 --stats-one-line Make the stats fit on one line
 --stats-one-line-date Enable --stats-one-line and add current date/time prefix
 --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -249,6 +250,7 @@ Flags to control metadata.
 --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
 --metadata-include stringArray Include metadatas matching pattern
 --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+--metadata-mapper SpaceSepList Program to run to transforming metadata before upload
 --metadata-set stringArray Add metadata key=value when uploading
 ```
@@ -297,13 +299,13 @@ Backend only flags. These can be set in the config file also.
 --acd-auth-url string Auth server URL
 --acd-client-id string OAuth Client Id
 --acd-client-secret string OAuth Client Secret
---acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+--acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
 --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
 --acd-token string OAuth Access Token as a JSON blob
 --acd-token-url string Token server url
 --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
 --alias-remote string Remote or path to alias
---azureblob-access-tier string Access tier of blob: hot, cool or archive
+--azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
 --azureblob-account string Azure Storage Account Name
 --azureblob-archive-tier-delete Delete archive tier blobs before overwriting
 --azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -314,7 +316,7 @@ Backend only flags. These can be set in the config file also.
 --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
 --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --azureblob-disable-checksum Don't store MD5 checksum with object metadata
---azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
 --azureblob-endpoint string Endpoint for the service
 --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
 --azureblob-key string Storage Account Shared Key
@@ -334,18 +336,43 @@ Backend only flags. These can be set in the config file also.
 --azureblob-use-emulator Uses local storage emulator if provided as 'true'
 --azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
 --azureblob-username string User name (usually an email address)
+--azurefiles-account string Azure Storage Account Name
+--azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+--azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+--azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+--azurefiles-client-id string The ID of the client in use
+--azurefiles-client-secret string One of the service principal's client secrets
+--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+--azurefiles-connection-string string Azure Files Connection String
+--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+--azurefiles-endpoint string Endpoint for the service
+--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+--azurefiles-key string Storage Account Shared Key
+--azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+--azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+--azurefiles-password string The user's password (obscured)
+--azurefiles-sas-url string SAS URL
+--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+--azurefiles-share-name string Azure Files Share Name
+--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
+--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+--azurefiles-username string User name (usually an email address)
 --b2-account string Account ID or Application Key ID
 --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
 --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
 --b2-disable-checksum Disable checksums for large (> upload cutoff) files
 --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
 --b2-download-url string Custom endpoint for downloads
---b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+--b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --b2-endpoint string Endpoint for the service
 --b2-hard-delete Permanently delete files on remote removal, otherwise hide files
 --b2-key string Application Key
+--b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
 --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
---b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+--b2-upload-concurrency int Concurrency for multipart uploads (default 4)
 --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
 --b2-version-at Time Show file versions as they were at the specified time (default off)
 --b2-versions Include old versions in directory listings
@@ -356,7 +383,7 @@ Backend only flags. These can be set in the config file also.
 --box-client-id string OAuth Client Id
 --box-client-secret string OAuth Client Secret
 --box-commit-retries int Max number of times to try committing a multipart file (default 100)
---box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+--box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
 --box-impersonate string Impersonate this user ID when using a service account
 --box-list-chunk int Size of listing chunk 1-1000 (default 1000)
 --box-owned-by string Only show items owned by the login (email address) passed in
@@ -414,7 +441,7 @@ Backend only flags. These can be set in the config file also.
 --drive-client-secret string OAuth Client Secret
 --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
 --drive-disable-http2 Disable drive using http2 (default true)
---drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+--drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
 --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
 --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -423,17 +450,21 @@ Backend only flags. These can be set in the config file also.
 --drive-import-formats string Comma separated list of preferred formats for uploading Google docs
 --drive-keep-revision-forever Keep new head revision of each file forever
 --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
 --drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
 --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
 --drive-resource-key string Resource key for accessing a link-shared file
 --drive-root-folder-id string ID of the root folder
---drive-scope string Scope that rclone should use when requesting access from drive
+--drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
 --drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
 --drive-service-account-credentials string Service Account Credentials JSON blob
 --drive-service-account-file string Service Account Credentials JSON file path
 --drive-shared-with-me Only show files that are shared with me
+--drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
 --drive-size-as-quota Show sizes as storage quota usage, not actual size
---drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+--drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
 --drive-skip-dangling-shortcuts If set skip dangling shortcut files
 --drive-skip-gdocs Skip google documents in all listings
 --drive-skip-shortcuts If set skip shortcut files
@@ -457,7 +488,7 @@ Backend only flags. These can be set in the config file also.
 --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
 --dropbox-client-id string OAuth Client Id
 --dropbox-client-secret string OAuth Client Secret
---dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
 --dropbox-impersonate string Impersonate this user when using a business account
 --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
 --dropbox-shared-files Instructs rclone to work on individual shared files
@@ -466,11 +497,11 @@ Backend only flags. These can be set in the config file also.
 --dropbox-token-url string Token server url
 --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
 --fichier-cdn Set if you wish to use CDN download links
---fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+--fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
 --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
 --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
 --fichier-shared-folder string If you want to download a shared folder, add this parameter
---filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+--filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
 --filefabric-permanent-token string Permanent Authentication Token
 --filefabric-root-folder-id string ID of the root folder
 --filefabric-token string Session Token
@@ -484,7 +515,7 @@ Backend only flags. These can be set in the config file also.
 --ftp-disable-mlsd Disable using MLSD even if server advertises support
 --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
 --ftp-disable-utf8 Disable using UTF-8 even if server advertises support
---ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+--ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
 --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
 --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
 --ftp-host string FTP host to connect to
@@ -506,7 +537,7 @@ Backend only flags. These can be set in the config file also.
 --gcs-client-secret string OAuth Client Secret
 --gcs-decompress If set this will decompress gzip encoded objects
 --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
---gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
 --gcs-endpoint string Endpoint for the service
 --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string Location for the newly created buckets
@@ -519,9 +550,13 @@ Backend only flags. These can be set in the config file also.
 --gcs-token-url string Token server url
 --gcs-user-project string User project
 --gphotos-auth-url string Auth server URL
+--gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
+--gphotos-batch-size int Max number of files in upload batch
+--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
 --gphotos-client-id string OAuth Client Id
 --gphotos-client-secret string OAuth Client Secret
---gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
 --gphotos-include-archived Also view and download archived media
 --gphotos-read-only Set to make the Google Photos backend read only
 --gphotos-read-size Set to read the size of media items
@@ -533,8 +568,8 @@ Backend only flags. These can be set in the config file also.
 --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
 --hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
 --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
---hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
---hdfs-namenode string Hadoop name node and port
+--hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+--hdfs-namenode CommaSepList Hadoop name nodes and ports
 --hdfs-service-principal-name string Kerberos service principal name for the namenode
 --hdfs-username string Hadoop user name
 --hidrive-auth-url string Auth server URL
@@ -542,7 +577,7 @@ Backend only flags. These can be set in the config file also.
 --hidrive-client-id string OAuth Client Id
 --hidrive-client-secret string OAuth Client Secret
 --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
---hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+--hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
 --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
 --hidrive-root-prefix string The root/parent folder for all paths (default "/")
 --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -555,9 +590,16 @@ Backend only flags. These can be set in the config file also.
 --http-no-head Don't use HEAD requests
 --http-no-slash Set this if the site doesn't end directories with /
 --http-url string URL of HTTP host to connect to
+--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+--imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
+--imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+--imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+--imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
+--imagekit-versions Include old versions in directory listings
 --internetarchive-access-key-id string IAS3 Access Key
 --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
---internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
 --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
 --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
 --internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -565,7 +607,7 @@ Backend only flags. These can be set in the config file also.
 --jottacloud-auth-url string Auth server URL
 --jottacloud-client-id string OAuth Client Id
 --jottacloud-client-secret string OAuth Client Secret
---jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
 --jottacloud-hard-delete Delete files permanently rather than putting them into the trash
 --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
 --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -573,17 +615,18 @@ Backend only flags. These can be set in the config file also.
 --jottacloud-token-url string Token server url
 --jottacloud-trashed-only Only show files that are in the trash
 --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi)
---koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --koofr-endpoint string The Koofr API endpoint to use
 --koofr-mountid string Mount ID of the mount to use
 --koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
 --koofr-provider string Choose your storage provider
 --koofr-setmtime Does the backend support setting modification time (default true)
 --koofr-user string Your user name
+--linkbox-token string Token from https://www.linkbox.to/admin/account
 -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
 --local-case-insensitive Force the filesystem to report itself as case insensitive
 --local-case-sensitive Force the filesystem to report itself as case sensitive
---local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+--local-encoding Encoding The encoding for the backend (default Slash,Dot)
 --local-no-check-updated Don't check to see if the files change during upload
 --local-no-preallocate Disable preallocation of disk space for transferred files
 --local-no-set-modtime Disable setting modtime
@@ -595,7 +638,7 @@ Backend only flags. These can be set in the config file also.
 --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
 --mailru-client-id string OAuth Client Id
 --mailru-client-secret string OAuth Client Secret
---mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+--mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --mailru-pass string Password (obscured)
 --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
 --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -605,7 +648,7 @@ Backend only flags. These can be set in the config file also.
 --mailru-token-url string Token server url
 --mailru-user string User name (usually email)
 --mega-debug Output more debug from Mega
---mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+--mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
 --mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured) --mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers --mega-use-https Use HTTPS for transfers
@ -621,9 +664,10 @@ Backend only flags. These can be set in the config file also.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id --onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret --onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use --onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto") --onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command --onedrive-link-password string Set the password for links created by the link command
@ -644,7 +688,7 @@ Backend only flags. These can be set in the config file also.
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s) --oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata --oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API --oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@ -661,13 +705,13 @@ Backend only flags. These can be set in the config file also.
--oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured) --opendrive-password string Password (obscured)
--opendrive-username string Username --opendrive-username string Username
--pcloud-auth-url string Auth server URL --pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id --pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret --pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured) --pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@ -677,7 +721,7 @@ Backend only flags. These can be set in the config file also.
--pikpak-auth-url string Auth server URL --pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id --pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret --pikpak-client-secret string OAuth Client Secret
--pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured) --pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder --pikpak-root-folder-id string ID of the root folder
@ -689,13 +733,13 @@ Backend only flags. These can be set in the config file also.
--premiumizeme-auth-url string Auth server URL --premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id --premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret --premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url --premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code --protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true) --protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured) --protondrive-password string The password of your proton account (obscured)
@ -704,13 +748,13 @@ Backend only flags. These can be set in the config file also.
--putio-auth-url string Auth server URL --putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id --putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret --putio-client-secret string OAuth Client Secret
--putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob --putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url --putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID --qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3) --qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API --qingstor-endpoint string Enter an endpoint URL to connection QingStor API
--qingstor-env-auth Get QingStor credentials from runtime --qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password) --qingstor-secret-access-key string QingStor Secret Access Key (password)
@ -719,7 +763,7 @@ Backend only flags. These can be set in the config file also.
--qingstor-zone string Zone to connect to --qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account --quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account --quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@ -734,7 +778,7 @@ Backend only flags. These can be set in the config file also.
--s3-disable-checksum Don't store MD5 checksum with object metadata --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends --s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads --s3-download-url string Custom endpoint for downloads
--s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API --s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars) --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true) --s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@ -768,14 +812,16 @@ Backend only flags. These can be set in the config file also.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication --s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off) --s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist --seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library --seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured) --seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured) --seafile-pass string Password (obscured)
@ -785,6 +831,7 @@ Backend only flags. These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@ -819,7 +866,7 @@ Backend only flags. These can be set in the config file also.
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id --sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret --sharefile-client-secret string OAuth Client Secret
--sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls --sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder --sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob --sharefile-token string OAuth Access Token as a JSON blob
@ -827,12 +874,12 @@ Backend only flags. These can be set in the config file also.
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured) --sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent") --sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks --skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP") --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
--smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true) --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to --smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s) --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@ -850,7 +897,7 @@ Backend only flags. These can be set in the config file also.
--sugarsync-authorization string Sugarsync authorization --sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id --sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true --sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key --sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token --sugarsync-refresh-token string Sugarsync refresh token
@ -864,7 +911,7 @@ Backend only flags. These can be set in the config file also.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form --swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD) --swift-key string API key or password (OS_PASSWORD)
@ -886,7 +933,7 @@ Backend only flags. These can be set in the config file also.
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams --union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token --uptobox-access-token string Your access token
--uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private --uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token --webdav-bearer-token-command string Command to run to get a bearer token
@ -901,14 +948,14 @@ Backend only flags. These can be set in the config file also.
--yandex-auth-url string Auth server URL --yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id --yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret --yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob --yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url --yandex-token-url string Token server url
--zoho-auth-url string Auth server URL --zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id --zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret --zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to --zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob --zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url --zoho-token-url string Token server url
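The encoding options above are only renamed in type (Encoding rather than MultiEncoder); the accepted values and defaults are unchanged, so existing flags and environment variables keep working. A sketch of both ways of setting one, with hypothetical remote names:

```sh
# Override the local backend's filename encoding for a single run
rclone copy /data backup: --local-encoding "Slash,Dot"

# The same style of option set via its environment variable instead
RCLONE_MAILRU_ENCODING="Slash,InvalidUtf8,Dot" rclone lsd mailru:
```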


@@ -443,7 +443,7 @@ Properties:

- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
- Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
    - "Asterisk,Ctl,Dot,Slash"


@@ -696,7 +696,7 @@ Properties:

- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}


@@ -374,9 +374,93 @@ Properties:

- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot

#### --gphotos-batch-mode

Upload file batching sync|async|off.

This sets the batch mode used by rclone.

This has 3 possible values

- off - no batching
- sync - batch uploads and check completion (default)
- async - batch upload and don't check completion

Rclone will close any outstanding batches when it exits which may cause
a delay on quit.

Properties:

- Config: batch_mode
- Env Var: RCLONE_GPHOTOS_BATCH_MODE
- Type: string
- Default: "sync"

#### --gphotos-batch-size

Max number of files in upload batch.

This sets the batch size of files to upload. It has to be less than 50.

By default this is 0 which means rclone will calculate the batch size
depending on the setting of batch_mode.

- batch_mode: async - default batch_size is 50
- batch_mode: sync - default batch_size is the same as --transfers
- batch_mode: off - not in use

Rclone will close any outstanding batches when it exits which may cause
a delay on quit.

Setting this is a great idea if you are uploading lots of small files
as it will make them upload a lot quicker. You can use --transfers 32 to
maximise throughput.

Properties:

- Config: batch_size
- Env Var: RCLONE_GPHOTOS_BATCH_SIZE
- Type: int
- Default: 0

#### --gphotos-batch-timeout

Max time to allow an idle upload batch before uploading.

If an upload batch is idle for more than this long then it will be
uploaded.

The default for this is 0 which means rclone will choose a sensible
default based on the batch_mode in use.

- batch_mode: async - default batch_timeout is 10s
- batch_mode: sync - default batch_timeout is 1s
- batch_mode: off - not in use

Properties:

- Config: batch_timeout
- Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT
- Type: Duration
- Default: 0s

#### --gphotos-batch-commit-timeout

Max time to wait for a batch to finish committing.

Properties:

- Config: batch_commit_timeout
- Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
- Type: Duration
- Default: 10m0s
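A sketch of how these batch options combine on the command line for many small files (the remote and album names here are hypothetical):

```sh
# Batch small uploads asynchronously with the maximum batch size,
# using more parallel transfers to maximise throughput
rclone copy ~/Pictures gphotos:album/holiday \
    --gphotos-batch-mode async \
    --gphotos-batch-size 50 \
    --transfers 32
```

Note that async mode does not check batch completion, so prefer the default sync mode when you need errors reported per file.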
{{< rem autogenerated options stop >}}

## Limitations


@@ -156,16 +156,16 @@ Here are the Standard options specific to hdfs (Hadoop distributed file system).

#### --hdfs-namenode

Hadoop name nodes and ports.

E.g. "namenode-1:8020,namenode-2:8020,..." to connect to host namenodes at port 8020.

Properties:

- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
- Type: CommaSepList
- Default:

#### --hdfs-username

@@ -229,7 +229,7 @@ Properties:

- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
- Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}
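With namenode now a comma separated list, a highly-available cluster can name both name nodes in one remote. A sketch using `rclone config create` (the remote name, host names and username are hypothetical):

```sh
# Create an hdfs remote that can fail over between two name nodes
rclone config create myhdfs hdfs \
    namenode "namenode-1:8020,namenode-2:8020" \
    username root
```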


@@ -415,7 +415,7 @@ Properties:

- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
- Type: Encoding
- Default: Slash,Dot

{{< rem autogenerated options stop >}}


@@ -212,6 +212,46 @@ Properties:

- Type: bool
- Default: false

## Backend commands

Here are the commands specific to the http backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](/rc/#backend-command).

### set

Set command for updating the config parameters.

    rclone backend set remote: [options] [<arguments>+]

This set command can be used to update the config parameters
for a running http backend.

Usage Examples:

    rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
    rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
    rclone rc backend/command command=set fs=remote: -o url=https://example.com

The option keys are named as they are in the config file.

This rebuilds the connection to the http backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.

It doesn't return anything.

{{< rem autogenerated options stop >}}

## Limitations

View File

@@ -167,6 +167,17 @@ Properties:
 - Type: bool
 - Default: false
+#### --imagekit-upload-tags
+Tags to add to the uploaded files, e.g. "tag1,tag2".
+Properties:
+- Config: upload_tags
+- Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+- Type: string
+- Required: false
 #### --imagekit-encoding
 The encoding for the backend.
@@ -188,11 +199,11 @@ Here are the possible system metadata items for the imagekit backend.
 | Name | Help | Type | Example | Read Only |
 |------|------|------|---------|-----------|
-| aws-tags | AI generated tags by AWS Rekognition associated with the file | string | tag1,tag2 | **Y** |
+| aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
 | btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
 | custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
 | file-type | Type of the file | string | image | **Y** |
-| google-tags | AI generated tags by Google Cloud Vision associated with the file | string | tag1,tag2 | **Y** |
+| google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
 | has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
 | height | Height of the image or video in pixels | int | | **Y** |
 | is-private-file | Whether the file is private or not | bool | | **Y** |
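A minimal sketch of the new `upload_tags` option in rclone.conf. The remote name and tag values are made up, and the backend's required authentication options are omitted for brevity.

```ini
# rclone.conf sketch - hypothetical remote name and tags;
# authentication options for the imagekit backend are omitted
[imagekit]
type = imagekit
# tags applied to every uploaded file
upload_tags = archive,rclone
```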

View File

@@ -260,7 +260,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
 ### Metadata

View File

@@ -444,9 +444,24 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_JOTTACLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+### Metadata
+Jottacloud has limited support for metadata, currently an extended set of timestamps.
+Here are the possible system metadata items for the jottacloud backend.
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| content-type | MIME type, also known as media type | string | text/plain | **Y** |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+See the [metadata](/docs/#metadata) docs for more info.
 {{< rem autogenerated options stop >}}
 ## Limitations

View File

@@ -171,34 +171,6 @@ Properties:
 - Type: string
 - Required: true
-#### --koofr-password
-Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
-Properties:
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: digistorage
-- Type: string
-- Required: true
-#### --koofr-password
-Your password for rclone (generate one at your service's settings page).
-**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
-Properties:
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: other
-- Type: string
-- Required: true
 ### Advanced options
 Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -239,7 +211,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_KOOFR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -451,6 +451,11 @@ time we:
 - Only checksum the size that stat gave
 - Don't update the stat info for the file
+**NB** do not use this flag on a Windows Volume Shadow (VSS). For some
+unknown reason, files in a VSS sometimes show different sizes from the
+directory listing (where the initial stat value comes from on Windows)
+and when stat is called on them directly. Other copy tools always use
+the direct stat value and setting this flag will disable that.
 Properties:
@@ -561,7 +566,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_LOCAL_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,Dot
 ### Metadata

View File

@@ -409,7 +409,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -279,7 +279,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -620,6 +620,43 @@ Properties:
 - Type: bool
 - Default: false
+#### --onedrive-delta
+If set, rclone will use delta listing to implement recursive listings.
+If this flag is set the onedrive backend will advertise `ListR`
+support for recursive listings.
+Setting this flag speeds up these things greatly:
+    rclone lsf -R onedrive:
+    rclone size onedrive:
+    rclone rc vfs/refresh recursive=true
+**However** the delta listing API **only** works at the root of the
+drive. If you use it not at the root then it recurses from the root
+and discards all the data that is not under the directory you asked
+for. So it will be correct but may not be very efficient.
+This is why this flag is not set as the default.
+As a rule of thumb, if nearly all of your data is under rclone's root
+directory (the `root/directory` in `onedrive:root/directory`) then
+using this flag will be a big performance win. If your data is
+mostly not under the root then using this flag will be a big
+performance loss.
+It is recommended if you are mounting your onedrive at the root
+(or near the root when using crypt) and using rclone `rc vfs/refresh`.
+Properties:
+- Config: delta
+- Env Var: RCLONE_ONEDRIVE_DELTA
+- Type: bool
+- Default: false
 #### --onedrive-encoding
 The encoding for the backend.
@@ -630,7 +667,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_ONEDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -145,7 +145,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
 #### --opendrive-chunk-size

View File

@@ -552,7 +552,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_OOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,InvalidUtf8,Dot
 #### --oos-leave-parts-on-error

View File

@@ -225,7 +225,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
 #### --pcloud-root-folder-id

View File

@@ -237,7 +237,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
 ## Backend commands

View File

@@ -199,7 +199,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -246,7 +246,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
 #### --protondrive-original-file-size

View File

@@ -196,7 +196,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_PUTIO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -307,7 +307,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,Ctl,InvalidUtf8
 {{< rem autogenerated options stop >}}

View File

@@ -189,7 +189,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_QUATRIX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
 #### --quatrix-effective-upload-time

View File

@@ -1173,6 +1173,56 @@ See the [about](/commands/rclone_about/) command for more information on the above.
 **Authentication is required for this call.**
+### operations/check: check the source and destination are the same {#operations-check}
+Checks the files in the source and destination match. It compares
+sizes and hashes and logs a report of files that don't
+match. It doesn't alter the source or destination.
+This takes the following parameters:
+- srcFs - a remote name string e.g. "drive:" for the source, "/" for local filesystem
+- dstFs - a remote name string e.g. "drive2:" for the destination, "/" for local filesystem
+- download - check by downloading rather than with hash
+- checkFileHash - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileFs - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileRemote - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- oneWay - check one way only, source files must exist on remote
+- combined - make a combined report of changes (default false)
+- missingOnSrc - report all files missing from the source (default true)
+- missingOnDst - report all files missing from the destination (default true)
+- match - report all matching files (default false)
+- differ - report all non-matching files (default true)
+- error - report all files with errors (hashing or reading) (default true)
+If you supply the download flag, it will download the data from
+both remotes and check them against each other on the fly. This can
+be useful for remotes that don't support hashes or if you really want
+to check all the data.
+If you supply the size-only global flag, it will only compare the sizes not
+the hashes as well. Use this for a quick check.
+If you supply the checkFileHash option with a valid hash name, the
+checkFileFs:checkFileRemote must point to a text file in the SUM
+format. This treats the checksum file as the source and dstFs as the
+destination. Note that srcFs is not used and should not be supplied in
+this case.
+Returns:
+- success - true if no error, false otherwise
+- status - textual summary of check, OK or text string
+- hashType - hash used in check, may be missing
+- combined - array of strings of combined report of changes
+- missingOnSrc - array of strings of all files missing from the source
+- missingOnDst - array of strings of all files missing from the destination
+- match - array of strings of all matching files
+- differ - array of strings of all non-matching files
+- error - array of strings of all files with errors (hashing or reading)
+**Authentication is required for this call.**
 ### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}
 This takes the following parameters:
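The new `operations/check` rc call documented above takes a flat JSON object of parameters. As a minimal sketch, the helper below (a hypothetical name, not part of rclone) assembles that request body; actually sending it requires a running `rclone rcd` instance, which is deliberately left out here.

```python
import json

def check_request(src_fs, dst_fs, download=False, one_way=False):
    """Build the parameter dict for an rc operations/check call."""
    params = {"srcFs": src_fs, "dstFs": dst_fs}
    if download:
        # compare file contents by downloading rather than by hash
        params["download"] = True
    if one_way:
        # only require that source files exist on the destination
        params["oneWay"] = True
    return params

# Serialise the body that would be POSTed to /operations/check
print(json.dumps(check_request("drive:", "drive2:", one_way=True)))
# → {"srcFs": "drive:", "dstFs": "drive2:", "oneWay": true}
```

The same parameters can be passed on the command line with `rclone rc operations/check srcFs=drive: dstFs=drive2:`.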

File diff suppressed because it is too large

View File

@@ -386,7 +386,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SEAFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
 {{< rem autogenerated options stop >}}

View File

@@ -1016,6 +1016,32 @@ Properties:
 - Type: string
 - Required: false
+#### --sftp-copy-is-hardlink
+Set to enable server side copies using hardlinks.
+The SFTP protocol does not define a copy command so normally server
+side copies are not allowed with the sftp backend.
+However the SFTP protocol does support hardlinking, and if you enable
+this flag then the sftp backend will support server side copies. These
+will be implemented by doing a hardlink from the source to the
+destination.
+Not all sftp servers support this.
+Note that hardlinking two files together will use no additional space
+as the source and the destination will be the same file.
+This feature may be useful for backups made with --copy-dest.
+Properties:
+- Config: copy_is_hardlink
+- Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+- Type: bool
+- Default: false
 {{< rem autogenerated options stop >}}
 ## Limitations
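A sketch of enabling the new hardlink-based server side copies in rclone.conf. The remote name, host, and user are placeholders for the example.

```ini
# rclone.conf sketch - hypothetical remote name, host, and user
[sftp-backup]
type = sftp
host = backup.example.com
user = rclone
# server side "copies" are implemented as hardlinks;
# the sftp server must support hardlinking for this to work
copy_is_hardlink = true
```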

View File

@@ -300,7 +300,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SHAREFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -191,7 +191,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SIA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -245,7 +245,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SMB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -269,7 +269,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SUGARSYNC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -584,7 +584,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_SWIFT_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,InvalidUtf8
 {{< rem autogenerated options stop >}}

View File

@@ -143,7 +143,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -151,8 +151,8 @@ Properties:
 - Sharepoint Online, authenticated by Microsoft account
 - "sharepoint-ntlm"
 - Sharepoint with NTLM authentication, usually self-hosted or on-premises
-- "rclone",
+- "rclone"
-- rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol,
+- rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
 - "other"
 - Other site/service or software

View File

@@ -206,7 +206,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_YANDEX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Slash,Del,Ctl,InvalidUtf8,Dot
 {{< rem autogenerated options stop >}}

View File

@@ -234,7 +234,7 @@ Properties:
 - Config: encoding
 - Env Var: RCLONE_ZOHO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
 - Default: Del,Ctl,InvalidUtf8
 {{< rem autogenerated options stop >}}

8079
rclone.1 generated

File diff suppressed because it is too large